Uncertainty principle

In quantum mechanics, the uncertainty principle (also known as Heisenberg's uncertainty principle) is any of a variety of mathematical inequalities[1] asserting a fundamental limit to the precision with which the values of certain pairs of physical quantities of a particle, such as position and momentum, can be predicted from initial conditions. Introduced first in 1927 by the German physicist Werner Heisenberg, it states that the more precisely the position of some particle is determined, the less precisely its momentum can be known, and vice versa.[2] The formal inequality relating the standard deviation of position σx and the standard deviation of momentum σp was derived by Earle Hesse Kennard[3] later that year and by Hermann Weyl[4] in 1928: σx σp ≥ ħ/2, where ħ is the reduced Planck constant, h/(2π).

Historically, the uncertainty principle has been confused[5][6] with a related effect in physics, called the observer effect, which notes that measurements of certain systems cannot be made without affecting the systems, that is, without changing something in a system. Heisenberg utilized such an observer effect at the quantum level (see below) as a physical "explanation" of quantum uncertainty.[7] It has since become clearer, however, that the uncertainty principle is inherent in the properties of all wave-like systems,[8] and that it arises in quantum mechanics simply due to the matter wave nature of all quantum objects. Thus, the uncertainty principle actually states a fundamental property of quantum systems and is not a statement about the observational success of current technology.[9] It must be emphasized that measurement does not mean only a process in which a physicist-observer takes part, but rather any interaction between classical and quantum objects regardless of any observer.[10][note 1]

Since the uncertainty principle is such a basic result in quantum mechanics, typical experiments in quantum mechanics routinely observe aspects of it. Certain experiments, however, may deliberately test a particular form of the uncertainty principle as part of their main research program. These include, for example, tests of number–phase uncertainty relations in superconducting[12] or quantum optics[13] systems. Applications dependent on the uncertainty principle for their operation include extremely low-noise technology such as that required in gravitational wave interferometers.[14]

Figure: The evolution of an initially very localized Gaussian wave function of a free particle in two-dimensional space, with color and intensity indicating phase and amplitude. The spreading of the wave function in all directions shows that the initial momentum has a spread of values that is unmodified in time, while the spread in position increases in time; as a result, the uncertainty Δx Δp increases in time.

Figure: The superposition of several plane waves to form a wave packet. The wave packet becomes increasingly localized with the addition of many waves. The Fourier transform is a mathematical operation that separates a wave packet into its individual plane waves. The waves shown here are real for illustrative purposes only, whereas in quantum mechanics the wave function is generally complex.

The uncertainty principle is not readily apparent on the macroscopic scales of everyday experience,[15] so it is helpful to demonstrate how it applies to more easily understood physical situations. Two alternative frameworks for quantum physics offer different explanations for the uncertainty principle. The wave mechanics picture of the uncertainty principle is more visually intuitive, but the more abstract matrix mechanics picture formulates it in a way that generalizes more easily.
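As a concrete illustration of the wave-packet picture in the caption above, the following NumPy sketch superposes plane waves with nearby wavenumbers and prints the spatial spread of the resulting packet; the grid and wavenumber values are illustrative choices, not taken from the article.

```python
import numpy as np

# Illustrative sketch (not from the article): superposing more plane waves with
# nearby wavenumbers yields a more localized packet, i.e. a narrower spread in
# position requires mixing a broader range of momenta. All parameters are arbitrary.
x = np.linspace(-50, 50, 4001)

def packet_width(n_waves, k0=5.0, dk=0.1):
    """Superpose n_waves plane waves centred on k0; return the std dev of |psi|^2."""
    ks = k0 + dk * np.arange(-(n_waves // 2), n_waves // 2 + 1)
    psi = np.sum([np.exp(1j * k * x) for k in ks], axis=0)
    prob = np.abs(psi)**2
    prob /= prob.sum()
    mean = (x * prob).sum()
    return np.sqrt(((x - mean)**2 * prob).sum())

for n in (1, 5, 21, 81):
    print(n, packet_width(n))   # spatial spread shrinks as more momenta are mixed in
```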
Mathematically, in wave mechanics, the uncertainty relation between position and momentum arises because the expressions of the wavefunction in the two corresponding orthonormal bases in Hilbert space are Fourier transforms of one another (i.e., position and momentum are conjugate variables). A nonzero function and its Fourier transform cannot both be sharply localized. A similar tradeoff between the variances of Fourier conjugates arises in all systems underlain by Fourier analysis, for example in sound waves: A pure tone is a sharp spike at a single frequency, while its Fourier transform gives the shape of the sound wave in the time domain, which is a completely delocalized sine wave. In quantum mechanics, the two key points are that the position of the particle takes the form of a matter wave, and momentum is its Fourier conjugate, assured by the de Broglie relation p = ħk, where k is the wavenumber. In matrix mechanics, the mathematical formulation of quantum mechanics, any pair of non-commuting self-adjoint operators representing observables are subject to similar uncertainty limits. An eigenstate of an observable represents the state of the wavefunction for a certain measurement value (the eigenvalue). For example, if a measurement of an observable A is performed, then the system is in a particular eigenstate Ψ of that observable. However, the particular eigenstate of the observable A need not be an eigenstate of another observable B: If so, then it does not have a unique associated measurement for it, as the system is not in an eigenstate of that observable.[16] Wave mechanics interpretation[edit] (Ref [10]) Propagation of de Broglie waves in 1d—real part of the complex amplitude is blue, imaginary part is green. The probability (shown as the colour opacity) of finding the particle at a given point x is spread out like a waveform, there is no definite position of the particle. As the amplitude increases above zero the curvature reverses sign, so the amplitude begins to decrease again, and vice versa—the result is an alternating amplitude: a wave. According to the de Broglie hypothesis, every object in the universe is a wave, i.e., a situation which gives rise to this phenomenon. The position of the particle is described by a wave function . The time-independent wave function of a single-moded plane wave of wavenumber k0 or momentum p0 is The Born rule states that this should be interpreted as a probability density amplitude function in the sense that the probability of finding the particle between a and b is In the case of the single-moded plane wave, is a uniform distribution. In other words, the particle position is extremely uncertain in the sense that it could be essentially anywhere along the wave packet. On the other hand, consider a wave function that is a sum of many waves, which we may write this as where An represents the relative contribution of the mode pn to the overall total. The figures to the right show how with the addition of many plane waves, the wave packet can become more localized. We may take this a step further to the continuum limit, where the wave function is an integral over all possible modes with representing the amplitude of these modes and is called the wave function in momentum space. In mathematical terms, we say that is the Fourier transform of and that x and p are conjugate variables. Adding together all of these plane waves comes at a cost, namely the momentum has become less precise, having become a mixture of waves of many different momenta. 
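A rough numerical counterpart of this Fourier-conjugate tradeoff (a sketch with ħ = 1 and arbitrary grid parameters, not part of the article): a Gaussian wave function of width σ in position space has a momentum-space transform whose spread scales like 1/σ, and the product of the two spreads stays near ħ/2.

```python
import numpy as np

# Sketch (hbar = 1, arbitrary parameters): the narrower psi(x) is, the wider its
# Fourier transform phi(k), with sigma_x * sigma_p staying near the Kennard bound.
hbar = 1.0
N, L = 4096, 400.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=x[1] - x[0])   # wavenumbers; p = hbar * k

def spread(axis, prob):
    prob = prob / prob.sum()
    mean = (axis * prob).sum()
    return np.sqrt(((axis - mean)**2 * prob).sum())

for sigma in (0.5, 2.0, 8.0):                      # width of the position-space Gaussian
    psi = np.exp(-x**2 / (4 * sigma**2))
    phi = np.fft.fft(psi)                          # momentum-space amplitudes (up to phase)
    sigma_x = spread(x, np.abs(psi)**2)
    sigma_p = spread(hbar * k, np.abs(phi)**2)
    print(sigma, sigma_x, sigma_p, sigma_x * sigma_p)   # product stays close to hbar/2 = 0.5
```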
One way to quantify the precision of the position and momentum is the standard deviation σ. Since is a probability density function for position, we calculate its standard deviation. The precision of the position is improved, i.e. reduced σx, by using many plane waves, thereby weakening the precision of the momentum, i.e. increased σp. Another way of stating this is that σx and σp have an inverse relationship or are at least bounded from below. This is the uncertainty principle, the exact limit of which is the Kennard bound. Click the show button below to see a semi-formal derivation of the Kennard inequality using wave mechanics. Matrix mechanics interpretation[edit] (Ref [10]) In matrix mechanics, observables such as position and momentum are represented by self-adjoint operators. When considering pairs of observables, an important quantity is the commutator. For a pair of operators  and , one defines their commutator as In the case of position and momentum, the commutator is the canonical commutation relation The physical meaning of the non-commutativity can be understood by considering the effect of the commutator on position and momentum eigenstates. Let be a right eigenstate of position with a constant eigenvalue x0. By definition, this means that Applying the commutator to yields where Î is the identity operator. Suppose, for the sake of proof by contradiction, that is also a right eigenstate of momentum, with constant eigenvalue p0. If this were true, then one could write On the other hand, the above canonical commutation relation requires that When a state is measured, it is projected onto an eigenstate in the basis of the relevant observable. For example, if a particle's position is measured, then the state amounts to a position eigenstate. This means that the state is not a momentum eigenstate, however, but rather it can be represented as a sum of multiple momentum basis eigenstates. In other words, the momentum must be less precise. This precision may be quantified by the standard deviations, As in the wave mechanics interpretation above, one sees a tradeoff between the respective precisions of the two, quantified by the uncertainty principle. Robertson–Schrödinger uncertainty relations[edit] The most common general form of the uncertainty principle is the Robertson uncertainty relation.[17] For an arbitrary Hermitian operator we can associate a standard deviation where the brackets indicate an expectation value. For a pair of operators and , we may define their commutator as In this notation, the Robertson uncertainty relation is given by The Robertson uncertainty relation immediately follows from a slightly stronger inequality, the Schrödinger uncertainty relation,[18] where we have introduced the anticommutator, Since the Robertson and Schrödinger relations are for general operators, the relations can be applied to any two observables to obtain specific uncertainty relations. A few of the most common relations found in the literature are given below. • For position and linear momentum, the canonical commutation relation implies the Kennard inequality from above: where i, j, k are distinct, and Ji denotes angular momentum along the xi axis. This relation implies that unless all three components vanish together, only a single component of a system's angular momentum can be defined with arbitrary precision, normally the component parallel to an external (magnetic or electric) field. 
Moreover, for , a choice , , in angular momentum multiplets, ψ = |j, m〉, bounds the Casimir invariant (angular momentum squared, ) from below and thus yields useful constraints such as j(j + 1) ≥ m(m + 1), and hence j ≥ m, among others. • In non-relativistic mechanics, time is privileged as an independent variable. Nevertheless, in 1945, L. I. Mandelshtam and I. E. Tamm derived a non-relativistic time–energy uncertainty relation, as follows.[26][27] For a quantum system in a non-stationary state ψ and an observable B represented by a self-adjoint operator , the following formula holds: where σE is the standard deviation of the energy operator (Hamiltonian) in the state ψ, σB stands for the standard deviation of B. Although the second factor in the left-hand side has dimension of time, it is different from the time parameter that enters the Schrödinger equation. It is a lifetime of the state ψ with respect to the observable B: In other words, this is the time intervalt) after which the expectation value changes appreciably. An informal, heuristic meaning of the principle is the following: A state that only exists for a short time cannot have a definite energy. To have a definite energy, the frequency of the state must be defined accurately, and this requires the state to hang around for many cycles, the reciprocal of the required accuracy. For example, in spectroscopy, excited states have a finite lifetime. By the time–energy uncertainty principle, they do not have a definite energy, and, each time they decay, the energy they release is slightly different. The average energy of the outgoing photon has a peak at the theoretical energy of the state, but the distribution has a finite width called the natural linewidth. Fast-decaying states have a broad linewidth, while slow-decaying states have a narrow linewidth.[28] The same linewidth effect also makes it difficult to specify the rest mass of unstable, fast-decaying particles in particle physics. The faster the particle decays (the shorter its lifetime), the less certain is its mass (the larger the particle's width). A counterexample[edit] Suppose we consider a quantum particle on a ring, where the wave function depends on an angular variable , which we may take to lie in the interval . Define "position" and "momentum" operators and by where we impose periodic boundary conditions on . Note that the definition of depends on our choice to have range from 0 to . These operators satisfy the usual commutation relations for position and momentum operators, .[31] Now let be any of the eigenstates of , which are given by . Note that these states are normalizable, unlike the eigenstates of the momentum operator on the line. Note also that the operator is bounded, since ranges over a bounded interval. Thus, in the state , the uncertainty of is zero and the uncertainty of is finite, so that Although this result appears to violate the Robertson uncertainty principle, the paradox is resolved when we note that is not in the domain of the operator , since multiplication by disrupts the periodic boundary conditions imposed on .[22] Thus, the derivation of the Robertson relation, which requires and to be defined, does not apply. (These also furnish an example of operators satisfying the canonical commutation relations but not the Weyl relations.[32]) For the usual position and momentum operators and on the real line, no such counterexamples can occur. 
As long as and are defined in the state , the Heisenberg uncertainty principle holds, even if fails to be in the domain of or of .[33] (Refs [10][19]) Quantum harmonic oscillator stationary states[edit] Consider a one-dimensional quantum harmonic oscillator (QHO). It is possible to express the position and momentum operators in terms of the creation and annihilation operators: Using the standard rules for creation and annihilation operators on the eigenstates of the QHO, the variances may be computed directly, The product of these standard deviations is then In particular, the above Kennard bound[3] is saturated for the ground state n=0, for which the probability density is just the normal distribution. Quantum harmonic oscillator with Gaussian initial condition[edit] Position (blue) and momentum (red) probability densities for an initially Gaussian distribution. From top to bottom, the animations show the cases Ω=ω, Ω=2ω, and Ω=ω/2. Note the tradeoff between the widths of the distributions. In a quantum harmonic oscillator of characteristic angular frequency ω, place a state that is offset from the bottom of the potential by some displacement x0 as where Ω describes the width of the initial state but need not be the same as ω. Through integration over the propagator, we can solve for the full time-dependent solution. After many cancelations, the probability densities reduce to where we have used the notation to denote a normal distribution of mean μ and variance σ2. Copying the variances above and applying trigonometric identities, we can write the product of the standard deviations as From the relations we can conclude the following: (the right most equality holds only when Ω = ω) . Coherent states[edit] A coherent state is a right eigenstate of the annihilation operator, which may be represented in terms of Fock states as In the picture where the coherent state is a massive particle in a QHO, the position and momentum operators may be expressed in terms of the annihilation operators in the same formulas above and used to calculate the variances, Therefore, every coherent state saturates the Kennard bound with position and momentum each contributing an amount in a "balanced" way. Moreover, every squeezed coherent state also saturates the Kennard bound although the individual contributions of position and momentum need not be balanced in general. Particle in a box[edit] Consider a particle in a one-dimensional box of length . The eigenfunctions in position and momentum space are where and we have used the de Broglie relation . The variances of and can be calculated explicitly: The product of the standard deviations is therefore For all , the quantity is greater than 1, so the uncertainty principle is never violated. For numerical concreteness, the smallest value occurs when , in which case Constant momentum[edit] Position space probability density of an initially Gaussian state moving at minimally uncertain, constant momentum in free space Assume a particle initially has a momentum space wave function described by a normal distribution around some constant momentum p0 according to where we have introduced a reference scale , with describing the width of the distribution−−cf. nondimensionalization. If the state is allowed to evolve in free space, then the time-dependent momentum and position space wave functions are Since and this can be interpreted as a particle moving along with constant momentum at arbitrarily high precision. 
On the other hand, the standard deviation of the position is such that the uncertainty product can only increase with time as Additional uncertainty relations[edit] Mixed states[edit] The Robertson–Schrödinger uncertainty relation may be generalized in a straightforward way to describe mixed states.[34] The Maccone–Pati uncertainty relations[edit] The Robertson–Schrödinger uncertainty relation can be trivial if the state of the system is chosen to be eigenstate of one of the observable. The stronger uncertainty relations proved by Maccone and Pati give non-trivial bounds on the sum of the variances for two incompatible observables.[35] For two non-commuting observables and the first stronger uncertainty relation is given by where , , is a normalized vector that is orthogonal to the state of the system and one should choose the sign of to make this real quantity a positive number. The second stronger uncertainty relation is given by where is a state orthogonal to . The form of implies that the right-hand side of the new uncertainty relation is nonzero unless is an eigenstate of . One may note that can be an eigenstate of without being an eigenstate of either or . However, when is an eigenstate of one of the two observables the Heisenberg–Schrödinger uncertainty relation becomes trivial. But the lower bound in the new relation is nonzero unless is an eigenstate of both. Phase space[edit] In the phase space formulation of quantum mechanics, the Robertson–Schrödinger relation follows from a positivity condition on a real star-square function. Given a Wigner function with star product ★ and a function f, the following is generally true:[36] Choosing , we arrive at Since this positivity condition is true for all a, b, and c, it follows that all the eigenvalues of the matrix are positive. The positive eigenvalues then imply a corresponding positivity condition on the determinant: or, explicitly, after algebraic manipulation, Systematic and statistical errors[edit] The inequalities above focus on the statistical imprecision of observables as quantified by the standard deviation . Heisenberg's original version, however, was dealing with the systematic error, a disturbance of the quantum system produced by the measuring apparatus, i.e., an observer effect. If we let represent the error (i.e., inaccuracy) of a measurement of an observable A and the disturbance produced on a subsequent measurement of the conjugate variable B by the former measurement of A, then the inequality proposed by Ozawa[6] — encompassing both systematic and statistical errors — holds: Heisenberg's uncertainty principle, as originally described in the 1927 formulation, mentions only the first term of Ozawa inequality, regarding the systematic error. Using the notation above to describe the error/disturbance effect of sequential measurements (first A, then B), it could be written as The formal derivation of the Heisenberg relation is possible but far from intuitive. It was not proposed by Heisenberg, but formulated in a mathematically consistent way only in recent years.[37][38] Also, it must be stressed that the Heisenberg formulation is not taking into account the intrinsic statistical errors and . There is increasing experimental evidence[8][39][40][41] that the total quantum uncertainty cannot be described by the Heisenberg term alone, but requires the presence of all the three terms of the Ozawa inequality. 
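For reference, the displayed inequalities did not survive in this copy. In the standard notation of the literature, the Robertson relation and Ozawa's error–disturbance relation discussed in this section are usually written as follows (a reconstruction of the textbook forms, not the article's own rendering):

$$\sigma_A\,\sigma_B \;\geq\; \tfrac{1}{2}\left|\langle[\hat{A},\hat{B}]\rangle\right|, \qquad \varepsilon_A\,\eta_B + \varepsilon_A\,\sigma_B + \sigma_A\,\eta_B \;\geq\; \tfrac{1}{2}\left|\langle[\hat{A},\hat{B}]\rangle\right|,$$

where εA denotes the error of the A measurement and ηB the disturbance it produces on B; keeping only the εA ηB term on the left-hand side gives the Heisenberg-type noise–disturbance statement referred to in the text.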
Using the same formalism,[1] it is also possible to introduce the other kind of physical situation, often confused with the previous one, namely the case of simultaneous measurements (A and B at the same time): The two simultaneous measurements on A and B are necessarily[42] unsharp or weak. It is also possible to derive an uncertainty relation that, as the Ozawa's one, combines both the statistical and systematic error components, but keeps a form very close to the Heisenberg original inequality. By adding Robertson[1] and Ozawa relations we obtain The four terms can be written as: as the inaccuracy in the measured values of the variable A and as the resulting fluctuation in the conjugate variable B, Fujikawa[43] established an uncertainty relation similar to the Heisenberg original one, but valid both for systematic and statistical errors: Quantum entropic uncertainty principle[edit] For many distributions, the standard deviation is not a particularly natural way of quantifying the structure. For example, uncertainty relations in which one of the observables is an angle has little physical meaning for fluctuations larger than one period.[24][44][45][46] Other examples include highly bimodal distributions, or unimodal distributions with divergent variance. A solution that overcomes these issues is an uncertainty based on entropic uncertainty instead of the product of variances. While formulating the many-worlds interpretation of quantum mechanics in 1957, Hugh Everett III conjectured a stronger extension of the uncertainty principle based on entropic certainty.[47] This conjecture, also studied by Hirschman[48] and proven in 1975 by Beckner[49] and by Iwo Bialynicki-Birula and Jerzy Mycielski[50] is that, for two normalized, dimensionless Fourier transform pairs f(a) and g(b) where the Shannon information entropies are subject to the following constraint, where the logarithms may be in any base. The probability distribution functions associated with the position wave function ψ(x) and the momentum wave function φ(x) have dimensions of inverse length and momentum respectively, but the entropies may be rendered dimensionless by where x0 and p0 are some arbitrarily chosen length and momentum respectively, which render the arguments of the logarithms dimensionless. Note that the entropies will be functions of these chosen parameters. Due to the Fourier transform relation between the position wave function ψ(x) and the momentum wavefunction φ(p), the above constraint can be written for the corresponding entropies as where h is Planck's constant. Depending on one's choice of the x0 p0 product, the expression may be written in many ways. If x0 p0 is chosen to be h, then If, instead, x0 p0 is chosen to be ħ, then If x0 and p0 are chosen to be unity in whatever system of units are being used, then where h is interpreted as a dimensionless number equal to the value of Planck's constant in the chosen system of units. The quantum entropic uncertainty principle is more restrictive than the Heisenberg uncertainty principle. From the inverse logarithmic Sobolev inequalities[51] (equivalently, from the fact that normal distributions maximize the entropy of all such with a given variance), it readily follows that this entropic uncertainty principle is stronger than the one based on standard deviations, because In other words, the Heisenberg uncertainty principle, is a consequence of the quantum entropic uncertainty principle, but not vice versa. A few remarks on these inequalities. 
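(For reference, in the position–momentum case with natural logarithms, the entropic relation referred to above is commonly written as below; combined with the maximum-entropy property of the Gaussian it reproduces the Kennard bound. This is the standard textbook chain of inequalities, restated here because the displayed formulas are missing from this copy.)

$$H_x + H_p \;\geq\; \ln(e\pi\hbar), \qquad H_x \le \tfrac{1}{2}\ln\!\left(2\pi e\,\sigma_x^2\right),\quad H_p \le \tfrac{1}{2}\ln\!\left(2\pi e\,\sigma_p^2\right),$$
$$\text{so that}\qquad \ln\!\left(2\pi e\,\sigma_x\sigma_p\right) \;\geq\; H_x + H_p \;\geq\; \ln(e\pi\hbar) \;\;\Longrightarrow\;\; \sigma_x\,\sigma_p \;\geq\; \frac{\hbar}{2}.$$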
First, the choice of base e is a matter of popular convention in physics. The logarithm can alternatively be in any base, provided that it be consistent on both sides of the inequality. Second, recall the Shannon entropy has been used, not the quantum von Neumann entropy. Finally, the normal distribution saturates the inequality, and it is the only distribution with this property, because it is the maximum entropy probability distribution among those with fixed variance (cf. here for proof). A measurement apparatus will have a finite resolution set by the discretization of its possible outputs into bins, with the probability of lying within one of the bins given by the Born rule. We will consider the most common experimental situation, in which the bins are of uniform size. Let δx be a measure of the spatial resolution. We take the zeroth bin to be centered near the origin, with possibly some small constant offset c. The probability of lying within the jth interval of width δx is To account for this discretization, we can define the Shannon entropy of the wave function for a given measurement apparatus as Under the above definition, the entropic uncertainty relation is Here we note that δx δp/h is a typical infinitesimal phase space volume used in the calculation of a partition function. The inequality is also strict and not saturated. Efforts to improve this bound are an active area of research. Harmonic analysis[edit] In the context of harmonic analysis, a branch of mathematics, the uncertainty principle implies that one cannot at the same time localize the value of a function and its Fourier transform. To wit, the following inequality holds, Further mathematical uncertainty inequalities, including the above entropic uncertainty, hold between a function f and its Fourier transform ƒ̂:[52][53][54] Signal processing [edit] In the context of signal processing, and in particular time–frequency analysis, uncertainty principles are referred to as the Gabor limit, after Dennis Gabor, or sometimes the Heisenberg–Gabor limit. The basic result, which follows from "Benedicks's theorem", below, is that a function cannot be both time limited and band limited (a function and its Fourier transform cannot both have bounded domain)—see bandlimited versus timelimited. Thus where and are the standard deviations of the time and frequency estimates respectively [55]. Stated alternatively, "One cannot simultaneously sharply localize a signal (function f ) in both the time domain and frequency domain ( ƒ̂, its Fourier transform)". When applied to filters, the result implies that one cannot achieve high temporal resolution and frequency resolution at the same time; a concrete example are the resolution issues of the short-time Fourier transform—if one uses a wide window, one achieves good frequency resolution at the cost of temporal resolution, while a narrow window has the opposite trade-off. Alternate theorems give more precise quantitative results, and, in time–frequency analysis, rather than interpreting the (1-dimensional) time and frequency domains separately, one instead interprets the limit as a lower limit on the support of a function in the (2-dimensional) time–frequency plane. In practice, the Gabor limit limits the simultaneous time–frequency resolution one can achieve without interference; it is possible to achieve higher resolution, but at the cost of different components of the signal interfering with each other. 
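The Gabor limit discussed above, σt σf ≥ 1/(4π) for ordinary (non-angular) frequency, can be checked numerically. The sketch below uses an arbitrary Gaussian pulse, which is expected to approximately saturate the bound; the grid parameters are illustrative choices, not taken from the article.

```python
import numpy as np

# Numerical check (illustrative parameters) of the Gabor limit sigma_t*sigma_f >= 1/(4*pi);
# a Gaussian pulse should come close to saturating it.
N, T = 2**14, 200.0
t = np.linspace(-T/2, T/2, N, endpoint=False)
dt = t[1] - t[0]

signal = np.exp(-t**2 / 2.0)                      # Gaussian envelope of unit width

def spread(axis, density):
    p = density / density.sum()
    mean = (axis * p).sum()
    return np.sqrt(((axis - mean)**2 * p).sum())

sigma_t = spread(t, np.abs(signal)**2)

spectrum = np.fft.fftshift(np.fft.fft(signal))
freqs = np.fft.fftshift(np.fft.fftfreq(N, d=dt))  # ordinary frequency, not angular
sigma_f = spread(freqs, np.abs(spectrum)**2)

print(sigma_t * sigma_f, 1 / (4 * np.pi))         # both about 0.0796
```

A wider pulse gives a smaller sigma_f and a larger sigma_t, and vice versa, which is the windowing trade-off of the short-time Fourier transform described above.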
DFT-Uncertainty principle[edit] There is an uncertainty principle that uses signal sparsity (or the number of non-zero coefficients).[56] Let be a sequence of N complex numbers and its discrete Fourier transform. Denote by the number of non-zero elements in the time sequence and by the number of non-zero elements in the frequency sequence . Then, Benedicks's theorem[edit] Amrein–Berthier[57] and Benedicks's theorem[58] intuitively says that the set of points where f is non-zero and the set of points where ƒ̂ is non-zero cannot both be small. Specifically, it is impossible for a function f in L2(R) and its Fourier transform ƒ̂ to both be supported on sets of finite Lebesgue measure. A more quantitative version is[59][60] One expects that the factor CeC|S||Σ| may be replaced by CeC(|S||Σ|)1/d, which is only known if either S or Σ is convex. Hardy's uncertainty principle[edit] The mathematician G. H. Hardy formulated the following uncertainty principle:[61] it is not possible for f and ƒ̂ to both be "very rapidly decreasing". Specifically, if f in is such that ( an integer), then, if ab > 1, f = 0, while if ab = 1, then there is a polynomial P of degree N such that This was later improved as follows: if is such that where P is a polynomial of degree (Nd)/2 and A is a real d×d positive definite matrix. This result was stated in Beurling's complete works without proof and proved in Hörmander[62] (the case ) and Bonami, Demange, and Jaming[63] for the general case. Note that Hörmander–Beurling's version implies the case ab > 1 in Hardy's Theorem while the version by Bonami–Demange–Jaming covers the full strength of Hardy's Theorem. A different proof of Beurling's theorem based on Liouville's theorem appeared in ref.[64] A full description of the case ab < 1 as well as the following extension to Schwartz class distributions appears in ref.[65] Theorem. If a tempered distribution is such that for some convenient polynomial P and real positive definite matrix A of type d × d. Werner Heisenberg formulated the uncertainty principle at Niels Bohr's institute in Copenhagen, while working on the mathematical foundations of quantum mechanics.[66] Werner Heisenberg and Niels Bohr In 1925, following pioneering work with Hendrik Kramers, Heisenberg developed matrix mechanics, which replaced the ad hoc old quantum theory with modern quantum mechanics. The central premise was that the classical concept of motion does not fit at the quantum level, as electrons in an atom do not travel on sharply defined orbits. Rather, their motion is smeared out in a strange way: the Fourier transform of its time dependence only involves those frequencies that could be observed in the quantum jumps of their radiation. Heisenberg's paper did not admit any unobservable quantities like the exact position of the electron in an orbit at any time; he only allowed the theorist to talk about the Fourier components of the motion. Since the Fourier components were not defined at the classical frequencies, they could not be used to construct an exact trajectory, so that the formalism could not answer certain overly precise questions about where the electron was or how fast it was going. In March 1926, working in Bohr's institute, Heisenberg realized that the non-commutativity implies the uncertainty principle. This implication provided a clear physical interpretation for the non-commutativity, and it laid the foundation for what became known as the Copenhagen interpretation of quantum mechanics. 
Heisenberg showed that the commutation relation implies an uncertainty, or in Bohr's language a complementarity.[67] Any two variables that do not commute cannot be measured simultaneously—the more precisely one is known, the less precisely the other can be known. Heisenberg wrote: It can be expressed in its simplest form as follows: One can never know with perfect accuracy both of those two important factors which determine the movement of one of the smallest particles—its position and its velocity. It is impossible to determine accurately both the position and the direction and speed of a particle at the same instant.[68] In his celebrated 1927 paper, "Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik" ("On the Perceptual Content of Quantum Theoretical Kinematics and Mechanics"), Heisenberg established this expression as the minimum amount of unavoidable momentum disturbance caused by any position measurement,[2] but he did not give a precise definition for the uncertainties Δx and Δp. Instead, he gave some plausible estimates in each case separately. In his Chicago lecture[69] he refined his principle: Kennard[3] in 1927 first proved the modern inequality: where ħ = h/2π, and σx, σp are the standard deviations of position and momentum. Heisenberg only proved relation (2) for the special case of Gaussian states.[69] Terminology and translation[edit] Throughout the main body of his original 1927 paper, written in German, Heisenberg used the word, "Ungenauigkeit" ("indeterminacy"),[2] to describe the basic theoretical principle. Only in the endnote did he switch to the word, "Unsicherheit" ("uncertainty"). When the English-language version of Heisenberg's textbook, The Physical Principles of the Quantum Theory, was published in 1930, however, the translation "uncertainty" was used, and it became the more commonly used term in the English language thereafter.[70] Heisenberg's microscope[edit] Heisenberg's gamma-ray microscope for locating an electron (shown in blue). The incoming gamma ray (shown in green) is scattered by the electron up into the microscope's aperture angle θ. The scattered gamma-ray is shown in red. Classical optics shows that the electron position can be resolved only up to an uncertainty Δx that depends on θ and the wavelength λ of the incoming light. The principle is quite counter-intuitive, so the early students of quantum theory had to be reassured that naive measurements to violate it were bound always to be unworkable. One way in which Heisenberg originally illustrated the intrinsic impossibility of violating the uncertainty principle is by utilizing the observer effect of an imaginary microscope as a measuring device.[69] He imagines an experimenter trying to measure the position and momentum of an electron by shooting a photon at it.[71]:49–50 Problem 1 – If the photon has a short wavelength, and therefore, a large momentum, the position can be measured accurately. But the photon scatters in a random direction, transferring a large and uncertain amount of momentum to the electron. If the photon has a long wavelength and low momentum, the collision does not disturb the electron's momentum very much, but the scattering will reveal its position only vaguely. 
Problem 2 – If a large aperture is used for the microscope, the electron's location can be well resolved (see Rayleigh criterion); but by the principle of conservation of momentum, the transverse momentum of the incoming photon affects the electron's beamline momentum and hence, the new momentum of the electron resolves poorly. If a small aperture is used, the accuracy of both resolutions is the other way around. The combination of these trade-offs implies that no matter what photon wavelength and aperture size are used, the product of the uncertainty in measured position and measured momentum is greater than or equal to a lower limit, which is (up to a small numerical factor) equal to Planck's constant.[72] Heisenberg did not care to formulate the uncertainty principle as an exact limit (which is elaborated below), and preferred to use it instead, as a heuristic quantitative statement, correct up to small numerical factors, which makes the radically new noncommutativity of quantum mechanics inevitable. Critical reactions[edit] The Copenhagen interpretation of quantum mechanics and Heisenberg's Uncertainty Principle were, in fact, seen as twin targets by detractors who believed in an underlying determinism and realism. According to the Copenhagen interpretation of quantum mechanics, there is no fundamental reality that the quantum state describes, just a prescription for calculating experimental results. There is no way to say what the state of a system fundamentally is, only what the result of observations might be. Albert Einstein believed that randomness is a reflection of our ignorance of some fundamental property of reality, while Niels Bohr believed that the probability distributions are fundamental and irreducible, and depend on which measurements we choose to perform. Einstein and Bohr debated the uncertainty principle for many years. The ideal of the detached observer[edit] Wolfgang Pauli called Einstein's fundamental objection to the uncertainty principle "the ideal of the detached observer" (phrase translated from the German): "Like the moon has a definite position" Einstein said to me last winter, "whether or not we look at the moon, the same must also hold for the atomic objects, as there is no sharp distinction possible between these and macroscopic objects. Observation cannot create an element of reality like a position, there must be something contained in the complete description of physical reality which corresponds to the possibility of observing a position, already before the observation has been actually made." I hope, that I quoted Einstein correctly; it is always difficult to quote somebody out of memory with whom one does not agree. It is precisely this kind of postulate which I call the ideal of the detached observer. • Letter from Pauli to Niels Bohr, February 15, 1955[73] Einstein's slit[edit] The first of Einstein's thought experiments challenging the uncertainty principle went as follows: Consider a particle passing through a slit of width d. The slit introduces an uncertainty in momentum of approximately h/d because the particle passes through the wall. But let us determine the momentum of the particle by measuring the recoil of the wall. In doing so, we find the momentum of the particle to arbitrary accuracy by conservation of momentum. Bohr's response was that the wall is quantum mechanical as well, and that to measure the recoil to accuracy Δp, the momentum of the wall must be known to this accuracy before the particle passes through. 
This introduces an uncertainty in the position of the wall and therefore the position of the slit equal to h/Δp, and if the wall's momentum is known precisely enough to measure the recoil, the slit's position is uncertain enough to disallow a position measurement. A similar analysis with particles diffracting through multiple slits is given by Richard Feynman.[74] Einstein's box[edit] Bohr was present when Einstein proposed the thought experiment which has become known as Einstein's box. Einstein argued that "Heisenberg's uncertainty equation implied that the uncertainty in time was related to the uncertainty in energy, the product of the two being related to Planck's constant."[75] Consider, he said, an ideal box, lined with mirrors so that it can contain light indefinitely. The box could be weighed before a clockwork mechanism opened an ideal shutter at a chosen instant to allow one single photon to escape. "We now know, explained Einstein, precisely the time at which the photon left the box."[76] "Now, weigh the box again. The change of mass tells the energy of the emitted light. In this manner, said Einstein, one could measure the energy emitted and the time it was released with any desired precision, in contradiction to the uncertainty principle."[75] Bohr spent a sleepless night considering this argument, and eventually realized that it was flawed. He pointed out that if the box were to be weighed, say by a spring and a pointer on a scale, "since the box must move vertically with a change in its weight, there will be uncertainty in its vertical velocity and therefore an uncertainty in its height above the table. ... Furthermore, the uncertainty about the elevation above the earth's surface will result in an uncertainty in the rate of the clock,"[77] because of Einstein's own theory of gravity's effect on time. "Through this chain of uncertainties, Bohr showed that Einstein's light box experiment could not simultaneously measure exactly both the energy of the photon and the time of its escape."[78] EPR paradox for entangled particles[edit] Bohr was compelled to modify his understanding of the uncertainty principle after another thought experiment by Einstein. In 1935, Einstein, Podolsky and Rosen (see EPR paradox) published an analysis of widely separated entangled particles. Measuring one particle, Einstein realized, would alter the probability distribution of the other, yet here the other particle could not possibly be disturbed. This example led Bohr to revise his understanding of the principle, concluding that the uncertainty was not caused by a direct interaction.[79] But Einstein came to much more far-reaching conclusions from the same thought experiment. He believed the "natural basic assumption" that a complete description of reality would have to predict the results of experiments from "locally changing deterministic quantities" and therefore would have to include more information than the maximum possible allowed by the uncertainty principle. In 1964, John Bell showed that this assumption can be falsified, since it would imply a certain inequality between the probabilities of different experiments. Experimental results confirm the predictions of quantum mechanics, ruling out Einstein's basic assumption that led him to the suggestion of his hidden variables. These hidden variables may be "hidden" because of an illusion that occurs during observations of objects that are too large or too small. 
This illusion can be likened to rotating fan blades that seem to pop in and out of existence at different locations and sometimes seem to be in the same place at the same time when observed. This same illusion manifests itself in the observation of subatomic particles. Both the fan blades and the subatomic particles are moving so fast that the illusion is seen by the observer. Therefore, it is possible that there would be predictability of the subatomic particles behavior and characteristics to a recording device capable of very high speed tracking....Ironically this fact is one of the best pieces of evidence supporting Karl Popper's philosophy of invalidation of a theory by falsification-experiments. That is to say, here Einstein's "basic assumption" became falsified by experiments based on Bell's inequalities. For the objections of Karl Popper to the Heisenberg inequality itself, see below. While it is possible to assume that quantum mechanical predictions are due to nonlocal, hidden variables, and in fact David Bohm invented such a formulation, this resolution is not satisfactory to the vast majority of physicists. The question of whether a random outcome is predetermined by a nonlocal theory can be philosophical, and it can be potentially intractable. If the hidden variables are not constrained, they could just be a list of random digits that are used to produce the measurement outcomes. To make it sensible, the assumption of nonlocal hidden variables is sometimes augmented by a second assumption—that the size of the observable universe puts a limit on the computations that these variables can do. A nonlocal theory of this sort predicts that a quantum computer would encounter fundamental obstacles when attempting to factor numbers of approximately 10,000 digits or more; a potentially achievable task in quantum mechanics.[80][full citation needed] Popper's criticism[edit] Karl Popper approached the problem of indeterminacy as a logician and metaphysical realist.[81] He disagreed with the application of the uncertainty relations to individual particles rather than to ensembles of identically prepared particles, referring to them as "statistical scatter relations".[81][82] In this statistical interpretation, a particular measurement may be made to arbitrary precision without invalidating the quantum theory. This directly contrasts with the Copenhagen interpretation of quantum mechanics, which is non-deterministic but lacks local hidden variables. In 1934, Popper published Zur Kritik der Ungenauigkeitsrelationen (Critique of the Uncertainty Relations) in Naturwissenschaften,[83] and in the same year Logik der Forschung (translated and updated by the author as The Logic of Scientific Discovery in 1959), outlining his arguments for the statistical interpretation. In 1982, he further developed his theory in Quantum theory and the schism in Physics, writing: [Heisenberg's] formulae are, beyond all doubt, derivable statistical formulae of the quantum theory. But they have been habitually misinterpreted by those quantum theorists who said that these formulae can be interpreted as determining some upper limit to the precision of our measurements. 
[original emphasis][84] Popper proposed an experiment to falsify the uncertainty relations, although he later withdrew his initial version after discussions with Weizsäcker, Heisenberg, and Einstein; this experiment may have influenced the formulation of the EPR experiment.[81][85] Many-worlds uncertainty[edit] The many-worlds interpretation originally outlined by Hugh Everett III in 1957 is partly meant to reconcile the differences between Einstein's and Bohr's views by replacing Bohr's wave function collapse with an ensemble of deterministic and independent universes whose distribution is governed by wave functions and the Schrödinger equation. Thus, uncertainty in the many-worlds interpretation follows from each observer within any universe having no knowledge of what goes on in the other universes. Free will[edit] Some scientists including Arthur Compton[86] and Martin Heisenberg[87] have suggested that the uncertainty principle, or at least the general probabilistic nature of quantum mechanics, could be evidence for the two-stage model of free will. One critique, however, is that apart from the basic role of quantum mechanics as a foundation for chemistry, nontrivial biological mechanisms requiring quantum mechanics are unlikely, due to the rapid decoherence time of quantum systems at room temperature.[88] The standard view, however, is that this decoherence is overcome by both screening and decoherence-free subspaces found in biological cells.[88] The second law of thermodynamics[edit] There is reason to believe that violating the uncertainty principle also strongly implies the violation of the second law of thermodynamics.[89] See also[edit] 1. ^ a b c Sen, D. (2014). "The Uncertainty relations in quantum mechanics" (PDF). Current Science. 107 (2): 203–218. 2. ^ a b c Heisenberg, W. (1927), "Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik", Zeitschrift für Physik (in German), 43 (3–4): 172–198, Bibcode:1927ZPhy...43..172H, doi:10.1007/BF01397280.. Annotated pre-publication proof sheet of Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik, March 21, 1927. 3. ^ a b c Kennard, E. H. (1927), "Zur Quantenmechanik einfacher Bewegungstypen", Zeitschrift für Physik (in German), 44 (4–5): 326–352, Bibcode:1927ZPhy...44..326K, doi:10.1007/BF01391200. 4. ^ Weyl, H. (1928), Gruppentheorie und Quantenmechanik, Leipzig: Hirzel 5. ^ Furuta, Aya (2012), "One Thing Is Certain: Heisenberg's Uncertainty Principle Is Not Dead", Scientific American 6. ^ a b Ozawa, Masanao (2003), "Universally valid reformulation of the Heisenberg uncertainty principle on noise and disturbance in measurement", Physical Review A, 67 (4): 42105, arXiv:quant-ph/0207121, Bibcode:2003PhRvA..67d2105O, doi:10.1103/PhysRevA.67.042105 7. ^ Werner Heisenberg, The Physical Principles of the Quantum Theory, p. 20 8. ^ a b Rozema, L. A.; Darabi, A.; Mahler, D. H.; Hayat, A.; Soudagar, Y.; Steinberg, A. M. (2012). "Violation of Heisenberg's Measurement–Disturbance Relationship by Weak Measurements". Physical Review Letters. 109 (10): 100404. arXiv:1208.0034v2. Bibcode:2012PhRvL.109j0404R. doi:10.1103/PhysRevLett.109.100404. PMID 23005268. 9. ^ Indian Institute of Technology Madras, Professor V. Balakrishnan, Lecture 1 – Introduction to Quantum Physics; Heisenberg's uncertainty principle, National Programme of Technology Enhanced Learning on YouTube 10. ^ a b c d L.D. Landau, E. M. Lifshitz (1977). Quantum Mechanics: Non-Relativistic Theory. Vol. 3 (3rd ed.). Pergamon Press. 
ISBN 978-0-08-020940-1. Online copy. 12. ^ Elion, W. J.; M. Matters, U. Geigenmüller & J. E. Mooij; Geigenmüller, U.; Mooij, J. E. (1994), "Direct demonstration of Heisenberg's uncertainty principle in a superconductor", Nature, 371 (6498): 594–595, Bibcode:1994Natur.371..594E, doi:10.1038/371594a0 13. ^ Smithey, D. T.; M. Beck, J. Cooper, M. G. Raymer; Cooper, J.; Raymer, M. G. (1993), "Measurement of number–phase uncertainty relations of optical fields", Phys. Rev. A, 48 (4): 3159–3167, Bibcode:1993PhRvA..48.3159S, doi:10.1103/PhysRevA.48.3159, PMID 9909968CS1 maint: Multiple names: authors list (link) 14. ^ Caves, Carlton (1981), "Quantum-mechanical noise in an interferometer", Phys. Rev. D, 23 (8): 1693–1708, Bibcode:1981PhRvD..23.1693C, doi:10.1103/PhysRevD.23.1693 16. ^ Claude Cohen-Tannoudji; Bernard Diu; Franck Laloë (1996), Quantum mechanics, Wiley-Interscience: Wiley, pp. 231–233, ISBN 978-0-471-56952-7 17. ^ a b Robertson, H. P. (1929), "The Uncertainty Principle", Phys. Rev., 34: 163–64, Bibcode:1929PhRv...34..163R, doi:10.1103/PhysRev.34.163 18. ^ a b Schrödinger, E. (1930), "Zum Heisenbergschen Unschärfeprinzip", Sitzungsberichte der Preussischen Akademie der Wissenschaften, Physikalisch-mathematische Klasse, 14: 296–303 19. ^ a b Griffiths, David (2005), Quantum Mechanics, New Jersey: Pearson 20. ^ Riley, K. F.; M. P. Hobson and S. J. Bence (2006), Mathematical Methods for Physics and Engineering, Cambridge, p. 246 21. ^ Davidson, E. R. (1965), "On Derivations of the Uncertainty Principle", J. Chem. Phys., 42 (4): 1461, Bibcode:1965JChPh..42.1461D, doi:10.1063/1.1696139 22. ^ a b c Hall, B. C. (2013), Quantum Theory for Mathematicians, Springer, p. 245 23. ^ Jackiw, Roman (1968), "Minimum Uncertainty Product, Number‐Phase Uncertainty Product, and Coherent States", J. Math. Phys., 9 (3): 339, Bibcode:1968JMP.....9..339J, doi:10.1063/1.1664585 24. ^ a b Carruthers, P.; Nieto, M. M. (1968), "Phase and Angle Variables in Quantum Mechanics", Rev. Mod. Phys., 40 (2): 411–440, Bibcode:1968RvMP...40..411C, doi:10.1103/RevModPhys.40.411 25. ^ Hall, B. C. (2013), Quantum Theory for Mathematicians, Springer 26. ^ L. I. Mandelshtam, I. E. Tamm, The uncertainty relation between energy and time in nonrelativistic quantum mechanics, 1945. 27. ^ Hilgevoord, Jan (1996). "The uncertainty principle for energy and time" (PDF). American Journal of Physics. 64 (12): 1451–1456. Bibcode:1996AmJPh..64.1451H. doi:10.1119/1.18410.; Hilgevoord, Jan (1998). "The uncertainty principle for energy and time. II". American Journal of Physics. 66 (5): 396–402. Bibcode:1998AmJPh..66..396H. doi:10.1119/1.18880. 28. ^ The broad linewidth of fast-decaying states makes it difficult to accurately measure the energy of the state, and researchers have even used detuned microwave cavities to slow down the decay rate, to get sharper peaks. Gabrielse, Gerald; H. Dehmelt (1985), "Observation of Inhibited Spontaneous Emission", Physical Review Letters, 55 (1): 67–70, Bibcode:1985PhRvL..55...67G, doi:10.1103/PhysRevLett.55.67, PMID 10031682 29. ^ Likharev, K. K.; A. B. Zorin (1985), "Theory of Bloch-Wave Oscillations in Small Josephson Junctions", J. Low Temp. Phys., 59 (3/4): 347–382, Bibcode:1985JLTP...59..347L, doi:10.1007/BF00683782 30. ^ Anderson, P. W. (1964), "Special Effects in Superconductivity", in Caianiello, E. R. (ed.), Lectures on the Many-Body Problem, Vol. 2, New York: Academic Press 31. ^ More precisely, whenever both and are defined, and the space of such is a dense subspace of the quantum Hilbert space. 
I asked in this thread (Time-dependent Schrödinger equation) how to solve the time-dependent Schrödinger equation. One of JamalS' recommendations was the Fourier transform, which is why I want to quote his example:

Now, my question would be: what are meaningful initial conditions for this ODE? I mean, what you probably want to look at is how a wavefunction $\Psi(t=0,x)$ propagates in time. So how do you set up meaningful initial conditions for this Fourier-transformed Schrödinger equation? You don't need to refer to this particular ODE (with this potential). My question is rather: when you solve this ODE, what are appropriate initial/boundary conditions for this Fourier-transformed ODE? Because this is where my imagination fails. If anything is unclear, please let me know.

• Comment (JamalS, Jun 20 '14 at 12:54): Just for your information, I plucked that example from thin air because it was convenient, so don't expect a physical interpretation.

Answer: Working in frequency space helps simplify the differential equation you need to solve. It should then be possible to find a family of solutions to the new differential equation. However, in the end what you want to solve is still the time-dependent equation, so you need to come back to the initial or boundary conditions of the original time-dependent equation to fix the remaining freedom. To be more specific, you can try to build the time-dependent wave function from the solutions you obtained; there will be unknown coefficients to be determined at that last step.

• Comment (Xin Wang, Jun 21 '14 at 11:23): So you are saying: Fourier-transform the Schrödinger equation → solve it → transform back → adjust initial/boundary conditions?

• Comment (Pu Zhang, Jun 21 '14 at 11:35): Yes, that's what I meant.
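The workflow sketched in the answer can be made concrete for the simplest case. The following is a minimal illustration (added here for clarity, not part of the original exchange) for a free particle with a Gaussian initial wave packet: the initial condition for the Fourier-transformed equation is simply the FFT of $\Psi(t=0,x)$, each momentum component is evolved by its own phase factor, and the inverse FFT gives $\Psi(t,x)$. Grid size and packet parameters are arbitrary choices.

```python
import numpy as np

# Sketch: propagate a 1D free-particle wave packet with the Fourier method.
# The initial condition is supplied in position space as Psi(t=0, x); the
# Fourier transform turns the PDE into decoupled ODEs, one per wavenumber k,
# each solved exactly by a phase factor exp(-i*hbar*k^2*t/(2m)).  Units hbar = m = 1.
hbar = m = 1.0
x = np.linspace(-50, 50, 2048)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(x.size, d=dx)

# Example initial condition: a Gaussian packet with mean momentum k0 = 2.
k0, sigma = 2.0, 1.0
psi0 = np.exp(-x**2 / (2 * sigma**2)) * np.exp(1j * k0 * x)
psi0 /= np.sqrt(np.sum(np.abs(psi0)**2) * dx)        # normalize

def propagate(psi, t):
    """Return Psi(t, x): FFT -> multiply by the exact phase -> inverse FFT."""
    psi_k = np.fft.fft(psi)
    psi_k *= np.exp(-1j * hbar * k**2 * t / (2 * m))
    return np.fft.ifft(psi_k)

psi_t = propagate(psi0, t=5.0)
print(np.sum(np.abs(psi_t)**2) * dx)   # norm stays ~1, as it should
```

When a potential is present, the same kinetic step is alternated with a position-space phase factor $e^{-iV(x)\Delta t/\hbar}$ (the split-step Fourier method), but the role of the initial condition is unchanged: it is the transform of $\Psi(t=0,x)$.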
Slow motion computer simulation of the black hole binary system GW150914 as seen by a nearby observer, during 0.33 s of its final inspiral, merger, and ringdown. The star field behind the black holes is being heavily distorted and appears to rotate and move, due to extreme gravitational lensing, as spacetime itself is distorted and dragged around by the rotating black holes.[1]

Widely acknowledged as a theory of extraordinary beauty, general relativity has often been described as the most beautiful of all existing physical theories.[2] Soon after publishing the special theory of relativity in 1905, Einstein started thinking about how to incorporate gravity into his new relativistic framework. In 1907, beginning with a simple thought experiment involving an observer in free fall, he embarked on what would be an eight-year search for a relativistic theory of gravity. After numerous detours and false starts, his work culminated in the presentation to the Prussian Academy of Science in November 1915 of what are now known as the Einstein field equations.[3] These equations specify how the geometry of space and time is influenced by whatever matter and radiation are present, and form the core of Einstein's general theory of relativity.[4] The non-Euclidean geometry of the 19th-century mathematician Bernhard Riemann, called Riemannian geometry, provided the key mathematical framework onto which Einstein fit his physical ideas of gravity, and enabled him to develop general relativity.[5] During that period, general relativity remained something of a curiosity among physical theories. It was clearly superior to Newtonian gravity, being consistent with special relativity and accounting for several effects unexplained by the Newtonian theory. Einstein himself had shown in 1915 how his theory explained the anomalous perihelion advance of the planet Mercury without any arbitrary parameters ("fudge factors").[10] Similarly, a 1919 expedition led by Eddington confirmed general relativity's prediction for the deflection of starlight by the Sun during the total solar eclipse of May 29, 1919,[11] making Einstein instantly famous.[12] Yet the theory entered the mainstream of theoretical physics and astrophysics only with the developments between approximately 1960 and 1975, now known as the golden age of general relativity.[13] Physicists began to understand the concept of a black hole, and to identify quasars as one of these objects' astrophysical manifestations.[14] Ever more precise solar system tests confirmed the theory's predictive power,[15] and relativistic cosmology, too, became amenable to direct observational tests.[16] Over the years, general relativity has acquired a reputation as a theory of extraordinary beauty.[2][17][18] Subrahmanyan Chandrasekhar has noted that at multiple levels, general relativity exhibits what Francis Bacon termed a "strangeness in the proportion" (i.e. elements that excite wonderment and surprise). It juxtaposes fundamental concepts (space and time versus matter and motion) which had previously been considered entirely independent.
Chandrasekhar also noted that Einstein's only guides in his search for an exact theory were the principle of equivalence and his sense that a proper description of gravity should be geometrical at its basis, so that there was an "element of revelation" in the manner in which Einstein arrived at his theory.[19] Other elements of beauty associated with the general theory of relativity are its simplicity, symmetry, the manner in which it incorporates invariance and unification, and its perfect logical consistency.[20]

From classical mechanics to general relativity

Geometry of Newtonian gravity

Relativistic generalization

Einstein's equations

Einstein's field equations read $G_{\mu\nu} \equiv R_{\mu\nu} - \tfrac{1}{2}R\,g_{\mu\nu} = \kappa T_{\mu\nu}$. On the right-hand side, $T_{\mu\nu}$ is the energy–momentum tensor. All tensors are written in abstract index notation.[37] Matching the theory's prediction to observational results for planetary orbits or, equivalently, assuring that the weak-gravity, low-speed limit is Newtonian mechanics, the proportionality constant can be fixed as κ = 8πG/c⁴, with G the gravitational constant and c the speed of light.[38] When there is no matter present, so that the energy–momentum tensor vanishes, the results are the vacuum Einstein equations, $R_{\mu\nu} = 0$.

In general relativity, the world line of a particle free from all external, non-gravitational force is a particular type of geodesic in curved spacetime. In other words, a freely moving or falling particle always moves along a geodesic. The geodesic equation is

$\frac{d^2 x^\mu}{ds^2} + \Gamma^\mu_{\alpha\beta} \frac{dx^\alpha}{ds} \frac{dx^\beta}{ds} = 0,$

where s is a scalar parameter of motion (e.g. the proper time), and $\Gamma^\mu_{\alpha\beta}$ are Christoffel symbols (sometimes called the affine connection coefficients or Levi-Civita connection coefficients), which are symmetric in the two lower indices. Greek indices may take the values 0, 1, 2, 3, and the summation convention is used for repeated indices $\alpha$ and $\beta$. The quantity on the left-hand side of this equation is the acceleration of a particle, and so this equation is analogous to Newton's laws of motion, which likewise provide formulae for the acceleration of a particle. This equation of motion employs the Einstein notation, meaning that repeated indices are summed (i.e. from zero to three). The Christoffel symbols are functions of the four spacetime coordinates, and so are independent of the velocity or acceleration or other characteristics of a test particle whose motion is described by the geodesic equation.

Alternatives to general relativity

There are alternatives to general relativity built upon the same premises, which include additional rules and/or constraints, leading to different field equations. Examples are Whitehead's theory, Brans–Dicke theory, teleparallelism, f(R) gravity and Einstein–Cartan theory.[39]

Definition and basic applications

Definition and basic properties

General relativity is a metric theory of gravitation. At its core are Einstein's equations, which describe the relation between the geometry of a four-dimensional pseudo-Riemannian manifold representing spacetime, and the energy–momentum contained in that spacetime.[40] Phenomena that in classical mechanics are ascribed to the action of the force of gravity (such as free-fall, orbital motion, and spacecraft trajectories), correspond to inertial motion within a curved geometry of spacetime in general relativity; there is no gravitational force deflecting objects from their natural, straight paths. Instead, gravity corresponds to changes in the properties of space and time, which in turn changes the straightest-possible paths that objects will naturally follow.[41] The curvature is, in turn, caused by the energy–momentum of matter. Paraphrasing the relativist John Archibald Wheeler, spacetime tells matter how to move; matter tells spacetime how to curve.[42]
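As a concrete check of the statements above, the following sketch (an editorial illustration, not part of the article) uses sympy to compute the Christoffel symbols and the Ricci tensor of the Schwarzschild metric directly from the definitions, and confirms that this metric solves the vacuum Einstein equations $R_{\mu\nu}=0$. Symbol names and the choice of units (G = c = 1) are mine.

```python
import sympy as sp

# Editorial sketch: verify that the Schwarzschild metric (units G = c = 1)
# satisfies the vacuum Einstein equations R_mu_nu = 0.
t, r, th, ph, M = sp.symbols('t r theta phi M', positive=True)
x = [t, r, th, ph]
f = 1 - 2*M/r
g = sp.diag(-f, 1/f, r**2, r**2*sp.sin(th)**2)   # metric components g_{mu nu}
ginv = g.inv()

# Christoffel symbols: Gamma^a_{bc} = (1/2) g^{ad} (d_b g_{dc} + d_c g_{db} - d_d g_{bc})
Gamma = [[[sum(ginv[a, d]*(sp.diff(g[d, c], x[b]) + sp.diff(g[d, b], x[c])
                           - sp.diff(g[b, c], x[d])) for d in range(4))/2
           for c in range(4)] for b in range(4)] for a in range(4)]

# Ricci tensor: R_{bc} = d_a Gamma^a_{bc} - d_c Gamma^a_{ba}
#                        + Gamma^a_{ad} Gamma^d_{bc} - Gamma^a_{cd} Gamma^d_{ba}
def ricci(b, c):
    return sp.simplify(sum(
        sp.diff(Gamma[a][b][c], x[a]) - sp.diff(Gamma[a][b][a], x[c])
        + sum(Gamma[a][a][d]*Gamma[d][b][c] - Gamma[a][c][d]*Gamma[d][b][a]
              for d in range(4))
        for a in range(4)))

print([[ricci(b, c) for c in range(4)] for b in range(4)])   # every entry simplifies to 0
```

The same few lines can be pointed at any other candidate metric to test whether it is a vacuum solution.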
As it is constructed using tensors, general relativity exhibits general covariance: its laws—and further laws formulated within the general relativistic framework—take on the same form in all coordinate systems.[44] Furthermore, the theory does not contain any invariant geometric background structures, i.e. it is background independent. It thus satisfies a more stringent general principle of relativity, namely that the laws of physics are the same for all observers.[45] Locally, as expressed in the equivalence principle, spacetime is Minkowskian, and the laws of physics exhibit local Lorentz invariance.[46]

Einstein's equations are nonlinear partial differential equations and, as such, difficult to solve exactly.[48] Nevertheless, a number of exact solutions are known, although only a few have direct physical applications.[49] The best-known exact solutions, and also those most interesting from a physics point of view, are the Schwarzschild solution, the Reissner–Nordström solution and the Kerr metric, each corresponding to a certain type of black hole in an otherwise empty universe,[50] and the Friedmann–Lemaître–Robertson–Walker and de Sitter universes, each describing an expanding cosmos.[51] Exact solutions of great theoretical interest include the Gödel universe (which opens up the intriguing possibility of time travel in curved spacetimes), the Taub–NUT solution (a model universe that is homogeneous, but anisotropic), and anti-de Sitter space (which has recently come to prominence in the context of what is called the Maldacena conjecture).[52]

Consequences of Einstein's theory

Gravitational time dilation and frequency shift

Gravitational redshift has been measured in the laboratory[59] and using astronomical observations.[60] Gravitational time dilation in the Earth's gravitational field has been measured numerous times using atomic clocks,[61] while ongoing validation is provided as a side effect of the operation of the Global Positioning System (GPS).[62] Tests in stronger gravitational fields are provided by the observation of binary pulsars.[63] All results are in agreement with general relativity.[64] However, at the current level of accuracy, these observations cannot distinguish between general relativity and other theories in which the equivalence principle is valid.[65]

Light deflection and gravitational time delay

General relativity predicts that the path of light will follow the curvature of spacetime as it passes near a star. This effect was initially confirmed by observing the light of stars or distant quasars being deflected as it passes the Sun.[66] This and related predictions follow from the fact that light follows what is called a light-like or null geodesic—a generalization of the straight lines along which light travels in classical physics. Such geodesics are the generalization of the invariance of lightspeed in special relativity.[67] As one examines suitable model spacetimes (either the exterior Schwarzschild solution or, for more than a single mass, the post-Newtonian expansion),[68] several effects of gravity on light propagation emerge.
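For orientation, the best-known of these effects is the deflection of a ray grazing the Sun, θ ≈ 4GM/(c²R). A quick numerical check (an editorial sketch with standard rounded constants, not taken from the article) reproduces the classic value of about 1.75 arcseconds:

```python
import math

# Weak-field deflection of light grazing the Sun: theta = 4*G*M / (c^2 * R).
# Rounded standard constants; the result is approximate.
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M_sun = 1.989e30     # kg
R_sun = 6.963e8      # m (closest approach: a ray grazing the solar limb)

theta = 4 * G * M_sun / (c**2 * R_sun)          # radians
print(math.degrees(theta) * 3600)               # ~1.75 arcseconds
```

The half-value mentioned next is what one obtains from the heuristic "Newtonian" treatment, i.e. 2GM/(c²R).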
Although the bending of light can also be derived by extending the universality of free fall to light,[69] the angle of deflection resulting from such calculations is only half the value given by general relativity.[70]

Gravitational waves

Gravitational waves, predicted in 1916 by Albert Einstein,[73][74] are ripples in the metric of spacetime that propagate at the speed of light. This is one of several analogies between weak-field gravity and electromagnetism: gravitational waves are analogous to electromagnetic waves. On February 11, 2016, the Advanced LIGO team announced that they had directly detected gravitational waves from a pair of black holes merging.[75][76][77] Some exact solutions describe gravitational waves without any approximation, e.g., a wave train traveling through empty space[80] or Gowdy universes, varieties of an expanding cosmos filled with gravitational waves.[81] But for gravitational waves produced in astrophysically relevant situations, such as the merger of two black holes, numerical methods are presently the only way to construct appropriate models.[82]

Orbital effects and the relativity of direction

Precession of apsides

In general relativity, the apsides of any orbit (the point of the orbiting body's closest approach to the system's center of mass) will precess; the orbit is not an ellipse, but akin to an ellipse that rotates on its focus, resulting in a rose curve-like shape (see image). Einstein first derived this result by using an approximate metric representing the Newtonian limit and treating the orbiting body as a test particle. For him, the fact that his theory gave a straightforward explanation of Mercury's anomalous perihelion shift, discovered earlier by Urbain Le Verrier in 1859, was important evidence that he had at last identified the correct form of the gravitational field equations.[83] The effect can also be derived by using either the exact Schwarzschild metric (describing spacetime around a spherical mass)[84] or the much more general post-Newtonian formalism.[85] It is due to the influence of gravity on the geometry of space and to the contribution of self-energy to a body's gravity (encoded in the nonlinearity of Einstein's equations).[86] Relativistic precession has been observed for all planets that allow for accurate precession measurements (Mercury, Venus, and Earth),[87] as well as in binary pulsar systems, where it is larger by five orders of magnitude.[88] In general relativity the perihelion shift σ, expressed in radians per revolution, is approximately given by:[89]

$\sigma = \frac{6\pi G M}{a(1-e^2)c^2},$

where a is the orbit's semi-major axis, e its eccentricity, and M the mass of the central body.

Orbital decay

Orbital decay for PSR1913+16: time shift in seconds, tracked over three decades.[90]

Geodetic precession and frame-dragging

Several relativistic effects are directly related to the relativity of direction.[94] One is geodetic precession: the axis direction of a gyroscope in free fall in curved spacetime will change when compared, for instance, with the direction of light received from distant stars—even though such a gyroscope represents the way of keeping a direction as stable as possible ("parallel transport").[95] For the Moon–Earth system, this effect has been measured with the help of lunar laser ranging.[96] More recently, it has been measured for test masses aboard the satellite Gravity Probe B to a precision of better than 0.3%.[97][98]
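Plugging Mercury's orbital elements into the perihelion-shift formula just quoted gives the famous "anomalous" 43 arcseconds per century. This is an editorial back-of-the-envelope check with standard rounded values, not a calculation taken from the article:

```python
import math

# Back-of-the-envelope check of the perihelion-shift formula quoted above,
# applied to Mercury.  Orbital values are standard approximate numbers.
G = 6.674e-11                    # m^3 kg^-1 s^-2
c = 2.998e8                      # m/s
M_sun = 1.989e30                 # kg
a = 5.79e10                      # Mercury's semi-major axis, m
e = 0.2056                       # orbital eccentricity
T_days = 87.97                   # orbital period, days

sigma = 6 * math.pi * G * M_sun / (a * (1 - e**2) * c**2)   # radians per revolution
per_century = sigma * (36525 / T_days)                       # revolutions in a century
print(math.degrees(per_century) * 3600)                      # ~43 arcseconds per century
```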
Near a rotating mass, there are gravitomagnetic or frame-dragging effects. A distant observer will determine that objects close to the mass get "dragged around". This is most extreme for rotating black holes where, for any object entering a zone known as the ergosphere, rotation is inevitable.[99] Such effects can again be tested through their influence on the orientation of gyroscopes in free fall.[100] Somewhat controversial tests have been performed using the LAGEOS satellites, confirming the relativistic prediction.[101] Also the Mars Global Surveyor probe around Mars has been used.[102][103]

Astrophysical applications

Gravitational lensing

The deflection of light by gravity is responsible for a new class of astronomical phenomena. If a massive object is situated between the astronomer and a distant target object with appropriate mass and relative distances, the astronomer will see multiple distorted images of the target. Such effects are known as gravitational lensing.[104] Depending on the configuration, scale, and mass distribution, there can be two or more images, a bright ring known as an Einstein ring, or partial rings called arcs.[105] The earliest example was discovered in 1979;[106] since then, more than a hundred gravitational lenses have been observed.[107] Even if the multiple images are too close to each other to be resolved, the effect can still be measured, e.g., as an overall brightening of the target object; a number of such "microlensing events" have been observed.[108]

Gravitational wave astronomy

Artist's impression of the space-borne gravitational wave detector LISA

Observations of binary pulsars provide strong indirect evidence for the existence of gravitational waves (see Orbital decay, above). Detection of these waves is a major goal of current relativity-related research.[110] Several land-based gravitational wave detectors are currently in operation, most notably the interferometric detectors GEO 600, LIGO (two detectors), TAMA 300 and VIRGO.[111] Various pulsar timing arrays are using millisecond pulsars to detect gravitational waves in the 10⁻⁹ to 10⁻⁶ Hz frequency range, which originate from binary supermassive black holes.[112] A European space-based detector, eLISA / NGO, is currently under development,[113] with a precursor mission (LISA Pathfinder) having launched in December 2015.[114] Observations of gravitational waves promise to complement observations in the electromagnetic spectrum.[115] They are expected to yield information about black holes and other dense objects such as neutron stars and white dwarfs, about certain kinds of supernova implosions, and about processes in the very early universe, including the signature of certain types of hypothetical cosmic strings.[116] In February 2016, the Advanced LIGO team announced that they had detected gravitational waves from a black hole merger.[75][76][77]

Black holes and other compact objects

Whenever the ratio of an object's mass to its radius becomes sufficiently large, general relativity predicts the formation of a black hole, a region of space from which nothing, not even light, can escape.
In the currently accepted models of stellar evolution, neutron stars of around 1.4 solar masses, and stellar black holes with a few to a few dozen solar masses, are thought to be the final state for the evolution of massive stars.[117] Usually a galaxy has one supermassive black hole with a few million to a few billion solar masses in its center,[118] and its presence is thought to have played an important role in the formation of the galaxy and larger cosmic structures.[119] Astronomically, the most important property of compact objects is that they provide a supremely efficient mechanism for converting gravitational energy into electromagnetic radiation.[120] Accretion, the falling of dust or gaseous matter onto stellar or supermassive black holes, is thought to be responsible for some spectacularly luminous astronomical objects, notably diverse kinds of active galactic nuclei on galactic scales and stellar-size objects such as microquasars.[121] In particular, accretion can lead to relativistic jets, focused beams of highly energetic particles that are being flung into space at almost light speed.[122] General relativity plays a central role in modelling all these phenomena,[123] and observations provide strong evidence for the existence of black holes with the properties predicted by the theory.[124]

Cosmology

The current models of cosmology are based on Einstein's field equations including the cosmological constant Λ,

$R_{\mu\nu} - \tfrac{1}{2}R\,g_{\mu\nu} + \Lambda g_{\mu\nu} = \kappa T_{\mu\nu},$

where $g_{\mu\nu}$ is the spacetime metric.[127] Isotropic and homogeneous solutions of these enhanced equations, the Friedmann–Lemaître–Robertson–Walker solutions,[128] allow physicists to model a universe that has evolved over the past 14 billion years from a hot, early Big Bang phase.[129] Once a small number of parameters (for example the universe's mean matter density) have been fixed by astronomical observation,[130] further observational data can be used to put the models to the test.[131] Predictions, all successful, include the initial abundance of chemical elements formed in a period of primordial nucleosynthesis,[132] the large-scale structure of the universe,[133] and the existence and properties of a "thermal echo" from the early cosmos, the cosmic background radiation.[134] Astronomical observations of the cosmological expansion rate allow the total amount of matter in the universe to be estimated, although the nature of that matter remains mysterious in part.
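To illustrate how a handful of such parameters translate into a testable number, here is a minimal editorial sketch (using commonly assumed round values for the Hubble constant and density parameters, not figures from the article) that integrates the Friedmann equation for a spatially flat matter-plus-Λ universe to obtain its present age:

```python
import numpy as np

# Minimal sketch: age of a flat FLRW universe with matter + cosmological constant.
# t0 = integral from a=0 to a=1 of da / (a * H(a)), with H(a) = H0 * sqrt(Om/a^3 + OL).
# Parameter values below are assumed round numbers, not fitted results.
H0 = 70.0                                  # Hubble constant, km/s/Mpc
Om, OL = 0.3, 0.7                          # matter and Lambda density parameters
H0_si = H0 * 1000 / 3.086e22               # convert to 1/s

a = np.linspace(1e-6, 1.0, 200_000)
integrand = 1.0 / (a * H0_si * np.sqrt(Om / a**3 + OL))
t0 = np.trapz(integrand, a)                # seconds
print(t0 / (3.156e7 * 1e9))                # ~13.5 billion years
```

With these inputs the integral gives roughly 13.5 billion years, of the same order as the 14 billion years quoted above; fitting such parameters against supernova, clustering and microwave-background data is exactly the kind of test the text describes.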
About 90% of all matter appears to be dark matter, which has mass (or, equivalently, gravitational influence), but does not interact electromagnetically and, hence, cannot be observed directly.[135] There is no generally accepted description of this new kind of matter, within the framework of known particle physics[136] or otherwise.[137] Observational evidence from redshift surveys of distant supernovae and measurements of the cosmic background radiation also show that the evolution of our universe is significantly influenced by a cosmological constant resulting in an acceleration of cosmic expansion or, equivalently, by a form of energy with an unusual equation of state, known as dark energy, the nature of which remains unclear.[138] An inflationary phase,[139] an additional phase of strongly accelerated expansion at cosmic times of around 10⁻³³ seconds, was hypothesized in 1980 to account for several puzzling observations that were unexplained by classical cosmological models, such as the nearly perfect homogeneity of the cosmic background radiation.[140] Recent measurements of the cosmic background radiation have resulted in the first evidence for this scenario.[141] However, there is a bewildering variety of possible inflationary scenarios, which cannot be restricted by current observations.[142] An even larger question is the physics of the earliest universe, prior to the inflationary phase and close to where the classical models predict the big bang singularity. An authoritative answer would require a complete theory of quantum gravity, which has not yet been developed[143] (cf. the section on quantum gravity, below).

Time travel

Kurt Gödel showed[144] that solutions to Einstein's equations exist that contain closed timelike curves (CTCs), which allow for loops in time. The solutions require extreme physical conditions unlikely ever to occur in practice, and it remains an open question whether further laws of physics will eliminate them completely. Since then, other—similarly impractical—GR solutions containing CTCs have been found, such as the Tipler cylinder and traversable wormholes.

Advanced concepts

Causal structure and global geometry

Penrose–Carter diagram of an infinite Minkowski universe

Early studies of black holes relied on explicit solutions of Einstein's equations, notably the spherically symmetric Schwarzschild solution (used to describe a static black hole) and the axisymmetric Kerr solution (used to describe a rotating, stationary black hole, and introducing interesting features such as the ergosphere). Using global geometry, later studies have revealed more general properties of black holes. With time they become rather simple objects characterized by eleven parameters specifying: electric charge, mass-energy, linear momentum, angular momentum, and location at a specified time. This is stated by the black hole uniqueness theorems: "black holes have no hair", that is, no distinguishing marks like the hairstyles of humans. Irrespective of the complexity of a gravitating object collapsing to form a black hole, the object that results (having emitted gravitational waves) is very simple.[149] There are other types of horizons.
In an expanding universe, an observer may find that some regions of the past cannot be observed ("particle horizon"), and some regions of the future cannot be influenced (event horizon).[153] Even in flat Minkowski space, when described by an accelerated observer (Rindler space), there will be horizons associated with a semi-classical radiation known as Unruh radiation.[154] Another general feature of general relativity is the appearance of spacetime boundaries known as singularities. Spacetime can be explored by following up on timelike and lightlike geodesics—all possible ways that light and particles in free fall can travel. But some solutions of Einstein's equations have "ragged edges"—regions known as spacetime singularities, where the paths of light and falling particles come to an abrupt end, and geometry becomes ill-defined. In the more interesting cases, these are "curvature singularities", where geometrical quantities characterizing spacetime curvature, such as the Ricci scalar, take on infinite values.[155] Well-known examples of spacetimes with future singularities—where worldlines end—are the Schwarzschild solution, which describes a singularity inside an eternal static black hole,[156] or the Kerr solution with its ring-shaped singularity inside an eternal rotating black hole.[157] The Friedmann–Lemaître–Robertson–Walker solutions and other spacetimes describing universes have past singularities on which worldlines begin, namely Big Bang singularities, and some have future singularities (Big Crunch) as well.[158] Given that these examples are all highly symmetric—and thus simplified—it is tempting to conclude that the occurrence of singularities is an artifact of idealization.[159] The famous singularity theorems, proved using the methods of global geometry, say otherwise: singularities are a generic feature of general relativity, and unavoidable once the collapse of an object with realistic matter properties has proceeded beyond a certain stage[160] and also at the beginning of a wide class of expanding universes.[161] However, the theorems say little about the properties of singularities, and much of current research is devoted to characterizing these entities' generic structure (hypothesized e.g. by the BKL conjecture).[162] The cosmic censorship hypothesis states that all realistic future singularities (no perfect symmetries, matter with realistic properties) are safely hidden away behind a horizon, and thus invisible to all distant observers. While no formal proof yet exists, numerical simulations offer supporting evidence of its validity.[163]

Evolution equations

To understand Einstein's equations as partial differential equations, it is helpful to formulate them in a way that describes the evolution of the universe over time. This is done in "3+1" formulations, where spacetime is split into three space dimensions and one time dimension.
The best-known example is the ADM formalism.[165] These decompositions show that the spacetime evolution equations of general relativity are well-behaved: solutions always exist, and are uniquely defined, once suitable initial conditions have been specified.[166] Such formulations of Einstein's field equations are the basis of numerical relativity.[167]

Global and quasi-local quantities

In general relativity it proves impossible to give a general, local definition of a seemingly simple property such as a system's total mass or energy, because the energy of the gravitational field cannot be localized. Nevertheless, there are possibilities to define a system's total mass, either using a hypothetical "infinitely distant observer" (ADM mass)[169] or suitable symmetries (Komar mass).[170] If one excludes from the system's total mass the energy being carried away to infinity by gravitational waves, the result is the Bondi mass at null infinity.[171] Just as in classical physics, it can be shown that these masses are positive.[172] Corresponding global definitions exist for momentum and angular momentum.[173] There have also been a number of attempts to define quasi-local quantities, such as the mass of an isolated system formulated using only quantities defined within a finite region of space containing that system. The hope is to obtain a quantity useful for general statements about isolated systems, such as a more precise formulation of the hoop conjecture.[174]

Relationship with quantum theory

If general relativity were considered to be one of the two pillars of modern physics, then quantum theory, the basis of understanding matter from elementary particles to solid state physics, would be the other.[175] However, how to reconcile quantum theory with general relativity is still an open question.

Quantum field theory in curved spacetime

Ordinary quantum field theories, which form the basis of modern elementary particle physics, are defined in flat Minkowski space, which is an excellent approximation when it comes to describing the behavior of microscopic particles in weak gravitational fields like those found on Earth.[176] In order to describe situations in which gravity is strong enough to influence (quantum) matter, yet not strong enough to require quantization itself, physicists have formulated quantum field theories in curved spacetime.
These theories rely on general relativity to describe a curved background spacetime, and define a generalized quantum field theory to describe the behavior of quantum matter within that spacetime.[177] Using this formalism, it can be shown that black holes emit a blackbody spectrum of particles known as Hawking radiation, leading to the possibility that they evaporate over time.[178] As briefly mentioned above, this radiation plays an important role for the thermodynamics of black holes.[179]

Quantum gravity

The demand for consistency between a quantum description of matter and a geometric description of spacetime,[180] as well as the appearance of singularities (where curvature length scales become microscopic), indicate the need for a full theory of quantum gravity: for an adequate description of the interior of black holes, and of the very early universe, a theory is required in which gravity and the associated geometry of spacetime are described in the language of quantum physics.[181] Despite major efforts, no complete and consistent theory of quantum gravity is currently known, even though a number of promising candidates exist.[182][183] Attempts to generalize ordinary quantum field theories, used in elementary particle physics to describe fundamental interactions, so as to include gravity have led to serious problems.[184] Some have argued that at low energies, this approach proves successful, in that it results in an acceptable effective (quantum) field theory of gravity.[185] At very high energies, however, the perturbative results are badly divergent and lead to models devoid of predictive power ("perturbative non-renormalizability").[186]

Simple spin network of the type used in loop quantum gravity.

One attempt to overcome these limitations is string theory, a quantum theory not of point particles, but of minute one-dimensional extended objects.[187] The theory promises to be a unified description of all particles and interactions, including gravity;[188] the price to pay is unusual features such as six extra dimensions of space in addition to the usual three.[189] In what is called the second superstring revolution, it was conjectured that both string theory and a unification of general relativity and supersymmetry known as supergravity[190] form part of a hypothesized eleven-dimensional model known as M-theory, which would constitute a uniquely defined and consistent theory of quantum gravity.[191] Another approach starts with the canonical quantization procedures of quantum theory. Using the initial-value formulation of general relativity (cf. evolution equations above), the result is the Wheeler–DeWitt equation (an analogue of the Schrödinger equation) which, regrettably, turns out to be ill-defined without a proper ultraviolet (lattice) cutoff.[192] However, with the introduction of what are now known as Ashtekar variables,[193] this leads to a promising model known as loop quantum gravity.
Space is represented by a web-like structure called a spin network, evolving over time in discrete steps.[194] Depending on which features of general relativity and quantum theory are accepted unchanged, and on what level changes are introduced,[195] there are numerous other attempts to arrive at a viable theory of quantum gravity, some examples being the lattice theory of gravity based on the Feynman path-integral approach and Regge calculus,[182] dynamical triangulations,[196] causal sets,[197] twistor models[198] or the path-integral based models of quantum cosmology.[199]

Current status

Observation of gravitational waves from binary black hole merger GW150914.

General relativity has emerged as a highly successful model of gravitation and cosmology, which has so far passed many unambiguous observational and experimental tests. However, there are strong indications the theory is incomplete.[201] The problem of quantum gravity and the question of the reality of spacetime singularities remain open.[202] Observational data that is taken as evidence for dark energy and dark matter could indicate the need for new physics.[203] Even taken as is, general relativity is rich with possibilities for further exploration. Mathematical relativists seek to understand the nature of singularities and the fundamental properties of Einstein's equations,[204] while numerical relativists run increasingly powerful computer simulations (such as those describing merging black holes).[205] In February 2016, it was announced that the existence of gravitational waves was directly detected by the Advanced LIGO team on September 14, 2015.[77][206][207] A century after its introduction, general relativity remains a highly active area of research.[208]

Notes

2. ^ a b Landau & Lifshitz 1975, p. 228 "...the general theory of relativity...was established by Einstein, and represents probably the most beautiful of all existing physical theories." 5. ^ Moshe Carmeli (2008). Relativity: Modern Large-Scale Structures of the Cosmos. pp. 92–93. World Scientific Publishing 7. ^ Einstein 1917, cf. Pais 1982, ch. 15e 10. ^ Pais 1982, pp. 253–254 11. ^ Kennefick 2005, Kennefick 2007 12. ^ Pais 1982, ch. 16 13. ^ Thorne, Kip (2003). The future of theoretical physics and cosmology: celebrating Stephen Hawking's 60th birthday. Cambridge University Press. p. 74. ISBN 978-0-521-82081-3. Extract of page 74 16. ^ Section Cosmology and references therein; the historical development is in Overbye 1999 17. ^ Wald 1984, p. 3 18. ^ Rovelli 2015, pp. 1–6 "General relativity is not just an extraordinarily beautiful physical theory providing the best description of the gravitational interaction we have so far. It is more." 19. ^ Chandrasekhar 1984, p. 6 20. ^ Engler 2002 21. ^ The following exposition re-traces that of Ehlers 1973, sec. 1 22. ^ Arnold 1989, ch. 1 23. ^ Ehlers 1973, pp. 5f 24. ^ Will 1993, sec. 2.4, Will 2006, sec. 2 25. ^ Wheeler 1990, ch. 2 27. ^ Ehlers 1973, pp. 10f 29. ^ An in-depth comparison between the two symmetry groups can be found in Giulini 2006 31. ^ Ehlers 1973, sec. 2.3 32. ^ Ehlers 1973, sec. 1.4, Schutz 1985, sec. 5.1 38. ^ Kenyon 1990, sec. 7.4 41. ^ At least approximately, cf. Poisson 2004 42. ^ Wheeler 1990, p. xi 43. ^ Wald 1984, sec. 4.4 44. ^ Wald 1984, sec. 4.1 46. ^ section 5 in ch. 12 of Weinberg 1972 47. ^ Introductory chapters of Stephani et al. 2003 50. ^ Chandrasekhar 1983, ch. 3,5,6 51. ^ Narlikar 1993, ch. 4, sec. 3.3 53. ^ Lehner 2002 54. ^ For instance Wald 1984, sec. 4.4 55.
^ Will 1993, sec. 4.1 and 4.2 56. ^ Will 2006, sec. 3.2, Will 1993, ch. 4 63. ^ Stairs 2003 and Kramer 2004 65. ^ Ohanian & Ruffini 1994, pp. 164–172 66. ^ Cf. Kennefick 2005 for the classic early measurements by Arthur Eddington's expeditions. For an overview of more recent measurements, see Ohanian & Ruffini 1994, ch. 4.3. For the most precise direct modern observations using quasars, cf. Shapiro et al. 2004 68. ^ Blanchet 2006, sec. 1.3 72. ^ Will 1993, sec. 7.1 and 7.2 77. ^ a b c "Gravitational waves detected 100 years after Einstein's prediction". NSF - National Science Foundation. 11 February 2016. 79. ^ For example Jaranowski & Królak 2005 80. ^ Rindler 2001, ch. 13 81. ^ Gowdy 1971, Gowdy 1974 84. ^ Rindler 2001, sec. 11.9 85. ^ Will 1993, pp. 177–181 88. ^ Kramer et al. 2006 93. ^ Kramer 2004 96. ^ Bertotti, Ciufolini & Bender 1987, Nordtvedt 2003 97. ^ Kahn 2007 101. ^ Ciufolini & Pavlis 2004, Ciufolini, Pavlis & Peron 2006, Iorio 2009 102. ^ Iorio L. (August 2006), "COMMENTS, REPLIES AND NOTES: A note on the evidence of the gravitomagnetic field of Mars", Classical and Quantum Gravity, 23 (17): 5451–5454, arXiv:gr-qc/0606092, Bibcode:2006CQGra..23.5451I, doi:10.1088/0264-9381/23/17/N01 106. ^ Walsh, Carswell & Weymann 1979 108. ^ Roulet & Mollerach 1997 109. ^ Narayan & Bartelmann 1997, sec. 3.7 110. ^ Barish 2005, Bartusiak 2000, Blair & McNamara 1997 111. ^ Hough & Rowan 2000 113. ^ Danzmann & Rüdiger 2003 115. ^ Thorne 1995 116. ^ Cutler & Thorne 2002 117. ^ Miller 2002, lectures 19 and 21 118. ^ Celotti, Miller & Sciama 1999, sec. 3 119. ^ Springel et al. 2005 and the accompanying summary Gnedin 2005 120. ^ Blandford 1987, sec. 8.2.4 125. ^ Dalal et al. 2006 126. ^ Barack & Cutler 2004 127. ^ Originally Einstein 1917; cf. Pais 1982, pp. 285–288 128. ^ Carroll 2001, ch. 2 130. ^ E.g. with WMAP data, see Spergel et al. 2003 133. ^ Lahav & Suto 2004, Bertschinger 1998, Springel et al. 2005 139. ^ A good introduction is Linde 2005; for a more recent review, see Linde 2006 141. ^ Spergel et al. 2007, sec. 5,6 143. ^ Brandenberger 2008, sec. 2 144. ^ Gödel 1949 151. ^ Bekenstein 1973, Bekenstein 1974 153. ^ Narlikar 1993, sec. 4.4.4, 4.4.5 160. ^ Namely when there are trapped null surfaces, cf. Penrose 1965 161. ^ Hawking 1966 164. ^ Hawking & Ellis 1973, sec. 7.1 168. ^ Misner, Thorne & Wheeler 1973, §20.4 169. ^ Arnowitt, Deser & Misner 1962 171. ^ For a pedagogical introduction, see Wald 1984, sec. 11.2 173. ^ Townsend 1997, ch. 5 177. ^ Wald 1994, Birrell & Davies 1984 179. ^ Wald 2001, ch. 3 181. ^ Schutz 2003, p. 407 182. ^ a b Hamber 2009 183. ^ A timeline and overview can be found in Rovelli 2000 184. ^ 't Hooft & Veltman 1974 185. ^ Donoghue 1995 189. ^ Green, Schwarz & Witten 1987, sec. 4.2 190. ^ Weinberg 2000, ch. 31 191. ^ Townsend 1996, Duff 1996 192. ^ Kuchař 1973, sec. 3 194. ^ For a review, see Thiemann 2007; more extensive accounts can be found in Rovelli 1998, Ashtekar & Lewandowski 2004 as well as in the lecture notes Thiemann 2003 195. ^ Isham 1994, Sorkin 1997 196. ^ Loll 1998 197. ^ Sorkin 2005 198. ^ Penrose 2004, ch. 33 and refs therein 199. ^ Hawking 1987 200. ^ Ashtekar 2007, Schwarz 2007 202. ^ section Quantum gravity, above 203. ^ section Cosmology, above 204. ^ Friedrich 2005 206. ^ See Bartusiak 2000 for an account up to that year; up-to-date news can be found on the websites of major detector collaborations such as GEO 600 Archived 2007-02-18 at the Wayback Machine and LIGO 207. 
^ For the most recent papers on gravitational wave polarizations of inspiralling compact binaries, see Blanchet et al. 2008, and Arun et al. 2008; for a review of work on compact binaries, see Blanchet 2006 and Futamase & Itoh 2006; for a general review of experimental tests of general relativity, see Will 2006 208. ^ See, e.g., the electronic review journal Living Reviews in Relativity

Further reading

Popular books

Beginning undergraduate textbooks

• Callahan, James J. (2000), The Geometry of Spacetime: an Introduction to Special and General Relativity, New York: Springer, ISBN 978-0-387-98641-8
• Taylor, Edwin F.; Wheeler, John Archibald (2000), Exploring Black Holes: Introduction to General Relativity, Addison Wesley, ISBN 978-0-201-38423-9

Advanced undergraduate textbooks

• B. F. Schutz (2009), A First Course in General Relativity (Second ed.), Cambridge University Press, ISBN 978-0-521-88705-2
• Hartle, James B. (2003), Gravity: an Introduction to Einstein's General Relativity, San Francisco: Addison-Wesley, ISBN 978-0-8053-8662-2
• Hughston, L. P. & Tod, K. P. (1991), Introduction to General Relativity, Cambridge: Cambridge University Press, ISBN 978-0-521-33943-8
• d'Inverno, Ray (1992), Introducing Einstein's Relativity, Oxford: Oxford University Press, ISBN 978-0-19-859686-8
• Ludyk, Günter (2013). Einstein in Matrix Form (First ed.). Berlin: Springer. ISBN 978-3-642-35797-8.

Graduate-level textbooks
Topological Quantum Chemistry: the band theory of solids is now complete

This work is on the cover of Nature magazine. As the editor puts it: "[…] a new and complete theory for calculating the topological properties of the electronic band structures of materials. […] As a result, they complete the theory of electronic band structure […] The theory should greatly simplify the search for further materials with exotic properties and also shed light on the underlying physics of existing topological materials." Cover illustration by JVG.

Extended and refined during the 1930s, Bloch's theory, known as the band theory of solids, accounts very well for the conducting behaviour of materials. When atoms are joined together into a crystal, each of the individual quantum states of the atoms joins with the corresponding states in other (identical) atoms in the crystal to form the various energy bands within the material. The electrons in the atoms then fill up the available states within each band. Topological behaviour arises from the global properties of the band electrons in something called momentum space, a mathematical concept. But this is a physicist's point of view, one that considers the crystal as a whole. A chemist, on the other hand, would take a much more local approach, in which hybridization, ionic chemical bonding and finite-range interactions are paramount. Chemistry, therefore, operates in a real-space (rather than momentum-space) description, where atoms and electronic orbitals sit in periodic arrangements. In any case, and despite the apparent success that the field of new materials has had in predicting some topological insulators, conventional band theory is ill-suited to a natural treatment of topological insulators. One need only look at the paucity of known topological insulators (fewer than 400 materials out of the 200,000 existing in crystal structure databases!) to see the failings of the theory. Now, a team of researchers from Princeton University, UPV/EHU, the Max Planck Institute and DIPC presents [1] a new and complete understanding of the structure of bands in a material and links its topological features to the chemical orbitals at the Fermi level. It is, therefore, a revolutionary theory of Topological Quantum Chemistry, a description of the universal global properties of all possible band structures and materials.

The evolution of a theory

In 1928, just two years after the formulation of quantum mechanics, the German physicist Arnold Sommerfeld modified the classical free-electron model by treating the electrons according to quantum mechanics. But the new theory still contained the unrealistic assumption that the electrons do not interact with the charged lattice ions except to collide with them. As before, Sommerfeld also considered the electrons to be little charged particles of matter. Beginning in the same year, Felix Bloch, an assistant to Werner Heisenberg in Leipzig, began to make more realistic assumptions in an attempt to formulate a more complete quantum mechanics of electrical conductivity. First, because he wanted to assign a definite momentum and energy to each of the electrons, but not a definite position or a time interval, he chose the wave side of the wave–particle duality. He assumed that the electrons behave, not like particles, but like infinitely extended de Broglie waves. As a result, Bloch did not treat electrons inside metals as a "gas" of particles, but rather as periodic waves extending throughout the periodic crystal lattice.
This, it later turned out, helped to explain how electricity can begin to flow in a wire the instant the wire is plugged into a wall socket. If the electrons are viewed as balls of matter, it would take a small amount of time for the current to begin flowing at the rate specified by Ohm's law. Bloch made a second assumption. He assumed that the positive metal ions, which are arranged in an infinite, periodic array (that is, in a perfect crystal), each exert an attractive electric force on the negative electrons. In visual terms, this attractive force formed a potential energy that looked like a type of "potential well." The wells of neighbouring ions then overlapped so that together they formed a periodic arrangement that gave the electron waves a very bumpy ride down the wire. Bloch then solved the Schrödinger equation for the energies that these types of de Broglie waves (wave functions) could possess while moving in this type of periodic potential. He discovered that the allowed energies of the electrons in the material are joined together into bands of quantum states, just as there are certain quantum stationary states within each atom in which the electrons are allowed to exist. Between the bands, as between the quantum states, there is a range of energies in which electrons are forbidden to exist. The bands in the material are actually created by the joining together of the quantum states of the individual atoms. In fact, if there are a total of N identical atoms in the material, then there are N quantum states within each band. According to a rule in quantum mechanics (the Pauli exclusion principle), only two electrons are allowed to occupy any one quantum energy state of a single atom, and this is allowed only because the two electrons spin on their axes in opposite directions.

Time to incorporate topological insulators

But crystal symmetries place strong constraints on the allowed connections of bands. This is where some crystallographic concepts become handy. In the theory of X-ray diffraction the concept of the reciprocal lattice is fundamental, with diffraction patterns being much more directly related to reciprocal lattices than to real-space ones, even though the former can be defined from the latter. The (Wigner–Seitz) primitive cell of the reciprocal lattice is called the first Brillouin zone, after Léon Brillouin, who introduced the concept in 1930. In this work the researchers first compiled all the possible ways energy bands in a solid can be connected throughout the Brillouin zone to obtain all realizable band structures in all non-magnetic space groups. Group theory itself places constraints – "compatibility relations" – on how this can be done. Each solution to these compatibility relations gives groups of bands with different connectivities, corresponding to different physically realizable phases of matter (trivial or topological). The scientists solve all compatibility relations for all 230 space groups by mapping connectivity in band theory to the graph-theoretic problem of constructing multipartite graphs. Then the researchers developed the tools to compute how the real-space orbitals in a material determine the symmetry character of the electronic bands. Given only the Wyckoff positions and the orbital symmetry (s, p, d) of the elements/orbitals in a material, they derive the symmetry character of all energy bands at all points in the Brillouin zone. To do this in full generality, they extend the notion of band representation (all bands linked to localized orbitals respecting the crystal symmetry) to the physically relevant case of materials with spin-orbit coupling and/or time-reversal symmetry, and identify a set of elementary band representations (EBRs).
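The simplest concrete example of "bands over the Brillouin zone" is the nearest-neighbour tight-binding model of graphene's π electrons, the same material used as the worked example in the figure below. The following sketch is an editorial illustration (not code from the paper); the hopping amplitude and the lattice scale are arbitrary choices. Two orbitals per unit cell give two bands, $E_\pm(\mathbf{k}) = \pm t\,|f(\mathbf{k})|$, which touch at the Brillouin-zone corners — the degeneracy behind graphene's semimetallic phase discussed next.

```python
import numpy as np

# Illustrative sketch (not from the paper): nearest-neighbour tight-binding
# pi bands of graphene.  Two carbon p_z orbitals per unit cell give two bands,
# E(k) = +/- t*|f(k)|, which touch at the Brillouin-zone corners (Dirac points).
t = 1.0                                   # hopping amplitude (assumed units)
deltas = [np.array([1.0, 0.0]),           # vectors to the three neighbours (bond length 1)
          np.array([-0.5,  np.sqrt(3)/2]),
          np.array([-0.5, -np.sqrt(3)/2])]

def bands(k):
    f = sum(np.exp(1j * np.dot(k, d)) for d in deltas)
    return -t * abs(f), +t * abs(f)

# At a zone corner K the two bands become degenerate at zero energy:
K = np.array([2*np.pi/3, 2*np.pi/(3*np.sqrt(3))])
print(bands(K))            # both values ~0: the "Dirac point" of the semimetal
print(bands(np.zeros(2)))  # at Gamma the two bands are split by 6*t
```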
Figure: How the theory applies to graphene with spin-orbit coupling. We begin by inputting the orbitals and lattice positions relevant near the Fermi level. Following the first arrow, we then induce an EBR (see the main text) from these orbitals, which subduces to little-group representations at the high-symmetry points, shown here as nodes in a graph. Standard k·p theory allows us to deduce the symmetry and degeneracy of energy bands in a small neighborhood near these points – the different coloured edges emanating from these nodes. The graph-theory mapping allows us to solve the compatibility relations along these lines in two topologically distinct ways. On the left, we obtain a graph with one connected component, indicating that in this phase graphene is a symmetry-protected semimetal. In contrast, the graph on the right has two disconnected components, corresponding to the topological phase of graphene.

These elementary band representations make it easy to identify candidate semimetallic materials: if the number of electrons is a fraction of the number of connected bands forming an elementary band representation, then the system is a symmetry-enforced semimetal. If, however, the number of connected bands is smaller than the total number of bands in the elementary band representation, then the disconnected bands are topological. Thus, the researchers were able to classify all topological crystalline insulators. And they show how powerful the method is by predicting hundreds of new topological insulators and semimetals. All these data are now available on the Bilbao Crystallographic Server. This blend of chemistry and physics is what finally completes the band theory of solids.

1. Barry Bradlyn, L. Elcoro, Jennifer Cano, M. G. Vergniory, Zhijun Wang, C. Felser, M. I. Aroyo, and B. Andrei Bernevig (2017) Topological Quantum Chemistry. Nature. doi: 10.1038/nature23268
Schrödinger's cat

I have finally found a moment to write, and in fact this will only get finished if my machine does not hang on me, so forgive me in advance if you catch it half-done. Anyway, I wanted to talk about something that many have written about before, so I doubt I will tell you anything you do not already know. I'll talk about Schrödinger's cat.

First of all, let's start with Schrödinger. Erwin Schrödinger was one of the greats of quantum physics, who contributed the Schrödinger equation, which describes how the wave function of a particle evolves in time. Wise people before our friend had already intuited that at very small distances particles behave as waves, but no one had been able to describe that behaviour over time. Within quantum physics Schrödinger is comparable to Einstein, and his equation is of comparable importance. Without going into too much detail, Schrödinger brought in Planck's constant in such a way that one can calculate how the wave function varies in time, allowing one to work out, for example, where an electron orbiting an atom can "statistically" be found at a given time, something that was previously impossible to know. He even managed to relate his equation to the case in which the particle moves very fast (relativistic, as they say) or not. Quite an achievement.

Schrödinger also tried, as I explained the other day, to show people that quantum physics was not something separate from or alien to the world of classical physics. To argue that quantum physics, through his equations, was nothing more than a special case of classical physics, he devised his famous thought experiment, Schrödinger's cat. In the experiment, as you know, a cat is enclosed in a box with a radioactive atom. With them is a Geiger counter which, as you know, is capable of detecting radioactivity, and a flask of cyanide. The atom in question has a 50% chance of decaying. If the counter detects the decay, it releases the cyanide and the cat... dies. It's simple. And this is where all the fun begins.

The phenomenon of decay depends on the wave function of the radioactive atom; we know the wave function of the particle, and from the Schrödinger equation we know how it evolves in time. At first it was thought that quantum phenomena did not influence classical physics; they were like separate worlds. But in this experiment, if the particle decays (a quantum phenomenon), it affects the cat (not quantum), linking both kinds of physics. That is the first and most important conclusion.

The second conclusion is the one we all know. The quantum wave function is a superposition of two states (and therefore of two states for the cat), which explains how a particle, at these scales, may be in more than one state at a time; so while in classical physics it would have only one state (cat alive or dead, atom "whole" or decayed), in quantum physics it is in both states at once without any problem.

And here one could go on with the Heisenberg uncertainty principle, which leads to the third conclusion, the funniest one. The cat and the particle, while in the box, are in both states, both quantum states; but for an observer they can only be in one state, because when you open the box the cat is either alive or dead (unless it is a zombie). What does this mean? Honestly, the wave function of a particle does not tell us exactly where it is or how fast it is moving, much less how that develops over time, despite the Schrödinger equation.
It only tells us the probabilities of finding it here or there; we can never know exactly where it is, because the mere fact of observing influences the wave function, changing it every time you try to measure it (cool, huh?). This means that it is the observer who, on making the measurement, "condenses" (let's stick with that word) the wave function, yielding a measurement and some concrete data (which then change again), turning probability into something definite. Well, back to the cat: while it is in the box it is in both possibilities, and when we open the box we condense its quantum states, its wave function, into something particular. And because the chances of decay are what they are, it does not mean that a moment before or a moment after the cat would have been in the same state (alive or dead) if we had not made our observation, our "condensation". So the observer is what makes reality.

What does this mean? Well, honestly, that there may be many realities depending on who measures. The observer or observers of what happens in our lives are the ones who condense it into a timeline, a specific time line; there may be other timelines for the wave function and therefore different realities. And where does this lead? To one basis of the theory of the multiverse, in which the universes are merely different time lines of the quantum wave function of what surrounds us. The collapse, the condensation caused by observation, leads to a concrete reality which may (or may not) be the same as the one that would have occurred for someone else. This leads to a very complicated issue, the condensation of the waves by different observers and the physical quantities involved, a mathematical topic that is very interesting and even more so philosophically, which I leave for you to think about. An example: can I condense the wave functions in a different way from someone else who is not present? Do we then have different time lines? And when we come together and observe the same process, do we collapse onto the same timeline? And in that case, how do two different time lines collapse into a single one for two observers? And what if the problem involves more than two observers?… All this gives rise to some (very, very) complicated mathematics that is being developed now, and which I, in particular, find very hard to understand even with all the tequila in the world.
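A small numerical illustration of the 50/50 superposition described above (an editorial addition, not the author's): a two-level system standing in for "atom intact / atom decayed", the probabilities the Born rule assigns to it, and repeated simulated "openings of the box", each of which yields one definite outcome.

```python
import numpy as np

# Toy illustration of the 50/50 superposition discussed in the post.
# Basis: |0> = "atom intact / cat alive", |1> = "atom decayed / cat dead".
rng = np.random.default_rng(0)
psi = np.array([1.0, 1.0]) / np.sqrt(2)      # equal superposition of both states

probs = np.abs(psi)**2                        # Born rule: probabilities 0.5 and 0.5
print(probs)

# Each "opening of the box" picks one definite outcome with those probabilities;
# the state after the measurement is the corresponding basis state (collapse).
outcomes = rng.choice([0, 1], size=10, p=probs)
print(outcomes)          # a random string of 0s ("alive") and 1s ("dead")
```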
Scalar wave
From RationalWiki (Bronze-level article)

The central conceit is that scalar waves restore certain useful aspects of Maxwell's equations "discarded" in the 19th century by those fools Heaviside, Hertz and Gibbs.[1] Nikola Tesla was also interested in them, in his more-than-a-little-odd period. Free energy advocates have pushed the concept since the 1990s,[2] particularly Thomas E. Bearden. It has since been adopted by some alternative medicine practitioners as the new "quantum": a universally-applicable sciencey handwave to support any arbitrary claim whatsoever.[3] Conspiracy theorists hold that it is behind weather-changing superweapons that brought down space shuttle Columbia.

In real physics

In physics, a quantity described as "scalar" only contains information about its magnitude. In contrast, a "vector" quantity contains information both about its magnitude and about its direction. By this definition, a "scalar wave" in physics would be defined as any solution to a "scalar wave equation".[4] In reality, this definition is far too general to be useful, and as a result the term "scalar wave" is used exclusively by cranks and peddlers of woo. Solutions to scalar wave equations are actually quite prevalent (and useful) in physics. Some prominent examples include acoustic (sound) waves, the motion of a taut string being stretched (such as a guitar string being plucked), and the motion of waves in water (such as the ripples from a stone being dropped into a pond). In contrast, electromagnetic waves are vector quantities derived as solutions to a set of vector wave equations (in this case Maxwell's equations). The concept of a "scalar field theory"[5] also exists, and plays an important role in several branches of physics. In comparison, "scalar waves" have never been observed in nature, and are rooted in sound physics about as well as the average chemtrail is rooted to the ground (not at all).

Free energy subculture

The main current proponent of scalar wave pseudophysics is zero-point energy advocate Thomas E. Bearden, who has concocted an entire pseudoscientific "scalar field theory" unrelated to anything in actual physics of that name. It starts with Maxwell's equations originally having been written as quaternions; Bearden holds that the (mathematical) transformation to vectors lost important information.[1] Bearden says that scalar waves differ from conventional electromagnetic transverse waves by having two oscillations anti-parallel with each other, each originating from opposite charge sources, thereby lacking any net directionality. The waves are conjugates of each other, and so, if left unperturbed, can pass through ordinary matter with relative ease. So they are not included in mainstream physics. They don't work like ordinary longitudinal waves either. (Got that?)[6] You can apparently make scalar waves with a bifilar coil (one wound with a pair of wires instead of a single wire) and pushing opposing currents through the wires (join the far ends together).
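As an aside on the physics being invoked here (standard electromagnetism, not anything from Bearden): two equal and opposite currents produce magnetic fields that simply add to zero, and the "scalar wave equation" of the "In real physics" section above is nothing more exotic than the following.

```latex
% Superposition of the fields of two anti-parallel currents I and -I
% occupying (nearly) the same position:
\[
  \mathbf{B}_{\mathrm{total}} \;=\; \mathbf{B}(I) + \mathbf{B}(-I) \;\approx\; 0
\]
% The generic scalar wave equation for a field u(x, t) with wave speed c
% (sound, plucked strings, ripples on a pond):
\[
  \frac{\partial^{2} u}{\partial t^{2}} \;=\; c^{2}\,\nabla^{2} u
\]
```

Nothing in either line leaves room for a hidden energy channel; the power fed into such a coil simply ends up as resistive heating.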
So if you want to experiment with this stuff, you can build a remarkable just-post-steampunk lab filled with coils and wires and sparks.[7] The really astonishing thing about this — which fascinated Tesla for years and years[8] — is that you can pour practically limitless amounts of power into such an apparatus and achieve precisely nothing other than converting electricity into heat — each of the two wires in the coil produces a magnetic field, but since the currents are going in opposite directions, the two magnetic fields cancel out. Richard C. Hoagland thinks Col. Bearden is dead right. So much so that he has adopted Bearden's view and given it a different name, "hyperdimensional physics." According to him, the vectorization of Maxwell's quaternions eliminated a whole dimension from which energy magically appears. Hoagland is so mathematically challenged that it's doubtful if he even understands what a quaternion is, let alone knows how to use one in a calculation. Certainly in all he has written and said about HD physics[9] he has never cited a single one of the quaternions. Scalar superweapon conspiracy theory[edit] According to Bearden, the Scalar Interferometer is a powerful superweapon that the Soviet Union used for years to modify weather in the rest of the world.[10] It taps the quantum vacuum energy, using a method discovered by T. Henry Moray in the 1920s.[11] It may have brought down the Columbia spacecraft.[12] However, some conspiracy theorists believe Bearden is an agent of disinformation on this topic.[13] Alternative medicine[edit] Bearden was pushing the medical effects of scalar waves as early as 1991. He specifically attributed their powers to cure AIDS, cancer and genetic diseases to their quantum effects and their use in "engineering the Schrödinger equation." They are also useful in mind control.[14] Scalar waves appear to have broken out into the woo mainstream around 2005 or 2006, with this text (now widely quoted as the standard explanation) from The Heart of Health; the Principles of Physical Health and Vitality by Stephen Linsteadt, NHD: Scalar waves are produced when two electromagnetic waves of the same frequency are exactly out of phase (opposite to each other) and the amplitudes subtract and cancel or destroy each other. The result is not exactly an annihilation of magnetic fields but a transformation of energy back into a scalar wave. This scalar field has reverted back to a vacuum state of potentiality. Scalar waves can be created by wrapping electrical wires around a figure eight in the shape of a Möbius coil. When an electric current flows through the wires in opposite directions, the opposing electromagnetic fields from the two wires cancel each other and create a scalar wave. The DNA antenna in our cells' energy production centers (mitochondria) assumes the shape of what is called a super-coil. Supercoil DNA look like a series of Möbius coils. These Möbius supercoil DNA are hypothetically able to generate scalar waves. Most cells in the body contain thousands of these Möbius supercoils, which are generating scalar waves throughout the cell and throughout the body.[15] At this point it was all-in. 
Scalar waves explain homeopathy,[16] achieve lymph detoxification,[17] cure diabetes, short sightedness, kidney stones, Parkinson's, strokes, arthritis,[18] cancer,[14] and reverse the aging process.[19] Scalar waves are also part of the biological powers of ORMUS.[20] "Harmonized H2O" is a drinkable sunscreen, made only of pure water, that works by scalar waves.[21] It apparently cancels out the UVA and UVB slightly above the skin.[22]

Scalar Wave Laser

The Scalar Wave Laser is a "quantum cold laser rejuvenation technology" which "combines the most advanced low level laser technology with state of the art quantum scalar waves."[23] The device is a small handheld unit with a wand end that shines light on the patient's skin. It uses eight 5 mW 650 nm (red) laser diodes and eight 5 mW 780 nm (near infrared) laser diodes. Oddly, these happen to be the wavelengths used for DVD and CD reading, respectively. It also has 20 5 mW violet LEDs. The unit costs only $3500.[24] The laser directly delivers energy (photons) and electrons directly to cells. The mitochondria convert the photons to ATP, promptly initiating healing and rejuvenation.[25] So these guys have discovered photosynthesis in humans! Quick, hand them their Nobel Prizes! The Scalar Wave Laser also cures goat polio.[26]

Scalar wave mind control

…Who makes the crop circles? The technology used is the most dangerous technology ever invented…SCALAR WAVES…[27]

Cranks have wholeheartedly adopted the concept of scalar waves as a form of dangerous mind control, mostly because it sounds scary, and it's easy to make people afraid of anything related to modern wireless technology (even if it actually isn't). Internet fearmongers vigorously promote the idea that scalar waves are some kind of treacherous radio waves (rather than a mathematical solution to an ordinary wave equation). For example, many cranks perceive the shift from analog to digital television with suspicion, and feel mind control messages must be embedded "via the flickering of the TV picture", and believe that the use of such evil waves allows mobile phones and Wi-Fi (and don't forget to throw in the traditional boogeymen HAARP and GWEN towers to raise the level of batshit insanity) to "program" people. These mythical scalar waves are somehow even supposedly able to produce crop circles, earthquakes, and hurricanes, too.[28][29][30]

In popular culture

"The Black Weapon", a "scalar weapon"[31] that generates a non-nuclear electromagnetic pulse, is the MacGuffin in the single-player campaign for Battlefield: Bad Company 2. The old PBEM game "VGA Planets" has among the engines that can equip a ship a "Scalar Wave Thruster".[32] It has a fair energy output and is very resistant to damage, but gives low speed.

References
1. James Clerk Maxwell
2. Solid State Generators Researches (JLN Labs)
3. RWer: "I'm about to descend into the pits of stupid again." Loved one: "And rant about it for the next three days. What is it?" RWer: "Scalar waves." Loved one: "OH GOD NO, NOT THEM." It's always the family that suffers.
4. e.g. Scalar wave equation in three space dimensions.
5. See the Wikipedia article on Scalar field theory.
6. This RationalWiki article includes text from the deleted Wikipedia article "Scalar field theory (pseudoscience)", primary author Thomas E. Bearden, used under CC by-sa. Some surviving text here.
7. The Time Energy Pump v2.1 (JLN Labs)
8. Perreault, Bruce. "New generation of radiant energy devices."
Exotic Research Report vol 2 no 2, Apr/May/Jun 1998.
9. start here, if you must
10. [1], [2]
11. [3]
12. [4]
14. Archived December 24, 2001 at the Wayback Machine
15. Scalar Waves and the Human Möbius Coil System
16. Or "neo-homeopathy". [5]
19. Archived July 12, 2011 at the Wayback Machine. Waves Of Healing: About Scalar Waves (one of the most illiterate pages actually selling something you'll find)
24. Archived May 23, 2012 at the Wayback Machine. Does the Quantum Scalar Wave Laser Have Real Laser Diodes? (Quantum Scalar Wave Lasers)
26. [6] "After three weeks down, and Scio, Quantum-Touch®, scalar-wave laser, homeopathy and Bach flower remedies, the goat is walking again."
28. GWEN Towers — ELF Scalar Mind Control Weapons (The Event Chronicle)
29. Scalar Waves Used to Control Us
30. HAARP, Scalar Waves and how they affect you (Alpha44)
31. Scalar Weapon (Battlefield Wikia)
32. Engines
A Quantum Analogy with Dice, Fans, and Basketball
September 10, 2016

This is as much for my own edification as anything else, but I'm trying to get across my understanding of what is called the quantum wave function collapse. After that, it goes off into my usual attempt to say something absolutely particular about absolutely everything in general. From what I have gathered, the quantum wave function is a statistical mean which may or may not correspond to a physical phenomenon.

Now, in QM we try to predict the probability density for a particle's position (or momentum, or energy, or whatever). We could try to do this by writing an equation for how p(x) changes over time, but it turns out that doesn't give us enough information; there are situations where particles start with identical p(x) but do different things as time goes on. It's found that we do get enough information to make predictions if we write an equation for a complex-valued function ψ(x), and derive the probability density from it as p(x) = ψ*(x)ψ(x). The way the complex phase of ψ(x) varies from point to point encodes additional information about the particle's momentum, which is necessary to predict its future behavior. It has units of the square root of a probability density, which is a bit weird but perfectly mathematically acceptable. This is of course the wavefunction, and the equation that determines how it varies is the Schrödinger equation. – source

From another source: An observable is "something we can observe", and it is represented in quantum mechanics by an operator, that is, something that operates on a quantum state. A very simple example of an operator is the position operator. We usually write the position operator along the x axis as x̂ (which is just x with a "hat" on top of it). If the quantum state |Ψ⟩ represents a particle, that means that it contains all the information about that particle, including its position along the x axis. So we calculate the following: ⟨x̂⟩ = ⟨Ψ|x̂|Ψ⟩. Note that the state |Ψ⟩ appears as both a bra and a ket, and the operator x̂ is "sandwiched" in the middle. This is called an expectation value. When we calculate this expression, we will get the value for the position of the particle that one would "expect" to find, according to the laws of probability. To be more accurate, this is a weighted average of all possible positions; so a position that is more probable would contribute more to the expectation value. However, in many cases the expectation value is not even a value that the observable can get. For example, if the particle can be at position x = +1 with probability ½ or at position x = −1 with probability ½, then the expectation value would be x = 0, whereas the particle could never actually be in that position. – source

In terms of the dice analogy, the table above shows a bell curve function of probability density for the observables of the dice. To make this a metaphor for quantum observations I think it would look more this way: The difference is that we can't observe the wave function, we can only think of the set of possible observables for a given system and give it a name. This is important because in my view, quantum theory actually oversteps its mandate as a rational solution to a set of physical problems to become a faith-based solution to a set of metaphysical/mathematical problems.
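To make the quoted expectation-value passage concrete, here is the two-position example worked out explicitly (a standard textbook calculation, not part of the original post):

```latex
% Discrete expectation value for a particle found at x = +1 or x = -1,
% each with probability 1/2:
\[
  \langle \hat{x} \rangle \;=\; \sum_i x_i \, p_i
  \;=\; (+1)\cdot\tfrac{1}{2} + (-1)\cdot\tfrac{1}{2} \;=\; 0
\]
% For a continuous wavefunction the same average is
% <x> = \int \psi^*(x) \, x \, \psi(x) \, dx,  with  p(x) = \psi^*(x)\psi(x).
```

So the "expected" position is 0 even though the particle is never actually found there, which is exactly the point being made above.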
There can never be any observation of quantum, there can only be qualitative observations from which we can infer quantitative ideas of relation*. *note that ‘relation’ is itself an aesthetic quality which is dependent upon a preferred sense of grouping. This preference, so far as we can ever know, only occurs within a sensed experience in which aesthetic phenomena are presented as sharing a common quality. Physics in and of itself can have no relations, as general relation qualities cannot be decomposed into fundamental physical forces. No physical mechanism can make quantitative ‘relations’ happen. What the quotes above are trying to say, in my view, is that the wave function itself is an imaginary square root of the inferred probability density of the mentally counted sets of actually observed phenomena. We want to think that quantum particles are the observed dice rolls: a pair of upturned faces of cubes containing a finite number of dots or ‘pips’, and that the wave function is the set of numbers 1 to 6 corresponding to each possible set of dots, but in reality there may not be two dice at all. The observable reality is that when we look at one die, the other one disappears, and we can only see both dice if we don’t look at the dots. Two more analogies illustrating the reducibility of quantum ‘particles’ to qualitative sense: 1. Looking at a ceiling fan in motion, we can either see a circular blur, or if we follow the blur with our eyes at the same frequency as the fan, we can see the fan blades (or a standing-wave of averaged images of fan blades) but not really the circular blur. 2. I’m in my house and hear noises coming from outside. One sounds like a loud motor, and one sounds like a frequent thumping. I know from experience that the neighbors do like to play basketball in their driveway when the weather is nice. I also know that the neighbors across the street are having their roof replaced which may or may not involve some kind of compressor noise. Finally, I know that Saturday morning is a time when there are a lot of neighbors mowing their lawn. The point of this example is to illustrate the common/superficial understanding of the wavefunction collapse would be analogous to me going outside and looking around. By observing, I find out whether there are roofers running some kind of noisy machine and pounding on shingles, or whether there is one neighbor mowing their lawn and another pounding on their fence or something, or whether there’s some combination of things going on which may include a basketball game. By ‘finding out’ what’s going on, I am collapsing the wave function of possibilities because I now know what the noises I heard inside my house refer to outside. This is not correct as an analogy though either. It cannot be applied to quantum observables. The delayed choice quantum eraser and other experiments show surreal phenomena such as entanglement, contextuality, and the mutual exclusivity of entanglement and contextuality. It would be like me like going outside and seeing that the hammering is definitely coming from the roofers across the street, but then going outside again later and seeing that the there is a dude playing basketball instead and there were never any roofers. Entanglement/Contextuality would be like if I went out and played basketball with the neighbors then as long as I was playing, suddenly no neighbor could have their roof repaired. 
In terms of the fan, it would be like if I had two fans in two separate rooms controlled by the same light switch, putting my hand in the way of one fan not only stops the other, but you can tell by filling the rooms with feathers that stopping one fan makes it so no feathers had ever blown around in the other room. Entanglement and contextuality are opposite orientations of the same thing. The entanglement view focuses on the synchronization of what has been connected experimentally while the contextuality view focuses on the strange contradiction to our expectations about causality extending from the past to the present. Anyhow, this too is not correct in my view. What is being overlooked is that we are taking for granted that the quality of finality in our experience is identical to the property of factuality. We want to say that because we have actually seen the blades of the fan, they are the physical objects which exist and the circular blur is an optical illusion – true enough in the case of a fan. We want to say that seeing a roofer pounding nails into a shingle is evidence that roofing is what is actually going on and the idea that the sound we heard inside could have been a basketball bouncing was a misperception. This is not what physics is telling us, however. Instead, it is telling us, in my view, that there is no fan or basketball or roofer, nor is there any mistake of misperception, there are only sensory experiences, some of which acquire a higher aesthetic density of ‘realism’ than others. We say ‘seeing is believing’ because visual sense presents such an unambiguous seeming experience most of the time but we know from optical illusions and from comparing binocular differences that even seeing should not be believed. What we are seeing when we look at something like the double slit experiment is a context in which perception itself is revealed to be 1. more fundamental than the ‘object’ which is sensed and 2. a revealing of (sense experience) itself as both a self-revealer and a self-concealer. In the phenomenon of seeing visible light we have a metaphor about the relation between metaphor and non-metaphor which is expressed non-metaphorically. It is a context in which the contextualization of contextuality is presented as an uncontextualized/absolute text. (sense = the sole abtext?) Philosophically, we should see that it is necessary to reverse the priority assigned by Galileo and Locke to tangible/physical qualities being primary and phenomenal qualities being secondary. Physics should be considered a set of phenomenal qualities which have been reduced by the subtraction of intangible modes of sensitivity. It is only in the intangible modes which nature can be fully appreciated as the self-revealing, self-concealing meta-phenomenon that it is. Finally, here’s another serendipitous experiment with light. On a polished granite surface I see the reflection of a single overhead light as two separate reflections. With one eye open, I can see the image of the light is on the edge of the surface, while with the other eye open instead, the image of the light is in the center of the surface. Try it next time you see a floor or counter like this and can play with closing one eye or the other. Notice how you can choose between two separate but entangled images of the light which move as your head moves or, you can focus your sight so that there is only a single image of the light. 
In the former case, the details of the surface are clear – you can see the patterns of granite and can tell exactly which colored spots seem to be illuminated by the overhead light. In the latter case, you have to look 'through' or past the grain of the stone and focus your visual attention on the image which is reflected from the polished surface. To make the former view real is the materialist orientation. To make the latter view real is the information-theoretic orientation. Both orientations entail the disorientation/de-realization of the other. The materialist says the floor is the real thing being illuminated, while the computationalist says that the floor and light are only generic vehicles for the underlying reality of mathematical laws of relation. What is left out of both of these views is the connection to the eye and the experience of seeing. The eye's location is what is telling my experience of where the image of the light's reflection appears to be. Indeed, that appearance *is* the actual location of the light's reflection as seen through one eye. When seen through the other eye, there is a different actual location. When seen through both eyes, there are either two semi-actual locations or there is one actual light reflection against a single blurred semi-actual location. I cannot emphasize this enough: Quantum theory is about perceiving perception. It tells us not that the reality of nature is inconceivably weird and unfamiliar, but that nature is more than 'reality'. The different concepts of wave function, probability density, and observables map to quantum contextuality, quantum entanglement, and classical (collapsed) realism respectively. QM is about how appearances acquire density of realism by consensus of accumulated limits. For a quantum phenomenon (which is totally abstract) to begin to seem concretely 'real', the sense of contextuality or entanglement must in one frame of reference seem to be shared as an isomorphic sense in every other frame of reference, without contradiction. Thus there is no mysterious 'classical limit' at which quantum decoherence occurs, and no magical 'emergent properties' which appear out of nowhere to turn intangible figments of math into concrete objects – there are only dynamic aesthetic phenomena (sense experiences or qualia) which merge and diffract as aesthetic meta-phenomena (veridical perceptions or 'shared reality'). There is no 'finding out' what really happened, there is only an adding of dimensions of realism by sacrificing qualities that extend beyond realism. This goes for our own consensus of sense modalities as well as a consensus among peer-reviewed scientific papers. The sense of realism arises from the multiplicity of limited perspectives, which then divides the total entropy of doubt/uncertainty. With only one slit or sense or scientific mind, any given phenomenon is presented as-is – an observed effect only. With multiple senses or slits or peers, we observe a different effect which enables a cross-reference that goes beyond the observation itself to an observation of the observation process. This opens the door not only to theories which connect the particular observations but which can apply to many other kinds of observations, as well as to theories of observation in general. In this way, the general/rational/contextual/illuminating and the particular/empirical/textual-entangled/illuminated can be reconciled as opposite ends of a single spectrum of sense/aesthetic/ab-textual/visibility.
Open access peer-reviewed chapter

Defects in Graphene and its Derivatives
By Soumyajyoti Haldar and Biplab Sanyal
Submitted: November 23rd 2015. Reviewed: May 18th 2016. Published: October 12th 2016.
DOI: 10.5772/64297

The experimental realization of graphene along with its unique properties in 2004 triggered huge scientific research in the field of graphene and other two-dimensional (2D) materials. The experimental preparation processes of these materials are prone to defect formation. These defects affect the properties of the pristine system, which can be beneficial or detrimental from the application point of view. In this book chapter, we discuss a few cases of defects in 2D materials such as graphene and its derivatives and their roles in applications.

Keywords: defects in graphene and its derivatives; graphene defects; hybrid materials; gas sensing; ab initio theory; magnetism

1. Introduction

The (re)discovery [1, 2] of graphene—a single layer of carbon atoms arranged in a honeycomb lattice—in 2004 by Novoselov et al. has triggered a new aspect of research in two-dimensional (2D) materials [3, 4]. Although the existence of materials with their properties governed by their 2D units was well known for quite some time [5, 6], it was the experimental realization of single-layer graphene that showed that it is possible to exfoliate stable 2D materials from 3D solids exhibiting various fascinating properties. A huge number of crystalline solid-state materials having different mechanical, electronic, and transport properties exist from which stable 2D materials can be created due to the presence of weak interaction between the layers [7]. 2D allotropes (e.g., silicene, graphyne, germanene) and compounds (e.g., graphane, hexagonal boron nitride, transition metal di-chalcogenides) are a few examples of 2D materials. These 2D materials have the potential for a wide range of applications due to their interesting electronic and structural properties [2, 8–12]. To exploit these various properties, the samples have to be made in a scalable way. Chemical vapor deposition (CVD) has become a very common method for large-scale fabrication. Nonetheless, the CVD samples inevitably contain defects, for example, edges, hetero structures, grain boundaries, vacancies, and interstitial impurities [13–15]. These defects can be seen very easily in transmission electron microscopy (TEM) experiments [16] or scanning tunnelling microscopy (STM) experiments [17]. Figure 1a, b shows experimental STM and TEM images of an isolated single vacancy in graphene. In the STM image, the single vacancy can be seen as a blob because of increased local density of states. These states appear due to the presence of dangling bonds around the single vacancy.

Figure 1. (a) Experimental STM image of single isolated vacancy in graphene. Reprinted with permission from Ugeda et al. [17], copyright (2010) by the American Physical Society. (b) Experimental TEM image of reconstructed single vacancy with atomic configurations. Reprinted (adapted) with permission from Meyer et al. [16], copyright (2008) by the American Chemical Society.

In general, these defects manipulate the properties of the materials and hence their avoidance or deliberate engineering requires a thorough understanding. On one hand, defects can be detrimental to device properties [13], but on the other hand, especially at the nanoscale, defects can bring new functionalities which could be utilized for applications [18, 19].
In this book chapter, we address a few cases of defects in 2D materials such as graphene and its derivatives. We show how one can tune the various properties of the pristine materials with the controlled insertion of defects in these systems and use them in various applications.

2. Theoretical methods

We have mainly used ab initio density functional theory-based methods to calculate various properties of defected 2D materials such as graphene and its derivatives. In this section, we will provide a brief introduction to the theoretical methods used.

2.1. Density functional theory

The various properties of these many-body systems are described by the wave functions associated with them. These wave functions are governed by the Schrödinger equation, ĤΨ = EΨ, where Ĥ is the Hamiltonian of the many-body system (the energy operator) and E is the total energy of the system. However, one needs various approximations to solve the Schrödinger equation for all kinds of systems. In density functional theory (DFT), the electron density n(r) is used to obtain the solution of the Schrödinger equation. The core concept of DFT is given by two theorems of Hohenberg and Kohn [20], where they showed that the properties of interacting systems can be obtained exactly from the ground state electron density, n0(r). Following the two theorems, the total energy of the system can be written as follows:

E[n] = F[n] + ∫ V_ext(r) n(r) dr,

where the functional F represents the kinetic energy and all electron-electron interactions. The functional F does not depend on the external potential, and hence it is the same for all systems. However, the Hohenberg-Kohn theorem does not provide any solution toward the exact form of the functional F. Kohn and Sham [21] provided a way to obtain the functional F by replacing the interacting many-body system with a non-interacting system consisting of a set of one-electron functions (orbitals) while keeping the same ground state. According to the Kohn-Sham formalism, the total energy functional can be written as follows:

E[n] = T_S[n] + ∫ V_ext(r) n(r) dr + (1/2) ∬ n(r) n(r') / |r − r'| dr dr' + E_XC[n],

where T_S is the kinetic energy term of the non-interacting electrons, and V_ext is the external potential. The third term in the above equation is the Hartree term representing the classical Coulomb interactions between electrons, and the last term is known as the exchange-correlation energy (E_XC), which contains all the many-body effects. The formalism of Kohn-Sham is an exact theory. If the form of E_XC is exactly known, then using this formalism one can calculate the exact ground state of the interacting many-body system. In reality, the exact form of the exchange-correlation is not trivial, and hence it is necessary to model the form of the exchange-correlation. Different forms of exchange-correlation can be constructed depending upon various levels of approximation, for example, the local density approximation (LDA) [20, 22, 23], the generalized gradient approximation (GGA) [24–26], hybrid functionals (a mixture of Hartree-Fock and DFT functionals), etc. It is also important to remember that the implementation of the single-particle Kohn-Sham equation is not trivial due to the complex behavior of wave functions in different spatial regions, for example, in the core and in the valence region. To describe this complex wave function, a complete basis set is needed, which can be of different forms, for example, plane waves, localized atomic-like orbitals, Gaussian functions, etc.
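As a compact reference (these are the standard DFT equations, added here for clarity rather than reproduced from the chapter), the single-particle Kohn-Sham equations whose implementation is discussed above read:

```latex
% Kohn-Sham single-particle equations (atomic units), solved self-consistently:
\[
  \Bigl[ -\tfrac{1}{2}\nabla^2 + v_{\mathrm{eff}}(\mathbf{r}) \Bigr]
  \varphi_i(\mathbf{r}) \;=\; \varepsilon_i \, \varphi_i(\mathbf{r}),
  \qquad
  n(\mathbf{r}) \;=\; \sum_{i}^{\mathrm{occ}} \lvert \varphi_i(\mathbf{r}) \rvert^{2}
\]
% The effective potential collects the external, Hartree, and
% exchange-correlation contributions:
\[
  v_{\mathrm{eff}}(\mathbf{r}) \;=\; V_{\mathrm{ext}}(\mathbf{r})
  + \int \frac{n(\mathbf{r}')}{\lvert \mathbf{r}-\mathbf{r}' \rvert}\, d\mathbf{r}'
  + \frac{\delta E_{\mathrm{XC}}[n]}{\delta n(\mathbf{r})}
\]
```

The orbitals φ_i are expanded in whichever basis is mentioned above (plane waves, localized orbitals, Gaussians), and the cycle is iterated until the density n(r) is self-consistent.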
3. Manipulation of properties of graphene with defects

As mentioned in the introduction, an immense amount of scientific activity is going on in the field of graphene research because of its special properties [27, 28]. However, the lack of a band gap limits the usage of graphene in electronic device applications. Therefore, the modification and tuning of graphene properties to open up an energy gap have become a cutting-edge research interest among the scientific community. In this section, we show a few examples of manipulating the properties of graphene and hybrid systems with graphene.

3.1. Magnetic impurities in graphene/graphane interfaces

Graphane—another 2D material—is hydrogenated graphene, where each carbon atom is attached to a hydrogen atom. Unlike graphene, this material is an insulating system with sp3 hybridization resulting in a large band gap. It is one of the materials that was first predicted by ab initio theory [29] and later synthesized experimentally [30]. Depending upon the concentration of hydrogenation in graphene, a semimetal-to-metal-to-insulator transition is observed [31]. It has been shown that patterning graphene with partial hydrogenation leads to modification of graphene properties, for example, conducting channels, band gap opening, quantum dots, and magnetically coupled interfaces [31–36]. As a potential material for spintronic applications, graphene/graphane interfaces are of particular interest as these interfaces can mimic the edge properties that can be seen in zigzag or armchair graphene nanoribbons [37–41]. Hence, it is interesting to study the effect of an Fe adatom, as a representative of magnetic impurities, in these hybrid 2D superlattice structures [42]. Figure 2 shows the two different graphene-graphane superlattice structures considered in our calculation. The hydrogen atoms are removed along the diagonal (edge) of the graphane to create armchair (zigzag) graphene-graphane superlattices. We have considered three, five, and seven rows of channel widths for both configurations. In order to find out the stable adsorption site in the graphene channel, we have placed the Fe adatom at different positions. The analysis of formation energy indicates that the preferred adsorption site for Fe in the armchair channel is at a hollow site of the graphene channel equidistant from the interfaces. However, for the zigzag channel, the Fe adatom prefers to bind at a hollow site closer to the interface. Further analysis of the energetics as a function of channel width shows that with increasing channel width, the binding energy remains almost constant in the zigzag channel and decreases in the armchair channel. The calculated total magnetic moment for all the systems is ~2.0 μB, which is similar to the total magnetic moment of an Fe adatom substitutionally placed in graphene [43]. However, the value of the onsite local moments is different in the two channels and is ~0.5 μB higher in the zigzag channel. Our result shows that the binding energy of the Fe adatom in the zigzag channels is higher than the binding energy of an Fe adatom on pristine graphene by ~0.2 eV. Hence, we can conclude that the mixed sp2–sp3 character of the graphene-graphane superlattice promotes a strong binding of the Fe adatom.

Figure 2. Representative decorations of (a) armchair and (b) zigzag channel in graphane. Reprinted with permission from Haldar et al. [42], copyright (2012) American Physical Society.
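As a hedged aside (these are the standard definitions usually behind such numbers; the chapter itself does not spell them out), the binding and exchange energies quoted in this section are typically computed from DFT total energies as:

```latex
% Binding energy of an adatom on a host lattice, defined so that a larger
% positive value means stronger binding:
\[
  E_{b} \;=\; E_{\mathrm{host}} + E_{\mathrm{Fe\,atom}} - E_{\mathrm{host+Fe}}
\]
% Exchange energy between two magnetic adatoms, estimated from the total
% energies of antiferromagnetic and ferromagnetic spin alignments:
\[
  E_{\mathrm{ex}} \;=\; E_{\mathrm{AFM}} - E_{\mathrm{FM}},
  \qquad E_{\mathrm{ex}} > 0 \;\Rightarrow\; \text{ferromagnetic coupling favored}
\]
```

With these conventions, a small |E_ex| corresponds to the weakly interacting armchair case and a large positive E_ex to the strongly ferromagnetic zigzag case discussed below.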
Figure 3 shows the total density of states for a single Fe adatom placed on a three-row armchair channel. The analysis of the site-projected DOS shows that the Fe d spin-down electrons induce states below the Fermi energy and reduce the gap quite significantly. Similar features can be observed in the higher-row channels, although the value of the gap depends on the width.

Figure 3. Total DOS of Fe adatom in three-row armchair channel (blue solid line). Total DOS for pristine channel is shown in red dashed line. Reprinted with permission from Haldar et al. [42], copyright (2012) American Physical Society.

The spin density plot of the Fe adatom adsorbed in three-row armchair and zigzag channels is shown in Figure 4. For the armchair channel, it is quite evident from the figure that most of the spin-up density is localized on Fe. The interaction of Fe d orbitals with the pz orbitals of the surrounding C atoms induces polarization in the surrounding C ring, and it induces a negative moment on the C atoms. However, in the zigzag channel, the spin density is delocalized and the effect of the Fe adatom can be seen up to the fourth nearest neighbor along the interface. The adsorption of the Fe adatom in this case reduces the onsite magnetic moments of the edge C atoms. The maximum reduction of these onsite moments is seen on the nearest site and can be up to ~15%. In the zigzag channel, the ring of C atoms surrounding the Fe adatom is not antiferromagnetically ordered, and only three C atoms from the same sublattice show significant spin-down densities.

Figure 4. The spin density plot for Fe adatom adsorbed on the three-row armchair and zigzag channel. Reprinted with permission from Haldar et al. [42], copyright (2012) American Physical Society.

We have also calculated the magnetic interactions of two Fe adatoms in these channels. Our result indicates that two Fe adatoms in the armchair channel interact very weakly, and hence the exchange energy is also very small, favoring an antiferromagnetic interaction. In contrast to the armchair channel, the two Fe adatoms in the zigzag channel interact strongly. A very strong ferromagnetic coupling can be observed in this case between the Fe adatoms, and consequently they have a significantly higher exchange energy compared to the armchair channel.

3.2. Improvement in gas sensing activities

Graphene also has potential applications in gas sensing. This is mainly due to the following two facts:
1. The two-dimensional nature of graphene, which consists of only surface and no volume. This feature of graphene enhances the effects of surface dopants.
2. Graphene has a very high conductivity and low electrical noise, which makes it possible to detect very small signal changes due to gas molecule adsorption.
Experiments have demonstrated the application of graphene as a solid-state gas sensing device, especially in the detection of single gas molecules, for example, NO2 [44]. Gaseous molecules act as electron donors or acceptors and modify the carrier density of graphene. Hence, they change the electrical resistance of graphene. Therefore, by measuring the electrical resistance changes, graphene can be used as a gas sensing device [44, 45]. On a pristine graphene lattice, NO2 molecules are physisorbed. However, chemisorption affects the conduction electrons much more than physisorption. The pristine graphene surface does not have dangling bonds that can chemisorb the gas molecules. However, the presence of defects can make chemisorption stronger.
Hence, in order to increase the gas sensing properties of graphene, one needs to understand the reaction of gas molecules with defected graphene. In this work, we have created defects in graphene using ion beams and studied the gas-sensing properties using current-time measurements, Raman spectroscopy, and gated conductivity characterization [46]. In this study, the graphene flakes were created using the mechanical exfoliation technique on heavily doped Si substrates containing 300 nm SiO2 top layer. Electron beam lithography was used to fabricate the electrical contacts on device. The defects were created in the pristine graphene by irradiation with 30 keV Ga+ ions in a vacuum chamber under ~10−6 mbar pressure. We have irradiated 20 × 20 μm2 area and one single irradiation consists of an ion dose of ~1012 ions cm−2. We have used a mixture of N2 and 100-ppm NO2 gasses as target gas and N2 gas as a purging gas. We have used Raman spectroscopy (514 nm wavelength) and atomic force microscopy experiments to determine the thickness of the graphene flakes. From the shape of 2D peak, one can determine the number of layers. Figure 5 shows the evolution of the Raman spectra with respect to the ion irradiation of graphene. For comparison, the Raman spectrum of the pristine graphene is also shown. Our analysis shows that the graphene flake in this study has a bilayer structure. The D-peak appears at 1352 cm−1, which indicates the formation of defects in graphene. The breathing modes of sp2 rings cause the appearance of D-peak and only the presence of defects activates it. The intensity of D-peak increases further after the second irradiation and also D’-peak appears at 1626 cm−1 which suggest an increase of defects in graphene. Figure 5. Evolution of Raman spectra with respect to ion irradiation of graphene. Reprinted with permission from Hajati et al. [46], copyright (2012) IOP Publishing. We have performed gated conductivity experiments to measure the gas sensing properties. Figure 6 shows the normalized conductance (G/G0) responses during the exposure of 100 ppm NO2 in N2 at room temperature. The conductance of graphene before the exposure is denoted by G0. The exposure of NO2 increases the conductance. In pristine graphene, the electrons are transferred from graphene to NO2 molecules thus increasing the hole density in graphene. A faster response in changing conductance can be observed when the defected graphene (after first irradiation) is exposed to NO2 gas. These show higher sensitivity to NO2 gas when compared to the pristine graphene. However, the gas sensing properties decrease after the second irradiation due to the increase of defects, which increases the number of scattering states. Hence, it reduces the conductance. Figure 6. Normalized conductance (G/G0) response of the graphene gas sensor. The exposure of the NO2 gas started after 110 s in all three cases. The average rise times for pristine, first, and second defected graphene (during NO2 exposure) are 500, 328, and 420 s respectively. Reprinted with permission from Hajati et al. [46], copyright (2012) IOP Publishing. We have also performed ab initio density functional calculations in order to understand the interactions between NO2 and defected graphene. We have used a monolayer graphene for our calculation as most of the defects are same in both monolayer and bilayer graphene. 
We have studied the binding of NO2 gas with graphene with different defects, for example, monovacancy, divacancy (585 defect), 686 structure [47], and Stone-Wales (SW) defect. Our analysis shows that the SW vacancy has the highest binding energy with NO2 molecule (0.72 eV) when compared to the other defects where binding energies are ~0.3 eV. In Figure 7, we have shown the total and molecular NO2 spin polarized DOS, inverse participation ratio (IPR) [48] for the electronic states of SW + NO2. The calculated DOS shows that the spin-polarized molecular levels of NO2 molecules appear near the Fermi energy. These cause 1 μB/unit cell magnetic moments. We have also calculated the IPR, which is inversely proportional to the number of atoms contributing to a particular molecular orbital, and hence, IPR gives a quantitative characterization of localization of molecular orbitals. In Figure 7, the calculated IPR has very small values near the Fermi energy, which shows conducting character of the states. Figure 7. (a) Total and molecular NO2 spin polarized DOS. (b) Inverse participation ratio (IPR) for the electronic states of SW + NO2. (b) Optimized geometry of NO2 at SW-defect site in the graphene lattice is shown as top and side views. Reprinted with permission from Hajati et al. [46]. Copyright (2012) IOP Publishing. 3.3. Fluorination of graphene using defect insertion Figure 8. Characterization of pristine graphene, defected graphene (DG) and fluorinated graphene (FG). (a) Scanning electron microscope (SEM) image of local functionalization of graphene (100 μm × 100 μm) with ion doses of 1013 ions/cm2 and simultaneous 167 s gas exposure. (b) Scanning tunneling microscopy image of DG under the same ion dosage. (c) X-ray photoelectron spectroscopy spectra of F 1 s peak of pristine graphene, DG and FG. FG reveals a distinguished F 1 s peak, and the F 1 s spectrum of pristine graphene as well as DG is given as a reference. (d) Raman comparison of pristine graphene, DG and FG. Lower ID/IG in FG in contrast to DG indicates lower degree of defects density and larger crystalline size. Reprinted with permission from Li et al. [49]. Functionalization of graphene has attracted significant attention as it has the potential to make graphene useful for applications. Functionalization of graphene by fluorination is one of the ways. In general, the functionalization of graphene is a challenging process. Local functionalization is a promising tool to keep the desired properties of graphene intact after the modification. In this section, we discuss an interesting technique, which allows precise site-selective fluorination [49]. We have used a focused ion beam irradiation under XeF2 environment in high vacuum to design the site-selective fluorination. In this method, the graphene surface is locally radicalized using high-energy ion irradiation under fluorine contained precursor molecule environment. We have used X-ray photoelectron spectroscopy (XPS), Raman spectroscopy, scanning tunnelling microscopy (STM), and density functional theory (DFT) calculations to verify the fluorination process and explain the mechanism. The defected structures shown in Figure 8a, b are obtained by irradiating graphene locally with high-energy (30 kV) Ga+ ions with an irradiation dose of 1013 ions cm−2. Under this amount of irradiation dose, graphene retains most of its lattice structure. However, the damaged part shows significant defect formations, which are mainly vacancies. 
The formation of fluorinated graphene can be seen from Figure 8c, where XPS shows a clear signal of the F 1s peak. We have also used Raman spectroscopy to find out the structural information. From the Raman spectroscopy figure (Figure 8d), we can see that the intensity of the D-peak (at 1350 cm−1) increases after irradiation and the intensity of the 2D-peak decreases sharply. It means that the translational symmetry of the sp2 bonds is broken. Compared to the defected graphene, in fluorinated graphene the ratio of the D and G peaks (ID/IG) is lower. This implies that fluorinated graphene contains less structural disorder.

Figure 9. STM images of fluorinated graphene. (a) 20 × 20 nm2 area. (b) Zoom-in image of a hole defect showing standing waves pattern. (c) Other area 15 × 15 nm2 showing bright features decorating holes (blue arrows) attributed to fluorine atoms. (d) FFT of (a). It reveals the first Brillouin zone with hexagonal lattice and K points (red arrows) associated to the standing waves pattern due to intervalley scattering. Reprinted with permission from Li et al. [49].

STM experiments were carried out on the fluorinated graphene for a better understanding of the structures. Figure 9 shows these images, which are taken at a low bias voltage of −75 mV (close to the Fermi level). In Figure 9a, a 20 × 20 nm2 area of fluorinated graphene is shown. The surface is covered by various defects, corrugations, etc. Standing waves with different structures near the defect areas form these corrugations. The fast Fourier transformation of Figure 9a is shown in Figure 9d, which clearly shows the first Brillouin zone of the hexagonal lattice and the K points. These K points are related to the standing waves pattern from the intervalley scattering [50]. All the defects in the surface are connected with the standing waves. Larger defects (zoomed in Figure 9b) show standing waves as straight lines similar to those observed at step edges. Thus, it can be concluded that the fluorinated graphene remains metallic [51]. In Figure 9c, the bright features are associated with the fluorine atoms. Combining this with the observation of standing waves related to delocalized electrons at the conjugated sp2 bonds from Figure 9b, it can be concluded that the fluorination happens near the defect sites created by the ion irradiation.

Figure 10. Ab initio density functional theory (DFT) calculation models of fluorinated graphene. Di-vacancy model (a) and hole-defect model (b), 0.95 nm in length, are based on the STM observation. Binding energies are shown in Table 1.

Structure               Eabs (eV)   Hybridization
Di-vacancy at site A    −2.86       sp3
Di-vacancy at site B    −2.25       sp3
Hole-defect site C      −5.64       sp2
Hole-defect site D      −2.18       sp3

Table 1. Adsorption energies of the fluorine adatom on pristine graphene as well as the edge carbon atoms surrounding the two defects. Reprinted with permission from Li et al. [49].

We have also performed ab initio density functional-based calculations to find out the fluorine adsorption characteristics on defected graphene. We have used two models of defected graphene for our calculations, as shown in Figure 10. These models are: (i) a divacancy model and (ii) a hole-defect model. These are the two types of models that can be seen in the STM images. In these two types of defects, there are only four possible places for single fluorine adatoms to be adsorbed. These four places are marked as sites A–D. In Table 1, we have tabulated the energetics.
Our calculations show that the adsorption energy of the fluorine adatom on pristine graphene is very high compared to the di-vacancy and the hole defects. It implies that the fluorine atoms are prone to react with the carbon atoms surrounding the defect sites. At site C, the carbon atom is radicalized due to the presence of dangling bonds and hence has a very low adsorption energy for fluorine adsorption. In this case, the C–F bond is planar with a bond length of 0.136 nm, typical for sp2 hybridization. The C–F bonds at the other sites are all out of plane (perpendicular to the graphene lattice) and have sp3 hybridization. This strong bond between dangling bonds and fluorine atoms implies that different gases could be utilized to functionalize graphene. In conclusion, we have shown an experimental technique to design site-selective local fluorination using high kinetic energy ion irradiation and simultaneous XeF2 gas injection. Our method opens up the possibility of functionalizing graphene locally with a wide range of other functional materials.

4. Conclusion

From the discussions of Fe adatom adsorption at the partially hydrogenated channels, we conclude that the magnetic adatoms in the zigzag channel interact quite substantially as compared to the armchair channel. The response of the two channels in the presence of magnetic impurities is quite different, viz., localized (delocalized) in the armchair (zigzag) channel. In the semiconducting armchair channel, the magnetic coupling is weakly antiferromagnetic. However, in the delocalized zigzag channel, a relatively stronger ferromagnetic coupling can be observed. Hence, it may be possible to design a magnetic graphene lattice by depositing suitable magnetic impurities by means of scanning tunneling microscopy tips, which can lead to the possibility of designing ultrathin magnetic devices. We have also studied how defects in graphene affect the gas sensing properties. The defects are created using Ga+ ion irradiation. The defected graphene shows higher conductivity changes in the presence of NO2 gas when compared to the pristine graphene. Hence, one can conclude that the defected graphene has higher sensitivity in gas detection. The NO2 gas molecules bind strongly with SW defects in graphene, which changes the local electronic structure and enhances the transport properties. We have also demonstrated how defects in graphene can be used for various important applications, for example, spintronics and gas sensing. The presence of defects modifies the structural and electronic properties of the 2D material as well as the binding entities. The understanding of these phenomena can be achieved by materials-specific theoretical methods. From the experimental side, the controlled nanoengineering of defects may lead to novel applications and should be pursued seriously in the near future.

Cite this chapter: Soumyajyoti Haldar and Biplab Sanyal (October 12th 2016). Defects in Graphene and its Derivatives, Recent Advances in Graphene Research, Pramoda Kumar Nayak, IntechOpen, DOI: 10.5772/64297.
European physicists have won the race to observe zitterbewegung, the violent trembling motion of an elementary particle that was predicted by Erwin Schrödinger in 1930. To observe this phenomenon, the team simulated the behaviour of a free electron with a single, laser-manipulated calcium ion trapped in an electrodynamic cage. They took this approach because it is currently impossible to detect the quivering of a free electron, which has an amplitude of just 10⁻¹³ m and a frequency of 10²¹ Hz. Computational simulations are also ruled out, because today's computers have insufficient power and memory capabilities. The researchers claim that their triumph may also serve as an important step towards using trapped ions and atoms to simulate high-temperature superconductivity, magnetism and even black holes.

Relativistic realization

According to Christian Roos at the University of Innsbruck, Austria, one of the keys to success was to make their non-relativistic ion behave as if it was a relativistic particle. This is crucial because zitterbewegung is predicted by the Dirac equation, which describes relativistic quantum mechanics. Roos did the work along with colleagues at Innsbruck and the University of the Basque Country. "When the right conditions are met, the Schrödinger equation that describes this ion as a quantum system looks identical to the Dirac equation of the free electron," he explained. The trapped, laser-manipulated ion can then be studied as an analogue of a relativistic free electron. Calcium ions were chosen because they can be excited with visible wavelength lasers. "In addition, calcium's level structure is sufficiently simple to allow the experimentalist a near-perfect control over the internal states of the ion, but complex enough to carry out the quantum measurements needed for inferring the position of the particle." Simulations begin by putting the calcium ion into a particular quantum state. This is allowed to evolve for a certain time, before the researchers measure the position of the ion.

Tiny movements

"In these measurements the particle moves by much less than the wavelength of visible light, so we cannot directly use an imaging technique to determine the position of the ion," explains Roos. "Instead, we use a suitably tailored laser-ion interaction that maps the information about the position of the particle onto the internal states of the ion." The ion's position is then determined from its internal state, and this uncovers the quivering motion. The act of measuring the ion's position collapses its wave function, so the researchers have to reconstruct the desired initial wave function for every single measurement. This process is relatively quick, however, and they are able to carry out 50 experiments per second. Adjusting the output of the laser alters the simulated particle's kinetic energy to rest-mass energy ratio, and opens the door to studies of relativistic and non-relativistic physics. The researchers found that changes to the particle's effective mass while its momentum was kept constant led to the disappearance of zitterbewegung in the non-relativistic and highly relativistic limits (large and small effective masses, respectively). However, the quivering motion was clearly present in the regime between these limits.
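For orientation (these are standard textbook estimates, not figures taken from the paper itself), the scales quoted at the top of the article follow from the electron's Compton wavelength and rest energy:

```latex
% Zitterbewegung amplitude ~ reduced Compton wavelength of the electron:
\[
  \frac{\hbar}{m_e c} \;\approx\;
  \frac{1.05\times10^{-34}\,\mathrm{J\,s}}
       {(9.11\times10^{-31}\,\mathrm{kg})(3.0\times10^{8}\,\mathrm{m/s})}
  \;\approx\; 3.9\times10^{-13}\,\mathrm{m}
\]
% Zitterbewegung (angular) frequency ~ twice the electron rest energy over hbar:
\[
  \omega_{\mathrm{ZB}} \;\approx\; \frac{2\, m_e c^{2}}{\hbar}
  \;\approx\; 1.6\times10^{21}\,\mathrm{rad/s}
\]
```

That is why the real effect sits at around 10⁻¹³ m and 10²¹ Hz, far beyond what any direct imaging technique can resolve, and why an analogue simulation with a trapped ion is attractive.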
Inspirational work

Jay Vaishnav from Bucknell University, Pennsylvania, says that the work of Roos and his co-workers represents a major step forward for quantum mechanical simulations, and she believes that it will inspire other research groups to attempt similar things. She says that the building of an atomic version of the Datta-Das transistor – a spin-based device that has never successfully been built with electrons – could lead on from Roos' work. "The workings of this transistor are based on creating a relativistic set-up using cold atoms." The work is reported in Nature.
zbMATH: Some applications of fractional equations. (English) Zbl 1041.35073

The authors deal with the application of fractional equations in physics. They consider the kinetic equation with fractional derivatives

∂P(x,t)/∂t = ∂²P/∂x² + ε ∂^α P/∂|x|^α,   1 < α < 2,   (1)

where ε is a constant and ∂^α P/∂|x|^α is the Riesz derivative. The authors study the competition between normal diffusion and diffusion induced by fractional derivatives for (1). It is shown that for large times the fractional-derivative term dominates the solution and leads to power-type tails. Moreover, a corresponding fractional generalization of the Ginzburg-Landau and nonlinear Schrödinger equations is proposed.

MSC: 35Q55 NLS-like (nonlinear Schrödinger) equations; 26A33 Fractional derivatives and integrals (real functions)
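Because the Riesz derivative simply multiplies a Fourier mode by −|k|^α, equation (1) can be solved spectrally. The sketch below (illustrative parameters and grid, not taken from the paper) shows the idea: at short times the k² term (normal diffusion) dominates, while at large times the ε|k|^α term controls the small-k behaviour and produces the heavy, power-type tails mentioned in the review.

```python
import numpy as np

# Illustrative spectral solution of the fractional kinetic equation
#   dP/dt = d^2P/dx^2 + eps * d^alpha P / d|x|^alpha,   1 < alpha < 2,
# using the fact that the Riesz derivative acts as -|k|^alpha on Fourier modes.
# eps, alpha and the grid are arbitrary illustrative choices.
eps, alpha = 0.5, 1.5
N, L = 8192, 800.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

P0 = np.exp(-x**2)                         # localized initial condition
P0 /= np.sum(P0) * dx                      # normalize to unit total probability
P0_hat = np.fft.fft(P0)

def P(t):
    """P(x, t): each Fourier mode decays as exp(-(k^2 + eps*|k|^alpha) * t)."""
    return np.real(np.fft.ifft(P0_hat * np.exp(-(k**2 + eps * np.abs(k)**alpha) * t)))

# For large t the |k|^alpha term dominates near k = 0, giving heavy tails
# P ~ |x|^-(1+alpha) instead of the Gaussian tails of normal diffusion.
for t in (1.0, 10.0, 100.0):
    Pt = P(t)
    i1, i2 = np.argmin(np.abs(x - 50.0)), np.argmin(np.abs(x - 100.0))
    print(f"t = {t:6.1f}   P(x=50) = {Pt[i1]:.3e}   P(x=100) = {Pt[i2]:.3e}")
```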
On Determinism
By Sean Carroll | December 5, 2011 10:19 am

Back in 1814, Pierre-Simon Laplace was mulling over the implications of Newtonian mechanics, and realized something profound. If there were a vast intelligence — since dubbed Laplace’s Demon — that knew the exact state of the universe at any one moment, and knew all the laws of physics, and had arbitrarily large computational capacity, it could both predict the future and reconstruct the past with perfect accuracy. While this is a straightforward consequence of Newton’s theory, it seems to conflict with our intuitive notion of free will. Even if there is no such demon, presumably there is some particular state of the universe, which implies that the future is fixed by the present. What room, then, for free choice?

What’s surprising is that we still don’t have a consensus answer to this question. Subsequent developments, most relevantly the probabilistic nature of predictions in quantum mechanics, have muddied the waters more than clarified them. Massimo Pigliucci has written a primer for skeptics of determinism, in part spurred by reading (and taking issue with) Alex Rosenberg’s new book The Atheist’s Guide to Reality, which I mentioned here. And Jerry Coyne responds, mostly to say that none of this amounts to “free will” over and above the laws of physics. (Which is true, even if, as I’ll mention below, quantum indeterminacy can propagate upward to classical behavior.) I wanted to give my own two cents, partly as a physicist and partly as a guy who just can’t resist giving his two cents. Echoing Massimo’s structure, here are some talking points:

* There are probably many notions of what determinism means, but let’s distinguish two. The crucial thing is that the universe can be divided up into different moments of time. (The division will generally be highly non-unique, but that’s okay.) Then we can call “global determinism” the claim that, if we know the exact state of the whole universe at one time, the future and past are completely determined. But we can also define “local determinism” to be the claim that, if we know the exact state of some part of the universe at one time, the future and past of a certain region of the universe (the “domain of dependence”) is completely determined. Both are reasonable and relevant.

* It makes sense to be interested, as Massimo seems to be, in whether or not the one true correct ultimate set of laws of physics is deterministic or not. He argues that we don’t know, and that’s obviously right, since we don’t know what the final theory is. But that’s a rather defeatist attitude all by itself; we can look at the theories we do understand and try to draw lessons from them.

* Classical mechanics, which you might have thought was deterministic if anything was, actually has some loopholes. We can think of certain situations where more than one future obeys the equations of motion starting from the same past. This is discussed a bit in the Stanford Encyclopedia of Philosophy article on causal determinism. But I personally don’t find the examples that impressive. For one thing, they are highly non-generic; you have to work really hard to find these kinds of solutions, and they certainly aren’t stable under small perturbations. More importantly, classical mechanics isn’t right; it’s just an approximation to quantum mechanics, and these finely-tuned classical solutions would be dramatically altered by quantum effects.
* General relativity is a classical theory, so it’s also not correct, but we don’t have the final theory of quantum gravity so it’s worth a look. As Massimo points out, there are good examples in GR where traditional global determinism breaks down; naked singularities would be an example. (Basically, determinism breaks down when information can in principle “flow in” from a singularity or boundary that isn’t included in “the whole universe at one moment of time.”) We might sidestep this problem by arguing that naked singularities aren’t physical, which is quite reasonable. But there are much more benign examples, such as anti-de Sitter space — a maximally symmetric spacetime with a negative cosmological constant. This universe has no singularities, but does have a boundary at infinity, so a single moment of time only determines part of the universe, not the whole thing. On the other hand, like the classical-mechanics examples alluded to above, this seems like a technicality that can be cleared up with a slight change of definition, e.g. by imposing some simple boundary condition at infinity. Much more importantly, these kinds of GR phenomena are very far away from our everyday lives; there’s really no relevance to discussions of free will. GR violates global determinism in the strict sense, but certainly obeys local determinism; that’s all that should be required for this kind of discussion.

The traditional (“Copenhagen”) view is that QM is truly non-deterministic, and that probability plays a central role in the measurement process when wave functions collapse. Unfortunately, this process is extremely unsatisfying, not just because it runs contrary to our philosophical prejudices but because what counts as a “measurement” and the quantum/classical split are extremely ill-defined. Almost everyone agrees we should do better, despite the fact that we still teach this approach in textbooks. Someone like Tom Banks would try to eliminate the magical process of wave function collapse, but keep probability (and thus a loss of determinism) as a central feature. There is a whole school of thought along these lines, which treats the quantum state as a device for tracking probabilities; see this excellent post by Matt Leifer for more details.

The other way to go is many-worlds, which says that the ordinary deterministic evolution of the Schrödinger equation is all that ever happens. The problem there is comporting such a claim with the reality of our experience — we see Schrödinger’s cat to be alive or dead, not ever in a live/dead superposition as QM would seem to imply. The resolution is that “we” are not described by the entire quantum state; rather, we live in one branch of the wave function, which also includes numerous other branches where different outcomes were observed. This approach (which I favor) restores determinism at the level of the fundamental equations, but sacrifices it for the observational predictions made by real observers. If I were keeping a tally, I would certainly put this one in the non-determinism camp, for anyone interested in questions of free will.

* Then there is the question of whether or not the lack of determinism in QM plays any role at all in our everyday lives. When we flip a coin or play the lottery, one might think that the relevant probabilities are “purely classical” — i.e.
they stem from our lack of knowledge about the state of the muscles and nerves in my hand and the wind and the coin that is about to be flipped, but if I knew all of those things I could make a perfectly deterministic prediction about what would happen to the coin. (Indeed, a well-trained magician can flip a coin and get whatever result they want.) This is actually a tricky problem, to which the answers aren’t clear. Yes, there may be a level of classical description in terms of a probability distribution; but where does that probability distribution come from? Physicists disagree about whether or not quantum mechanics plays a crucial role here. Since I have friends in high places, this weekend I emailed Andy Albrecht, who answered and brought David Deutsch into the conversation. They both argue — plausibly, although I’m not really qualified to pass judgment — that essentially all classical probabilities can ultimately be traced down to the quantum wave function. And indeed, that this reasoning provides the only sensible basis for talking about probabilities at all! (David mentions that Lev Vaidman seems to disagree, so it’s not uncontroversial by any means.) They are both, in other words, firmly anti-Bayesian in their view on probability. A good Bayesian thinks that probabilities are always statements about our fundamental ignorance concerning what is “really” going on. Albrecht and Deutsch would argue that’s not true; probabilities are ultimately always statements about the wave function of the universe. If they’re right — and again, it looks plausible, but I need to think about it more — then QM effects are indeed of crucial importance in accounting for our inability to predict the future in the everyday world.

* I should say something about chaos, which always comes up in these discussions. In classical mechanics, even when the underlying model is perfectly deterministic, it can often be the case that a small uncertainty in our knowledge of the initial state can lead to large uncertainty in the future/past evolution. (E.g. for the tumbling of Hyperion.) This is sometimes brought up as if it causes problems for determinism: “since tiny mistakes propagate, you couldn’t realistically predict the future anyway.” This is about as irrelevant as it is possible to be. The Laplacian viewpoint was always that if you had perfect information, you could predict the past and future. But that was always a statement of principle, not of practice. Of course, in practice, you have nowhere near enough information to make the kinds of calculation that Laplace’s vast intellect likes to do. That was perfectly obvious long before the advent of chaos theory. The correct statement is “in a classical deterministic system, with perfect information and arbitrary computing power you can predict the future in principle, but not in practice,” and that statement is completely unaltered by an understanding of chaos.

So where does that leave us? My personal suspicion is that the ultimate laws of physics will embody something like the many-worlds philosophy: the underlying laws are perfectly deterministic, but what happens along any specific history is irreducibly probabilistic. (In a better understanding of quantum gravity, our notion of “time” might be altered, and therefore our notion of “determinism” might be affected; but I suspect that there will still be some underlying equations that are rigidly obeyed.) But that’s just a suspicion, not anything worth taking to the bank.
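As a concrete illustration of the sensitive dependence described in the chaos paragraph above, here is a minimal sketch (using the logistic map rather than Hyperion; parameters are arbitrary): two trajectories that start within one part in 10¹² of each other follow the same deterministic rule exactly, yet become completely uncorrelated after a few dozen steps.

```python
# Minimal illustration (not from the post): sensitive dependence on initial
# conditions in the deterministic logistic map x -> r*x*(1-x).
r = 4.0
x_a, x_b = 0.2, 0.2 + 1e-12      # two initial conditions differing by ~1 part in 10^12

for n in range(61):
    if n % 10 == 0:
        print(f"step {n:3d}   x_a = {x_a:.6f}   x_b = {x_b:.6f}   |diff| = {abs(x_a - x_b):.2e}")
    x_a = r * x_a * (1.0 - x_a)
    x_b = r * x_b * (1.0 - x_b)
```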
For everyday-life purposes, we can’t get around the fact that quantum mechanics makes it impossible to predict the future robustly. Of course, this is all utterly irrelevant for questions of free will. (I’m sure Massimo knows this, but he didn’t discuss it in his blog post.) We can imagine four different possibilities: determinism + free will, indeterminism + free will, determinism + no free will, and indeterminism + no free will. All of these are logically possible, and in fact all are beliefs that some people actually hold! Bringing determinism into discussions of free will is a red herring.

It matters, of course, how one defines “free will.” The usual strategy in these discussions is to pick your own definition, and then argue on that basis, no matter what definition is being used by the person you’re arguing with. It’s not a strategy that advances human knowledge, but it makes for an endless string of debates. A better question is, if we choose to think of human beings as collections of atoms and particles evolving according to the laws of physics, is such a description accurate and complete? Or is there something about human consciousness — some strong sense of “free will” — that allows us to deviate from the predictions that such a purely mechanistic model would make?

If that’s your definition of free will, then it doesn’t matter whether the laws of physics are deterministic or not — all that matters is that there are laws. If the atoms and particles that make up human beings obey those laws, there is no free will in this strong sense; if there is such a notion of free will, the laws are violated. In particular, if you want to use the lack of determinism in quantum mechanics to make room for supra-physical human volition (or, for that matter, occasional interventions by God in the course of biological evolution, as Francis Collins believes), then let’s be clear: you are not making use of the rules of quantum mechanics, you are simply violating them. Quantum mechanics doesn’t say “we don’t know what’s going to happen, but maybe our ineffable spirit energies are secretly making the choices”; it says “the probability of an outcome is the modulus squared of the quantum amplitude,” full stop. Just because there are probabilities doesn’t mean there is room for free will in that sense.

On the other hand, if you use a weak sense of free will, along the lines of “a useful theory of macroscopic human behavior models people as rational agents capable of making choices,” then free will is completely compatible with the underlying laws of physics, whether they are deterministic or not. That is the (fairly standard) compatibilist position, as defended by me in Free Will is as Real as Baseball. I would argue that this is the most useful notion of free will, the one people have in mind as they contemplate whether to go right to law school or spend a year hiking through Europe. It is not so weak as to be tautological: we could imagine a universe in which there were simple robust future boundary conditions, such that a model of rational agents would not be sufficient to describe the world. E.g. a world in which there were accurate prophecies of the future: “You will grow up to marry a handsome prince.” (Like it or not.) For better or for worse, that’s not the world we live in. What happens to you in the future is a combination of choices you make and forces well beyond your control — make the best of it!
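The “modulus squared of the quantum amplitude, full stop” rule can be made concrete with a toy calculation (made-up amplitudes, chosen to anticipate the 1:√2 question raised in the comments below): probabilities come from normalizing the squared moduli of the branch amplitudes, and nothing in that prescription leaves room for an extra choosing agent.

```python
import numpy as np

# Schematic Born-rule bookkeeping with made-up amplitudes: outcome probabilities
# are the squared moduli of the (normalized) amplitudes, nothing more.
amplitudes = np.array([1.0, np.sqrt(2.0)], dtype=complex)   # irrational 1 : sqrt(2) amplitude ratio
probs = np.abs(amplitudes)**2
probs /= probs.sum()

for label, pr in zip(["outcome A", "outcome B"], probs):
    print(f"{label}: p = {pr:.4f}")      # prints 1/3 and 2/3
```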
CATEGORIZED UNDER: Philosophy, Science, Top Posts • http://mmcirvin.livejournal.com/ Matt McIrvin Collins’ position, or a logically equivalent statement about free will, seems superficially plausible as long as you’re only looking at values of a single operator, like the position of a particle. The distribution has to have a certain shape over the long term–that leaves a lot of wiggle room for different patterns of results, right? You could do a lot with that! God could do a lot with that! Not so. QM makes predictions about the distribution of values of any observable operator, not just x. God, or free-will power, would have to work in ways so mysterious that they cannot be detected in the distribution of values of any observable quantity. That’s actually a pretty powerful constraint. • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean That’s a good way of putting it, yes. • Physicalist Yes, the real issue isn’t whether freedom is compatible with determinism, it’s whether freedom is compatible with the completeness of physics (or, if one rejects physicalism, with the completeness of natural laws). I would certainly put [many worlds] in the non-determinism camp, for anyone interested in questions of free will. Really? It strikes me as even more deterministic than classical mechanics in a way — it seems like on this account it would be nearly impossible to avoid any action (since in some world that action will occur). At least given classical determinism I can choose to perform some actions and avoid others. With many worlds, it will (often? always?) be the case that the action both will be performed and won’t be performed. I suppose choices will make a difference to the extent that quantum uncertainties wash out and are irrelevant to large-scale bodily behavior, but given the multiplicity of real worlds, it’s not really clear how much of a difference this makes. It’s pretty hard to make sense of rationality and morality in a many-worlds scenario. • http://scienceblogs.com/startswithabang/ Ethan Siegel Oh please. You get one three-body collision in there — in the past or the future — and all your predictive power is gone. And surely if Newton knew this, LaPlace knew it too. • http://mmcirvin.livejournal.com/ Matt McIrvin Greg Egan wrote some science-fiction stories set in a world in which post-human beings made sure their brains were specially engineered such that their choices would never be indeterminate on the quantum level, because they felt that only then would their decisions really matter! • Physicalist Ethan Siegel says: You get one three-body collision in there . . . and all your predictive power is gone. That’s why Sean listed “arbitrary computing power” in there. We’re interested in the metaphysics, not in our practical abilities to predict. (Which is also why Sean rightly dismisses chaos as irrelevant.) The relevant point is that the behavior does follow the physical laws. And note that even though we can’t solve the three-body problem to predict future behavior, once we have the behavior in hand, we can confirm that it didn’t violate the laws. If I give you the solution, it’s generally pretty straightforward to confirm that it is indeed a solution to the equations of motion. That’s all we need to establish the completeness of physics. • http://scienceblogs.com/startswithabang/ Ethan Siegel No physicalist @6, not the “three body gravitational problem,” but a simultaneous collision between three particles. 
You know, where particles "1", "2" and "3" collide, each with some initial momentum, simultaneously. That, classically, is a totally unpredictable system. As you can imagine, if "1 hits 2 which then hits 3" you get a different answer than if "3 hits 2 which then hits 1." But if "1, 2, and 3 collide simultaneously," your system is under-constrained, and there are multiple possible solutions to what the final momenta of the three particles will be. Classically. It’s non-deterministic, and — like I said — LaPlace surely knew this. • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean Ethan, as I said in the post, there are non-deterministic problems in classical mechanics (albeit they appear to be a set of measure zero). Whether Laplace knew this is an empirical question, about which I don’t have any data. But he certainly did write the famous passage about a vast intelligence, as quoted in Wikipedia. • max The free will discussion has never really seemed that interesting for reasons that Sean outlines above. Everyone has a different definition, and everyone just seems to talk over each other. We would really need a robust theory of consciousness before we’d be able to tackle free will in a more meaningful way, and we’re nowhere near that. For what it’s worth, the compatibilist view seems like the sensible way to think about it. If you want to do something and you choose to do it and you do do it, then you’ve acted freely. It doesn’t matter whether your base desires are “freely” chosen or not (whatever that might mean). • Physicalist “a simultaneous collision between three particles.” Ah. Then I take it that this will be a case like the one that Norton discusses, and that Sean refers to in the S.E.P. entry. Then the relevant question (as Sean points out) is whether such a scenario is realistic enough to worry about. Two issues: (1) Does the real world ever admit of actually instantaneous interactions of this sort? As you say, you get different answers depending on which collision happens first — but if as a matter of fact one does occur first, then the outcome is determined (though we might not be in a position to predict that outcome). So, are there real collisions in which three bodies really collide at precisely the same instant? Of course, when you try to get down to such fine-grained detail, you notice that classical mechanics’ description of colliding rigid bodies isn’t exactly right — so the answer is that there are not collisions of this sort in the real world. Which leads us to the second issue: (2) Given that the real world is quantum mechanical, what should we say about determinism? Sean has given his answer above. The main thing that I would add is that we shouldn’t rule out non-local hidden variable interpretations like Bohm’s. It seems to me that such a theory is robustly deterministic. (And I’m inclined to say that many worlds and spontaneous collapse are just as non-local as Bohm’s theory, but that’s an argument for another day.) • http://scienceblogs.com/startswithabang/ Ethan Siegel I don’t think you give the simple example of a three-body collision its due; if you were willing to consider it, you’d arrive at the same non-determinism that arises in the final momenta in neutron decay. While the examples you link to in the Stanford Encyclopedia may be of measure zero, this one isn’t; it’s far more general than the very specific example given there.
You can take a simultaneous collision between any three classical objects with a combination of any initial momenta, and your final state is not determined. LaPlace didn’t know about the densities and energies of the early stages of the Big Bang (where the three-body collision, at those high densities, are simultaneous if one considers timescales smaller than the Planck time to be simultaneous), nor of simple matter-antimatter processes (electron-positron annihilation, treating it as classically as possible), but if one is willing to accept these basic things that happen, you cannot keep determinism even if you don’t step into the quantum realm. Not saying LaPlace didn’t “forget” (or ignore) a 3-body collision when he wrote his passage, just that — given some very basic things that we know — that argument holds no water, even classically (that is, without any reference to or consideration of a quantum mechanical wavefunction). • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean Ethan, three particles in three dimensions describe an eighteen-dimensional phase space. The constraint that all three particles are at the same location at some time is a six-dimensional constraint (choose one position, the other two positions are determined). We’re left with a twelve-dimensional submanifold of an eighteen-dimensional space. That is bigger than a point, but is certainly a set of measure zero in the larger space. Said another way, more directly relevant to this discussion: a generic perturbation in phase space moves you off the constraint surface. Everyone agrees that classical mechanics allows for non-deterministic evolution, but it’s not the generic case, so I’m not sure what the argument is here. • Pingback: On Determinism | Quantum Pie with Krister Shalm • Dan L. Hmm, a really interesting point that gets to the core of the failures of classical mechanics that led to both relativity and QM. Einstein was pretty explicit that a lot of his reasoning about special relativity came from questions about what “simultaneity” meant. And, although I don’t have the acumen to fairly judge, it seems to me like demanding that A, B, and C collide simultaneously violates the uncertainty principle (you are demanding absolutely zero error in your time measurements of the collisions of A, B, and C pairwise). Would you mind providing some kind of citation or link where I can read more, though? Wikipedia doesn’t seem to treat this particular version of the “three body problem” on the relevant page. I’d be really curious to hear more about the conversation with Albrecht and Deutch, Sean. I’ve been trending Bayesian (I guess the pun is intended…sorry folks) but I’d be love to hear some of the arguments against. • http://rohanmedia.co.uk Rohan So in Many-worlds free-will. Consciousness would be some 5D snake like thing branching when a conscious action takes place? • Somite Whatever biologists make out of consciousness and free will my feeling is that it will have very little to do with physics and specially QM. Just like nature abhors a vacuum biology abhors indeterminacy. You can see this on biological process that could involve quantum phenomenona but don’t, like the electron transport chain and photoreceptors. Both of these processes have quite deterministic outputs. Consciousness will be a matter of competition and selection, with an aggregator that measures the output of different neuronal constellations that represent each competing thought. 
This aggregator will turn out with something as mundane as the level of neurotransmitter or number of active synapses. It will not involve physics or QM in an interesting way. • Pingback: Determinism and free-will « Only Yes • Charlie @Somite Says #16, On the contrary, indeterminacy is sometimes useful in biological systems. Molecular signaling pathways can be “engineered” to be more or less deterministic versus stochastic, and there are probably good reasons (i.e., fitness benefits) for both kinds of circuit function. Just Google Scholar “lambda phage stochastic” for examples. My own bias is that this sort of stochastic understanding will play an important role in any good model of behavior/decision making/consciousness/free will/whatever-you-want-to-call-it (though this is still an open question). I agree that there is no need for biologists to call on QM directly, as yet, but the mathematics that underlie it may be useful (perhaps have already been used? I’m no expert here) for developing stochastic kinetic models. • Somite I agree. Stochastic yes. QM no. When I wrote indeterminacy I meant that of QM and now I realize I meant something like uncertainty. • Tom Hi, I’ve been following your blog for a while and I love it. I am not a physicist, just an avid follower of physics. This is my first time submitting a comment. Sorry it’s a bit off-topic, but maybe it’s something you could spend a future post on? I imagine many of your readers would have the same question. There’s something about many-worlds that I’ve always wondered: does it explain probabilities, and if so, how? If I imagine a 50:50 choice, like Schrodinger’s Cat, then it makes sense that (after opening the box) the wavefunction contains 2 pieces, |alive⟩ and |dead⟩, and that I have a 50:50 chance of being part of either piece. One can certainly build up many rational-number probabilities with lots of 50:50 choices. However, I can change the setup so the probability is an irrational ratio, let’s say P(|alive⟩):P(|dead⟩) = 1:sqrt(2). How does many-worlds handle that case? The wavefunction still has two pieces, |alive⟩ and |dead⟩. They have different amplitudes, but the “me” that is part of each feels just as real. Yet there must be some way in which “I” am more likely to be the “me” that is part of |dead⟩ than |alive⟩. Does that arise out of the many-worlds program, or is it a further postulate, as in “My probability of existing in any branch of the wavefunction depends on the amplitudes at each branch.” • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean Tom– This is an excellent question, the subject of much current research. It is, at least, not perfectly obvious how to get probabilities out of the many-worlds interpretation. See e.g. There are promising approaches (e.g. from Deutsch, Wallace, Zurek, and others), but I don’t think there’s a consensus. Of course many other approaches “get” probability, essentially by putting it in by hand; MWI doesn’t have that freedom, since it’s just supposed to be the Schrodinger equation and nothing else. • Colin Bisset I used to argue about free will, but now I don’t. Whether the universe is deterministic or not is, as pointed out, pretty much irrelevant, because the possession of free will is a subjective state. You either feel free, or you don’t. It’s interesting to speculate on this though: If the universe is deterministic, then the conclusion from human behavior is that doing nothing is, evolutionarily speaking, advantageous.
If choice is a non-concept, it follows that the time my ancestors spent making decisions, i.e., not doing anything when confronted with a stimulus, increased their reproductive success. This thought pleases me more than freedom. • https://twitter.com/#!/sprawld sprawld Bizarrely I was just reading David Wallace, a philosopher with Oxford Flu (symptoms include feeling you keep splitting into multiple copies). His entry in this year’s Oxford Handbook on Quantum Mechanics is a good overview of some of these issues. He covers Many Worlds (Everett) generally, and the preferred basis problem, which he has a strong view on (spoiler alert: it’s emergent). For probability – which has been, and remains, a hotly debated issue – he sketches out the main arguments on various sides. Probability is far from ‘solved’, but Deutsch, Wallace et al. have shown pretty convincingly that you can derive probability amplitudes (mod squared) from MWI + decision theory or symmetry arguments. To answer Tom’s question, branches are a continuum in Quantum Theory, so irrational weightings are just as easy as 50:50. The fact that probability amplitudes (the Born Rule) can be derived from Everett means that no further postulates are required (if the derivation is valid of course!) As Sean pointed out (and Wallace agrees) there are a lot of questions with probability and determinacy, in Many Worlds or not. Arguably Deutsch’s derivation has put probability on a much firmer footing under Everett than we had before, giving a principled argument for uncertainty in a deterministic physics. • Katherine Regarding lawfulness and free will, does anyone have any thoughts on the following quote from Chapter One of Stephen Hawking’s “A Brief History of Time”: “Now, if you believe that the universe is not arbitrary, but governed by definite laws, you ultimately have to combine the partial theories into a complete unified theory that will describe everything in the universe. But there is a fundamental paradox in the search for such a complete unified theory. The ideas about scientific theories outlined above assume we are rational beings who are free to observe the universe as we want and to draw logical deductions from what we see. In such a scheme it is reasonable to suppose that we might progress ever closer toward the laws that govern the universe. Yet if there really is a complete unified theory, it would also presumably determine our actions. And so the theory itself would determine the outcome of our search for it! And why should it determine that we come to the right conclusions from the evidence? Might it not equally well determine that we draw the wrong conclusion? Or no conclusion at all?” So that is Hawking’s fundamental paradox…how is science possible if we aren’t “free rational beings”? Conway and Kochen raise a similar point in their paper, The Free Will Theorem: “It is hard to take science seriously in a universe that in fact controls all the choices experimenters think they make. Nature could be in an insidious conspiracy to ‘confirm’ laws by denying us the freedom to make the tests that would refute them. Physical induction, the primary tool of science, disappears if we are denied access to random samples.” • Rich Thanks to Matt for the clear explanation of why free will can’t be “hiding” in quantum probability. The necessity for such free will to leave a detectable signature in the distribution of results is indeed a strong constraint.
I wonder however if we can still hide free will there if we assume that it exists only in the waveform and disappears when the wave is collapsed upon measurement. Uncollapsed waves still have a real effect on the world. If free will also disappeared upon measurement then it could not be detected in the distribution pattern. • Ray Gedaly If I learned there was no such thing as free will, I would live my life very differently. :) • Baby Bones I think that the idea of determinism is not well defined. Furthermore, the idea of randomness is even less well defined. On the other hand, I think that uncertainty is something that we can easily quantify, and we deal with it daily in many ways. One thing I hate to think of is a point particle. Or rather, I hate even more thinking of many point particles. The chance of two point particles colliding is precisely zero unless you assume that they are capable of attracting each other to an infinite extent. But that infinite extent makes points just as nasty as naked singularities. I can accept that there are point-like phenomena associated with a finite force field and that picture breaks down on some scale, but I think that any subsequent renormalization only confirms that the picture is going to be wrong on some scale. Another thing I hate to think of is plane waves that go on forever. I hate even wave packets that are clumpy but have to be defined way out at infinity to make the math work out right. It is really hard for me to think about how one bit of one real-world wave correlates with another bit of itself, but it must do so to some extent or else it wouldn’t be much of a wave. I bet that the extent is limited and that its limited nature shows up in detectors as imprecise arrival times of peaks and troughs (or imprecise photon arrival times that cannot be attributed to the source). What I find odd is that the hydrogen atom behaves in ways that suggest mathematical things like Hilbert Space or a space of all possibilities exist in some way. I am certain these things are useful mathematical tools but I fear that they are otherwise physical nonsense. They cannot exist in this world any more than a point particle can. • Physicalist does anyone have any thoughts on . . Hawking’s fundamental paradox…how is science possible if we aren’t “free rational beings”? My thoughts: There’s absolutely no reason to think that physicalism (or determinism) in any way implies that we aren’t free or rational. Hawking asks whither the laws of physics might “not equally well determine that we draw the wrong conclusion.” Sure, they might, in one sense. But only in the sense that the laws of physics might determine that a mouse will walk backwards to the edge of a cliff and leap backwards to its death. Such a process is physically possible — and it might be determined by the laws of physics — but as a matter of fact, it doesn’t fit with the emergent structure (e.g., biological structure and psychological structure) of the actual world. Rationality and freedom (of the compatibilist sort) are emergent physical features, and the development of these features makes it more likely that if we keep working at it, we’ll get an account of physics that’s more or less right. • marshall “The relevant point is that the behavior does follow the physical laws.” The thing about chaos (or, more generally, situations that are arbitrarily sensitive to initial conditions) is that you cannot show that, at least, not fully. I think that that is why it is improper to dismiss chaos here. 
(In other words, physics is at its heart about the prediction of consequences of initial actions. If the knowledge of the initial conditions is lost through chaotic evolution, so that you cannot calculate effect from cause, then you cannot be sure that that the physical law is in fact fully being followed. Maybe it is some other law, which also leads to a loss of knowledge of initial conditions.) Note that this is a fundamental failure. No matter how accurate your measurements are, and how many digits your calculations carry, I can make you lose all precision. If there is a “ghost in the machine,” I suspect that’s where it would enter in. • Mitchell Porter sprawld says There seems to be a certain amount of credulous hype surrounding the decision-theory “derivation” of the Born rule for MWI. Sean Carroll describes it as “promising”, this commenter says it’s “convincing”. So I would like to point out a few things. First, if you are going to derive the Born rule from a multiverse theory, then the obvious thing to expect is that Born probabilities correspond to frequencies in the multiverse. If quantum mechanics says that outcome A is twice as probable as outcome B, that should mean that outcome A is twice as common in the multiverse, compared to outcome B. As things stand, MWI does not offer anything like this. Suppose we pick a basis and decompose the wavefunction, what do we get? *One* copy of each “world”, each of which has a complex number associated with it. If we decompose a reduced density matrix, instead of a full wavefunction, we at least get real numbers that look like probabilities, but so far, they’re still just numbers. Just because you now have a number 2/3 associated with the A-branch, and a number 1/3 associated with the B-branch, does not yet explain why we actually see outcome A twice as often as outcome B. In my opinion, the logical thing to do would be to bite the bullet of duplicated worlds, and say that there are 2 copies of the A branch, and 1 copy of the B branch. You could get this by having an ontological axiom, that the coefficient of all branches must be equal, so a branch with coefficient 2/3 is actually a sum of two identical state vectors, each with a coefficient of 1/3. Finally this gives you a multiverse with the right multiplicities: outcome A now really does exist twice as often as outcome B. However, the ideology of MWI advocates is usually that “the wavefunction is everything”, “the theory interprets itself”, etc., so the idea of a special axiom to (1) define what a world is (2) make sure that multiverse frequencies do match the Born rule, is unappealing to them. I can only think of one version of MWI which explicitly talks about duplicated or near-duplicated worlds in order to obtain the Born rule, and that’s Robin Hanson’s “mangled worlds”. (Zurek seems to be edging close to this option, but he doesn’t want to sign on to MWI, instead taking the absurd line that “existence requires redundancy”, so something only exists if it exists several times over.) Hanson’s mangled worlds, as I understand it, involves a dynamically determined preferred basis in which the required multiplicities are obtained by treating a world that is e.g. 99% |dead cat> + 1% |live cat> as a “dead cat” world. So Hanson’s individual worlds are themselves superpositions; a solution to MWI’s problems which might itself be regarded as problematic. 
But returning to the mainstream of MWI – if mainstream is defined by public visibility and excited advocacy – that does appear to be defined by this “decision-theory derivation” of the Born rule. So allow me to point out what’s going on here. This perspective involves an explicit repudiation of the idea that Born probabilities correspond to multiverse frequencies. In one of his papers, David Wallace says there is just no answer to the question “how many copies of a given world are there?” Instead, probabilities are to be obtained from decision theory. Let me sketch how this works. A common decision-theoretic concept is that you are to maximize your expected utility – the benefit you can expect to obtain, given an action – and this is equal to a weighted sum over the various possible outcomes. Each outcome has an intrinsic benefit (its “utility”), and it also has a probability. Winning $1 million in the lottery would be highly beneficial to you, but also highly improbable, which is why buying lottery tickets is not a way to maximize your *expected* utility… Maximizing your expected utility, for a decision theorist, defines rational behavior. So here, finally, we reach how the Deutsch-Wallace derivation of the Born rule is supposed to work. We will examine *rational behavior in the multiverse*, e.g. we will look at quantum game theory. The prescription, be rational, will tell us how we should act in quantum games; we know the intrinsic utilities of the various outcomes; so if we “divide out” the rationality ranking by the intrinsic utilities, the probabilities of the outcomes will be left over, and here we will recover the Born rule. I fear that in describing this procedure, I have failed to convey the utter absurdity of it. So let’s go back to the big picture. MWI advocates have failed to find a satisfactory way to demonstrate that their multiverse contains two times as many copies of “A” as it does of “B”. So rather than conclude that there is a problem with their theory, they instead conclude that there is a problem with the concept of probability, and cleverly propose to do away with the idea that probabilities have something to do with how often an event occurs. Instead, they shall argue that being rational in the multiverse will require you to act *as if* A has twice the probability of B… I think I’m still not conveying how absurd and desperate a dodge this is. In any case, I see many people talking about how the Deutsch-Wallace “derivation” is “promising” or “convincing”, and yet I don’t think they really understand what is being proposed, at a fundamental level – this logical inversion which makes probability dependent on rationality, rather than vice versa. Hopefully I have managed to enlighten a few people as to what’s really going on in their arguments. • Physicalist @ 29. marshall: “. . . you cannot be sure that that the physical law is in fact fully being followed . . . Typically our reasons for thinking that we have the relevant laws in hand don’t rest on generating an absolutely precise prediction of a final state from an absolutely precise specification of an initial state. Instead, we get close enough, and run things many times, and we eventually decide that the best explanation of the data is the claim that the system follows certain simple laws. What is often most important for our deciding whether some physical law holds is our knowledge of the domain of applicability of those laws. (Sean has discussed this in several places, e.g., here.) 
This allows us to say that even though a system (e.g., a double pendulum) might be chaotic — thus making it impossible to predict its exact behavior — it nevertheless is obeying the laws of classical mechanics. (And the fact that the mechanics tells us that the behavior will be chaotic gives us all the more reason to believe that we’ve got the laws right.) • Arun The usual reason for wondering if there is free will, is that the common notion of morality requires it. If there is no free will, the argument goes, how can we hold people responsible for their actions? Of course, if there is no free will, then our choice of whether people can be held accountable for their actions also vanishes. Our choice, pro or con, is predetermined, and there is no point worrying about it. • Axel ** What happens to you in the future is a combination of choices you make and forces well beyond your control — make the best of it! ** Forces beyond my control would be for example to die in an earthquake .. But that that happened to me was because I was born on earth .. That’s the point of view of Buddhists .. So control it and try to not be born (again) … ;-) • Cosmonut Free will, in any real sense, is dead if you accept determinism. Regardless of whether you can *predict* the future or not, the path of your life and the fate of humanity, is already laid out as surely as the orbit of the moon around the earth. You can *pretend* that you are making free choices that determine your future, but what choices you make are also determined by the laws of physics, as well as their consequences. • http://jbg.f2s.com/quantum2.txt James Gallagher Thanks to Mitchell Porter (#30) for the critique of the (attempted) Decision Theory based derivation of probability in MWI. Personally I think that any such derivation is doomed, for the simple reason that you cannot get (fundamental) probabilities out of a model unless you put (fundamental) probabilities in. This seems so trivially obvious that I am amazed that so many educated people believe the purely deterministic MWI is a sensible idea, unless they really believe in a (super)deterministic universe – and in that case it is not even possible to conclude that logic is correct – so the whole scientific enterprise would be pointless. But I think this debate always starts with the wrong emphasis – that determinism seems natural (due to Laplace argument etc) – whereas I would say it’s actually much more reasonable to accept that free-will is an obvious feature of the universe, at least since conscious life evolved. Is it not so staggeringly obvious that the behaviour of physical things on our planet is different from the deterministic behaviour on lifeless planets? A Poincaré recurrence cycle of the entire universe would probably happen more often than the Schrödinger equation would produce the works of Shakespeare. (Super)Determinism is clearly not how the universe works once conscious beings have evolved, and I don’t need an intensive study of tedious theological or philosophical works to deduce that (although I have been unfortunate enough to have wasted time studying some of these in the past) • http://www.naturalism.org Tom Clark On the block universe view, which I think you accept, the future (like the past and present) is fixed in 4D spacetime. I imagine many folks would suppose the block universe obviates any notion of real choice, since choices too are equally fixed in spacetime. 
“Real” choices, “real” freedom, they might suppose, require us to exist outside spacetime, exerting control over it. But since we exist within spacetime, freedom and control can only consist in our participating in certain sorts of fixed patterns in the block universe, those in which the outcomes we want follow from the actions we take in service to our desires. This gets elaborated at http://www.naturalism.org/spacetime.htm but I’d be interested to get your take on it. • Dave I think you are a little bit too hasty in dismissing the idea of simple robust future boundary conditions applying to our universe affecting the fate of “rational” agents. • Richard D. Morey How is free will “a useful theory of macroscopic human behavior models people as rational agents capable of making choices”? What constraint does free will make on predictions of behavior? Actually, free will is not a theory of behavior at all, since it (by definition) has no constraint. Unless you are using the words “useful” and “theory” in ways that they are not typically used in science, I see no way that your claim can be true. And if it were true, why not describe other complex systems as having “free will”, like say, the weather? Certainly the ancients thought the weather was driven by will. Why are humans any different than any other complex system that makes “free will” a “useful theory”? “It wanted to rain today, but it couldn’t quite make the decision. Weather here is so indecisive.” • Physicalist You’re assuming a libertarian notion of freedom, which is rejected by Sean (and by the majority of people who have thought carefully about this topic). What Sean is advocating is a compatibilist account of freedom, which can indeed be seen a result of constraints. • Pingback: Sean Carroll on free will « Why Evolution Is True • gr8hands Cosmonut @34 is correct to point out that if hard determinism is correct, then all of existence is as pre-determined as a movie on a DVD — from any frame of the movie, there is only one possible next frame. No choices are possible. I believe free will (and consciousness) is an emergent property — like “solid”. You can’t show me under the microscope anything solid. In fact, the better the microscope, the less “solid” something is. Yet, you will hurt yourself walking into the wall trying to act on the knowledge that it is 99.999999999+% unsolid. • Richard D. Morey No, I’m not. Free will has no constraint. That doesn’t mean that it isn’t *compatible* with constraint (which is the compatibilist position) but rather that it doesn’t, as a “theory” *offer* any constraint, which means it cannot be a useful theory of human behavior. • Physicalist Re: 42 The compatibilist account of freedom usually claims that actions are free just in case they are caused by an agents desires, commitments, personality, etc. and they are not the result of external coercion or force. I don’t see that as a “theory of behavior” this differs importantly from other psychological features. • Richard D. Morey Re: 43 That is not a useful theory of behavior, that is a definition. • Chris W. Katherine (#24), From pretty early on in his career, the philosopher of science Karl Popper emphasized precisely this point. (I’ll ignore Conway and Kochen’s use of the term “induction”. :) ) It is a deep issue; I would argue that it is the central issue. 
Another way of putting it is this: Does the notion of seeking and discovering (usually provisional) solutions to problems really mean anything in a universe that allows no room for making choices that are not predetermined by its past state plus the laws of physics? • Eric Smith Obviously as physical beings our actions are “determined” by the laws of physics. Nevertheless I would argue that we have free will in any practical sense, namely: (1) Given a choice between two alternatives, we are in fact capable of choosing either one; and (2) No outside observer is able to predict with 100% confidence which of the alternatives we will choose. As evidence I offer the following experiment, which you can do yourself: prepare two breakfast beverages (for convenience we will label them T and C). Also prepare some quantum mechanical system so that it is in a superposition with two equally likely outcomes upon measurement (e.g. an electron in a superposition of spin up and spin down). Perform the measurement in secret, and based upon the result drink one of the beverages (e.g. if the measurement shows spin up, drink T, otherwise drink C). At first blush this doesn’t seem to say much about free will, since you’re letting an outside event (the state of the electron) “determine” your choice. If you prefer, you can consider the system (you + electron) to be the agent, In any case, if you perform the experiment you do show that property (1) is true, namely that you are in fact capable of choosing either alternative. Moreover, if the measurement is secret and if our current understanding of quantum mechanics is correct, no outside observer can predict which beverage you will drink, so property (2) is also true. One can argue the metaphysics either way, but I think properties (1) and (2) together amount to free will in any practical sense. • Richard D. Morey Eric, your definition of free will is circular, given that you used the concept of “choosing” in point 1. Or perhaps you should clarify what “choose” means – can you define it in such a way that it doesn’t apply equally well to the quantum system you’ve described in your thought experiment? Does that system “choose” the state? • Pingback: Free will and determinism – the greatest show on earth | The Heretical Philosopher • Pingback: Determingasm. | Hooray Reality • Eric Smith Re: 47 You bring up a good point. I don’t see any circularity in the definition, but I can see that one might think the word “choose” implies more than it ought to. I simply meant that of the two possible outcomes (I drink C) or (I drink T), either outcome is possible. Yes I see that could also equally well apply to a quantum system, so it’s probably not a good choice (:-)) of words. Perhaps “choice” = “outcome” + “consciousness”. I have no problem with materialism, and am perfectly happy to agree that my “choices” are the product of the states and transitions of all the particles that make up my brain. Nevertheless I think it is an interesting thing that “I” (some system of particles) can act in very complicated ways which seem not to be predictable even in principle. I know that some people prefer to avoid the term “free will” because it seems to imply dualism, but we already have the perfectly good words “materialism” and “dualism”, and I’d hate to simply define “determinism” as “materialism”. 
• Pingback: I was compelled to post this | Pharyngula • http://n/a Frank Williams I am curious why nowhere in the several discussions of determinism & freewill is there any consideration of the idea that if all our behavior, including our thoughts, are the result of determinism – i.e. are predetermined to be whatever they are by brain structure, evolution, whatever – then all our reasoning pro and con about determinism or anything else is pointless, because we then think whatever we think not because we have correctly (or incorrectly) judged evidence etc., but merely because we are predetermined to think whatever we think. Seems to me that this means that IF determinism is true, then all reasoning is illusory. This doesn’t show that determinism is wrong – only that IF it is true, then we can’t have good reasons for anything (including determinism). • http://qpr.ca/blog/ Alan Cooper Frank of #52, I think Katherine’s quote from Stephen Hawking in #24 is an example of something pretty close to what you say is missing. • Chris W. Re #52, 53: Also see comment #45. • Richard D. Morey Re: 50 ‘Perhaps “choice” = “outcome” + “consciousness”’ If consciousness is only observing the outcome after the fact, as some of the neuroscientific evidence suggests in at least some cases, it seems strange to call it a “choice”. After all, I can observe other people’s actions after the fact too (or the measurement of the spin of an electron, after the fact) but that doesn’t mean I had any “choice” about these outcomes. So, consciousness of an outcome would not seem to be sufficient, unless you are comfortable in calling everything you observe the result of your “choices”. One might object that there’s something special about the fact that you’re observing yourself, but I’d argue that’s just an illusion; consciousness is just one part of a complex system “observing” the other parts. There seems to be no particular reason why this couldn’t just as well be said about any of the other complex systems we are a part of (such as social, or physical). • Pingback: A bit about free will « The Official MU SASHA Blog, Updated Daily • Haelfix Suppose I were to tell you that right now, I decided to post on this website, b/c my photomultiplier (with a very high filter) just clicked! Chances are, it was going to click at least once in the next 10 seconds, but I decided I would post if it was anytime in the next 5 seconds starting now. Now, it is completely clear that this isn’t just about classical Poisson statistics, even though I have no detailed knowledge about the state of the system emitting the photon and how my measuring apparatus observed the state. The experiments have been done in full detail, and there is an irreducible measure of probability within such a system that is simply due to the nature of quantum mechanics. The point is, it is very easy to make an arbitrarily large macroscopic change (posting or not posting on a website), based upon the details of a microscopic experiment, where we have reduced to a minimum all classical notions of measurement error. Thus, I maintain the case is pretty clear: nature is nondeterministic unless you believe in something like hidden variables (which are almost ruled out entirely). • Eric Smith Re: 50 We often use the word “choice” in contexts where free will or consciousness aren’t an issue, e.g. we will say that a computer playing chess will “choose” to make a particular move, or will “choose” a particular element of a set in a sorting algorithm.
I didn’t mean anything more by “choose” than that. • Richard D. Morey Re: 58 Then you didn’t mean ‘Perhaps “choice” = “outcome” + “consciousness”’, but rather, ‘Perhaps “choice” = “outcome”’? • http://jbg.f2s.com/quantum2.txt James Gallagher #46 Eric I don’t think that demonstrates free-will, since in a large sequence of “trials” you would drink T 50% of time, free-will would be more illustrated by you drinking T 100% of the time even when it seems (to an outsider) that either T or C could be drunk with equal probability. The signature of free-will is the regular appearance of statistically unlikely outcomes. So if you found the works of Shakespeare in written form on Mars you could conclude that it was put there by something exercising free-will – since the statistical likelihood of macroscopic pages forming with written text by random is so unlikely as to not be expected to occur in several lifetimes of the universe. In the very unlikely event that such a ‘miracle’ occurs – then unlucky us we may make a false attribution of free-will, but it is so unlikely as not to concern proper scientifically minded people. (In the same way poincare recurrence doesn’t contradict the 2nd law of thermodynamics in any scientifically relevant way) • Jon H I don’t see any reason to assume awareness would be instantaneous. When a computer CPU makes a “decision”, that isn’t instantly reflected outside of the CPU (on the screen, in RAM, etc). That’d take few more nanoseconds. After all, in the brain the decision would be made probably by some process of weighing alternative activation patterns, in a manner that isn’t necessarily tied into the verbal or visual centers. If that’s the case, it’d make sense for there to be a slight delay while the decision result was translated into language, or into a visualized image, or into an action. The mind may not be dualistic in the mind/body sense, but there’s also no reason to assume that it all works together as an atomic unit, with a decision being instantly reflected throughout the brain and available to be expressed. • Richard D. Morey Re: 61 As an experimental psychologist, I can tell you that 300ms is an eternity in behavioral response time studies. If it really took 300ms to turn a decision into a verbal response, then it would be really hard to get verbal responses in any paradigm less than, say, 350ms. But that’s not the case. You can’t wave it away that simply. • Pingback: Lacking “free will” does not negate moral responsibility | coelsblog • http://knotsinmythinking.wordpress.com/ Tom, Knots Is it possible that this is a discussion about whether physics can (even in theory) understand human freedom? It just seems to me that, within the context of the argument, what would resolve the issue in favour of the existence of free will would be a theory that explains it. Wouldn’t genuinely self-determined behaviour be exactly not this, ie not anything that you could explain, that wouldn’t conform to laws, that wouldn’t look like any form of behaviour in the inert universe. I think the argument over Free Will is actually an argument over the limits of the scientific method. The reason it continues to go around and around is that it is impossible for physics and biology to explain the human capacity for voluntary action, and yet that thought means science cannot answer every question, and that is unthinkable (from within the system) so it returns to the begining again. 
These endless iterations leave plenty of time for coming up with quasi-technical terms that make it all look like a very important discussion. As far as I’m concerned, we learn how to be free. You can’t learn a law of physics. Therefore, human freedom doesn’t have anything to do with physics. • David I would really like to see Sean respond to Mitchell Porter’s post about the deep problem of probability in MWI • Maurice Actually, this argument of determinism and free will is almost exactly analogous to a theological discussion stirred up by John Calvin in the 16th Century. His issue was with the apparent contradiction between the omniscience of a Christian god (his Laplace’s Demon) and the claim that human beings have free will (absence of predestination). The ensuing discussion about predeterminism, predestination and whether or not these contradict free will, were fascinating anticipations of Laplace’s dilemma, and ahead of their time philosophically. A key revelation from these discussions is (as indeed you observe) that it is possible for an omniscient being (or supercomputer) to be able to predetermine the outcome of a person’s choices without violating their free will (i.e. there is no predestination). This was the view held by the Catholic Church and opposed by Calvin (who concluded that there must be predestination). It appears that this makes one philosophical point on which you, Sean, and the Catholic church’s agree — even if you disagree that the existence of the Church’s particular “Laplace Demon” ;) • steven johnson Is there really any meaning to the word determinism that does not imply that whatever is being determined takes some particular value? I don’t think so. Thus, if the free will is not determined by something else, it doesn’t have any particular values. In coin tosses, the values are either heads or tails, and in physics there are measurable observables. It’s not quite certain what a will, free or otherwise consists on, but if whatever it is made of has no values, it is doubtful we can meaningfully say it exists. If it has self-determined values, we are positing a metaphysical entity which is pretty much indistinguishable from the soul. But if we postulate the will is not an entity but a process, we are left either with deterministic processess or interministic, i.e., probabilistic ones. By definition a deterministic process forming the “will” is unfree. But a probabilistic process means that the will is random. It seems inescapable to me that there’s a real problem in associating freedom of the will with the randomness of the will. You could get around this by treating probabilistic processes as determinate, while acknowledging the plain truth that individual trials are not. Fair coins come up heads 50% of the time, a very specific value, which as the opening question highlighted, is in fact a key aspect to determinism. (I like to think of determinism coming in three varieties: mechanism, stochasm and history.) But it sppears this is not an option. This seems to be a shame, because if any individual act of will is an outcome of a probabilistic process, the peculiar determinateness of probabilistic processes can provide the bias predictability we associate with personal character, while the inherently probabilistic nature of individual outcomes, specific acts, account for the equally real unpredictability. If the many clauses of the last sentence left it too obscure, think of it this way. The will plainly cannot be unconstrained. 
If Sean has an embarrassing need for latex for sexual fulfillment, he cannot will that he will be aroused in more socially acceptable. This is not reflection on Sean. I myself cannot reliably will myself to remember facts that I know! For both of us, I suppose exercising the will to decide to cultivate good habits would constitute freedom of the will, while the subsequent habits (should we be so fortunate as to succeed in our endeavors,) would not constitute will, but, well, habit. And for both of us, delaying gratification is in no sense a defiance of needs or desires imposed upon us by deterministic processes, even though it is entirely volitional. As for alarm at the notion that a scientific understanding of the mind will leave old ideals of morality shamed, I’d say that’s because it’s true. On the one hand, miscreants who would simply be condemned as bad stand relieved of full responsibility. The insistence upon treating them as sinners would not just seem, but be, barbarous. And those of us fortunate to have met social expectations (publicly, anyhow) could not honestly congratulate ourselves upon our probity. All this would change society and undermine religion of course. The old joke is that hell was created so that heaven would have some entertainment. How could we be religious when delight in God’s justice is philistine backwardness? • BobC I made it all the way through the comments! Clearly, an act of free will. Does free will exist if there is no brain to ponder it? Free will would seem to require life, minds, intelligence, and probably more than a single instance (to encourage interaction, socialization, the creation of civilization, culture, philosophical thought, science, and science blogs with comments). Epistemology and existentialism, anyone? Let’s assume free will does exist. Did it always exist? If not, then when, why and how did it come into existence? What manner of things possess free will? Does a chimp have free will? A snake? A fruit fly? A worm? A nematode? An amoeba? Does free will require a complex nervous system? Or self-awareness? Consider entropy and Time’s Arrow. Our own limited existence, including our free will, is merely an eddy of local, temporary order in the rush toward the heat death of the universe. It is highly localized, extremely constrained. What is free will within the context of the evolution of the universe? Is it nothing more than an temporary emergent property, possessed by only an insignificant number of small clumps of matter? Is free will nothing other than a rounding error in the statistics of the universe? My brain hurts. Can we stop now? • http://juanrga.com Juan Ramón González Álvarez It is truly fascinating how early non-scientific ideas of Laplace (who apparently never fully understood classical dynamics) are being re-branded for forcing a fabulous fitting into many-worlds and similar post-modern metaphysical stuff. First, many-worlds is not another interpretation of QM, as one reads sometimes, but a well-known misunderstanding of QM that cannot reproduce what we observe at our labs Against Many-Worlds Interpretations 1990: Int. J. Mod. Phys. A 5, 1745–1762 by Kent, Adrian. see also Note that Kent article is titled Interpretations, in plural, because there is not one MWI but a collection of mutually contradictory MWIs. 
The MWI by Deutsch (who you cite) is not the same than MWI by Everett, which is not the same than MWI by Hartle… Second, science is an enterprise with no room for the kind of supernatural observers G introduced in many-worlds for justifying the kind of metaphysical process associated to deterministic knowledge Therefore it would be a good idea to keep in mind the limits of the scope of science when discussing about science. and third, I am not surprised that when you write about chaos you only cite deterministic chaos (where uncertainty of final states is due to our a small uncertainty in our knowledge of the initial state of a deterministic system), whereas you omit to cite the case of nondeterministic chaos, where the uncertainty about the system remains although you know the initial state with infinite precision. This omission of fundamental results is still more glaring when a famous Nobel laureate wrote several popular books (including bestsellers) about the current state of the science of chaos. • Justin Loe I simply don’t believe free will is a tractable problem, scientifically (at this time). From an everyday standpoint we all act as if free will is true, whether it is or not. In the same fashion, our everyday actions and ambitions are based on the sense that they are meaningful. Just as we cannot know scientifically (at this time) whether our lives are meaningful, in my opinion, we cannot know whether free will exists or not based on current science. Arguably, we’re no closer to resolving the free will debate than we were 100 years ago or in the time of Newton. Most of us act, then, on the assumption that we have free will. Whether we can answer that definitively, at some future time, remains to be seen. • http://protagoras.typepad.com Aaron Boyden I’m quite astonished that nobody has clearly made what seems like the most relevant point here; if determinism rules out freedom, indeterminism almost certainly does as well. At least, randomness is no help at all; how can a roll of the dice constitute an exercise of agency, a person choosing for themselves what to do? But the interpretations of physics are only arguing about whether there are dice, so they aren’t talking about anything that’s relevant to the real questions of freedom. • http://theoperspectives.blogspot.com/ James Goetz Regardless of free will or no free will, if determinism is true, then all scientific theories based on empirical observation of cause and stochastic effect are illusionary. Determinism would ultimately invalidate most science, and it would be completely ridiculous then to appeal to scientific discoveries to support determinism. Nobody can disprove determinism, but accepting it is an implicit rejection of most empirical observation. James Goetz • Mitchell Porter James, you seem to be assuming that cause and effect in the intellectual sphere is necessarily our enemy – that it can only have the role of forcing our thoughts down a path which has no a-priori relationship to the truth. But reasoning is itself a causal process; being caused is part of why it works. Natural selection – or even just the simpler truth that survival is not guaranteed – dictates that the intellectual processes of an organism must have some capacity to represent the world correctly, or else it will swiftly die. Meanwhile, the modern theory of computation (due to people like Turing) tells us that a relatively simple set of symbol manipulations is “computationally universal”, capable of doing anything that a modern computer can do. 
So above a very elementary threshold of computational ability, cognitive dispositions selected merely for compatibility with survival will also give rise to open-ended powers of rationality, bounded only by restrictions on memory, sensory bandwidth, etc. In other words, the argument is: 1) The need to survive dictates that cognitive processes have some fidelity to reality. 2) Turing universality tells us that it’s a short step from “cognition with some fidelity to reality” to “cognition with an open-ended capacity to analyse data and draw correct conclusions”. It is absolutely true that we may be caused to make mistakes. I mean, I believe in cause and effect, and I believe that people make many mistakes, therefore I believe those mistakes have causes! But I don’t believe that determinism implies the uselessness of science or of thought. Cause and effect can be our epistemic ally too. • http://juanrga.com Juan Ramón González Álvarez I understand James’s point: if determinism ruled the universe, then giving people Nobel Prizes would be like giving gifts to a rock falling from a terrace roof. Both the Nobel laureate and the rock would be merely following rules (deterministic laws) established even before they existed, without any fundamental difference. In a purely deterministic world, fraudulent scientists would deserve the same respect as Nobel Prize winners, because none of them would have the slightest possibility of choosing their own actions (good or bad). In a world where a nondeterministic evolution is possible, we would be giving Prizes to people, because the creation of a scientific theory is not a deterministic process, but the outcome of a wise mixture of human intelligence, perseverance, and personal choices; whereas no prize would be given to a falling rock, because the rock is merely following a law in a passive way. We would recriminate fraudulent scientists, because they had the option to choose their actions and decided to commit the fraud. • http://www.cthisspace.com Claire C Smith No chaos, only random chance? • Doc-2 Gee, when free will was passed out, no one asked me if I wanted it or not!..at least in this MWI… • martenvandijk There is no space for determinism in the universe. • Pingback: Ontological Determinism, Epistemological Indeterminism, Laplace’s Demon « Ramblings • Chinahand I realize this thread is now very old and so this question will probably never get answered, but when Dr Carroll says: Does that mean you will be able to predict if any given Turing Machine will halt or not? Does this lead to a contradiction? • Gene Venable Well anyway, I think we don’t have more free will than a flipping coin does, but we certainly don’t know the ultimate consequences of any action we take, so we can’t very well pat ourselves on the backs for taking actions that have ‘good’ results, can we? • Bangar when I was passed out, Free Will didn’t ask me if I “wanted it…” :(
“Can Science Fill the Spiritual Void? (Science Doesn’t Want to Take God Away From You)” Below is a neat little opinion piece on the role of science in spirituality that was originally posted on NPR. I have always firmly believed that science does not take anything away from spirituality, that they are like two eyes looking out at the world. Like when you are lying in bed on your side, and you close one of your eyes… the shift in perspective is quite dramatic at that angle. Walking around believing either will be able to answer all questions is like closing one eye (my analogy stops there). Science is the pursuit of truth, and so is spirituality. They are just different approaches, and should be harmonized. As Carl Sagan said, “Science is not only compatible with Spirituality, it is a profound source of Spirituality.” (Reading the comments, I think it’s all too clear that spirituality is often confused with religion, so let’s leave that notion aside for now. All spirituality is not religious.) What are your thoughts and ideas? From: http://www.mi2g.com/cgi/mi2g/frameset.php?pageid=http%3A//www.mi2g.com/cgi/mi2g/press/140207.php Photo credit: ACTA Science Doesn’t Want To Take God Away From You by Marcelo Gleiser “I was once invited to give a live interview on a radio station in Brasília, the capital of Brazil. The interview took place at rush hour in the city’s very busy bus terminal, where poor workers come in from rural areas to perform all sorts of jobs in town, from cleaning the streets to working in factories and private homes. The interviewer asked me questions about the scientific take on the end of the world, inspired by a book I had just published. There are many ways in which science can address this question. We can see that the forces of nature are beyond our control, even if we pride ourselves on “taming” the world around us. But the focus of my book was on cataclysmic celestial events and how they have inspired both religious narratives and scientific research, past and present. In particular, note the many instances in which stars and fire and brimstone fall from the sky in the Bible, both in the Old Testament (e.g., the Book of Daniel, Sodom and Gomorrah) and the New (e.g., the Apocalypse of John), or how the Celts believed that the skies would fall on their heads to mark the end of a time cycle. It was then that the hand went up. A small man with torn clothes and grease stains on his face asked: “So the doctor wants to take even God away from us?” I froze. The despair in that man’s voice was apparent. He felt betrayed. His faith was the only thing he held on to, the only thing that gave him strength to come back to that bus station every day to work for a humiliatingly low minimum wage. We must fill that education with the wonder of discovery. We have to take the same passion people direct to their faith and use it to fuel curiosity about the natural world. We have to teach that science has a spiritual dimension; not in the sense of supernaturalism, but in the sense of how it connects us with something bigger than we are. I also realized how completely futile it was to stand up there and proudly proclaim the value and wonder of science to someone whose faith is the main drive behind all that he or she does. They would naturally ask: “Why should I believe what you are saying about the universe being 13.8 billion years old more than I believe that Jesus is God’s son? How do I believe your truth?” The man smiled. He didn’t say anything.
But I am sure that he saw in the scientific drive for understanding the same passion that drove him toward his faith. (Post originally found on NPR’s blog: http://n.pr/1apBkG0) A Petition for the Field Museum of Natural History in Chicago The Field Museum of Natural History (FMNH) in Chicago is a world-class institute that I have had the personal pleasure of enjoying numerous times over the past few years (I am lucky enough to have a partner doing his Ph.D. research in collaboration with the museum). They are facing cuts to their collections and scientific research staff, which has already been pared down greatly. This will undoubtedly affect the quality of outreach, education, and research. Additionally, it will limit the vast potential of the researchers to continue to analyze and study the fantastic collections housed in the museum. For those of you not familiar with museums, close to 90% of the actual objects in the museum are NOT displayed. That means behind the beautifully designed, educational exhibits that the public sees when they go in are millions of fossils, specimens, and artifacts that scientists and research staff study. This knowledge (both realized and potential) is of incalculable wealth, as scientists toil daily behind the scenes – analyzing the bits and pieces of nature that make up the very world we live in. Additionally, visiting scientists come from all over the world to study the collections, especially at a world-class institute like the FMNH. Please take a moment to look at this petition, and let the current president know whether or not you value the building bricks of science and our natural world. And if you are ever in Chicago, you must stop in – you can see the world’s most complete T-Rex skeleton, watch scientists analyze DNA in their labs in front of your very eyes, wander in the hall of gems, lose yourself in an Egyptian tomb, and so much more!* And don’t forget to subscribe to my blog (that little box on the right that says ‘Satisfy your Inbox’) for the latest news in science and our natural world. *My apologies to subscribers that were sent multiple links – the WordPress bots ate my first post for a snack. In Solidarity with Canadian Nature In defense of Canada’s environmental legislation. A new website dedicated to fighting the closure of the Experimental Lakes Area – an internationally renowned facility. Please check it out, see what they have to say, and help take action. ELA has an astounding 50+ year history as an incredible site for ecosystem and large-scale aquatic research, and should not be closed on a whim. Aerial image of some of the ELA lakes A Follow Up to an Open Letter Hello everyone, So, no, I am not posting my personal information so you can look me up. That’s creepy and unnecessary. I was never a big-wig, I don’t have any science publications, etc. If someone wants to take me to court, I’ll gladly bring along all the contracts I signed for my positions, proving that I worked there. Otherwise, there’s no need to question my credibility because this is an opinion piece on publicly available information. As a follow up, I am not currently employed by Environment Canada, and speak only for myself. I lost any potential for continued work with them over 2 years ago, and am over it. What I am not over is the continued decimation of environmental regulation and protection. And that is the point of this article.
C) Oil is important, so your argument is pointless. Yes, we all use oil. I’m not saying we shouldn’t have the tar sands – I’d like it if we spent way less money on it, and invested way more money in sustainable energy research and projects, but obviously oil is part of our current society. That’s not my point. The point is that the tar sands are one of the major drivers behind environmental deregulation, and the cuts to funding envionmental research in Canada. The government obviously has money, since we can apparently make room in the budget for billions of dollars worth of fighter jets, but saved a little bit decimating DFO and EC. This means that our environment is being regulated only when it serves a political purpose, and that is absolutely detrimental to the long-term sustainability of both our country and our planet as a viable, living being. We cannot accept environmental policies that are entirely driven by political/corporate/capitalist motives. They must be stand-alone initiatives, that serve to protect the environment for its intrinsic value. Because no matter what TV and your car salesman tells you, nature is the only reason you are alive. You will die without clean water, non-toxic food, and a healthy environment. That’s not a radical idea, that’s a fact. If you can’t see that, than this is just lost on you anyway. A Canadian that still cares about the environment Environmentalists are radicals according to the Conservatives: www.cbc.ca/news/politics/story/2012/01/09/pol-joe-oliver-radical-groups.html Scientists are muzzled: www.bbc.co.uk/news/science-environment-16861468 Cuts to EC in 2011: www.greenparty.ca/media-release/2011-08-03/deep-cuts-environment-canada ELA Closure: www.theglobeandmail.com/news/politics/ottawa-notebook/tories-shut-down-groundbreaking-freshwater-research-station/article2436094/?utm_source=facebook.com&utm_medium=Referrer%3A+Social+Network+%2F+Media&utm_content=2436094&utm_campaign=Shared+Web+Article+Links Bill C-38/Environmental Destruction Act: http://thetyee.ca/Opinion/2012/05/10/Bill-C38/ Dear Everyone, While I was working there, scientists were effectively muzzled from speaking to the media without prior confirmation with Harper’s media team (http://tinyurl.com/7bnsqp4) – usually denied, and when allowed, totally controlled. Scientists were threatened with job loss if they said anything in an interview that was not exactly what the media team had told them to say. This happened in 2008. The public didn’t find out for years. Since then, the Conservative government has been laying off thousands and thousands of full-fledged scientific employees that have been performing research for decades at Environment Canada, Department of Fisheries and Oceans, and Parks Canada (e.g. http://tinyurl.com/8xtkaro , http://tinyurl.com/7gvzc7r, http://tinyurl.com/clgn97u ), shutting down entire divisions and radically decimating environmental protection and stewardship in a matter of a couple years. 
The Conservative leadership have admitted to shutting down environmental research groups on climate change because “they didn’t like the results” (http://tinyurl.com/7kpqk7d), are decimating the Species at Risk Act (our national equivalent of the IUCN Red List), are decimating habitat protection for fisheries, are getting rid of one of the most important water research facilities in the world (the Experimental Lakes Area, which has been operational since 1968 and allows for long-term ecosystem studies [http://tinyurl.com/cdygbdk]), are getting rid of almost all scientists that study contaminants in the environment, have backed out of the Kyoto Protocol – and the list goes on and on and on. We are depressed, and frustrated, and mad, and need all the help we can get to protect the value of science and our environment. In the age of globalization, intentionally non-progressive leadership is going to affect everyone. We share our waters, air, and cycles with all of you. Science IS a candle in the dark, and we cannot let greed extinguish that flame. What happens in Canada will happen everywhere. Thank you. A Canadian that cares about science and the environment **Update (May 22, 2012). There has been a huge, overwhelming response to this letter. Over 40,000 people have viewed it, with hundreds of comments. There are a lot of different organizations that want to be part of a larger movement. There are also quite a few scientists who may want to speak out, but still cannot. I encourage anyone who wants to contribute and organize, and may desire to do it more discreetly (i.e. anonymously and/or not as a public comment), to email me at . Please let your colleagues know as well. I will never publish your information unless you want me to, and will be organizing interested parties somehow, so that we can effect greater change – for ourselves, our freedom, and our beautiful planet. **Update (May 25, 2012). An excellent opinion piece by a DFO scientist on the axing of the pollution programs at the Department of Fisheries and Oceans. http://www.environmentalhealthnews.org/ehs/news/2012/opinion-mass-firing-of-canada2019s-ocean-scientists Tyson Tells It As It Is, As Usual Just a Sunday night thought :) Water, Water Everywhere. But Not a Drop to Drink? (Part 1) We need it, we like it, we are it. Water. It comprises roughly 60% of our bodies, and over 70% of the earth’s surface. H2O – two hydrogen atoms, one oxygen atom. Chemical structure of H2O (hydrogen–oxygen bonding) It is one of the most unusual compounds in our known universe – one of the very few whose solid form is lighter than its liquid, it is the universal solvent, it is tasteless, odorless and colorless, etc. It is both the environment life was likely born in, and the compound that all forms of life require to exist. It’s the refreshing liquid after exercise, the stunning architecture behind a snowflake, and the burning steam from the kettle. Every part of Earth and Earth’s history has been directly influenced and affected by this marvelous mystery of nature. Well, where did water come from? No one actually knows. Our earth is 4.6 billion years old, and water appears in our earthly record at roughly 3.8 billion years ago, when the world was a fiery, chemical world with rocks just precipitating. The most common theory is that it formed from the gases released from the volcanoes that covered the world over.
However, as water exists in many other places in the universe, as evidenced by meteors and other celestial bodies (Jupiter’s moon Europa, for example, is covered by a shell of frozen and liquid water up to 160 km thick – more water than on all of Earth!), perhaps it arrived as a stowaway on the comets and asteroids that were constantly blasting the earth. While we can’t say for certain, we do know that the amount of water on earth has been the same throughout our geologic history. The best estimate is roughly 1.4 billion cubic kilometres of water, as projected by the Russian scientist Igor Shiklomanov. It’s not being created or destroyed, but just shifted around in varying compositions in a process known as the ‘hydrologic cycle’. The image below describes the beautiful cycle of water as it evaporates from oceans and plants, condenses in the air as clouds, falls back on earth, and is returned to the sea. There’s a myriad of offshoot processes, but that’s basically it. All rivers run to the sea, but all rivers are the sea, just at a further point in the cycle. And that’s the beauty of it. Our beautiful blue planet has this incredible system that desalinates, stores, and provides water in potable form to all living beings – free of charge. Hydrologic Cycle Water is stored in many places: glaciers, lakes/rivers/streams, aquifers (natural underground storage chambers), permafrost, and the ground water (the water found deep underneath us in the earth, saturating the layers of rock). It’s all around us, abundant and plentiful. So what’s the problem? Why are many scientists projecting a bleak and scary future for water? If it’s the same amount, and always will be – que pasa? The problem can be divided into three major issues: distribution, demand and pollution. I’ll discuss them in a 3-part series over the next week. 1) Distribution. Yes, there is water all over, and it is abundant and plentiful. But it is not distributed evenly across the planet. Some places (think Brazil, China, Russia, Canada) have more than enough water to meet local demands, even when they are outrageous. Other places (think the Sahara, southwestern USA, parts of Chile) are so dry they simply cannot meet local water demands. It’s not a simple case of uneven distribution either; even water-rich countries like China and Canada, which have roughly the same amount of freshwater, are not equivalent – we have to keep in mind that China has about a billion more people than Canada. And in the case of the southwestern USA being heavily populated and demanding water, well that’s just not natural at all. They have basically irrigated a desert, and expect people to live there in the same hydrated comfort they would enjoy in, say, the moist Pacific Northwest. So some places have a lot, and other places have little. The problem comes into play even more so with water bodies that are shared by different nations, or worse – when the headwaters (where the water source is based) are in one country, and the outlet to the sea is in another. Think of the infamous, oft-ignored case of the Colorado River. For 6 million years, this powerful river poured forth from its birthplace in the Rocky Mountains, draining 2,250 kilometres south and west through deserts and canyons, lush wetlands, and into the Gulf of California in Mexico. In the 1920s, U.S. states began diverting water from it for irrigation, dams, and to support the booming populations of cities springing up in dry areas: Los Angeles, Phoenix, San Diego.
In 1944, Mexico and the US came to an agreement about sharing multiple water bodies, including the Colorado, but since then the water quantity and quality (these waters enter Mexico highly saline after being used for irrigation in the US) have steadily declined – to the point that Mexican farmers are no longer able to grow their crops as before. Additionally, there are even many inter-state conflicts in America itself over who gets how much water from the river. All the while, water levels all along the Colorado are sinking steadily, having severe ecological and anthropological health impacts. Even this superficial exploration of a single river demonstrates the complexity of distribution issues in regards to water. Who gets it? Who owns it? A consumption-heavy nation with an incredibly large military and nonchalance for world resource consumption can certainly outweigh a smaller, poorer, less organized nation in water rights – but we all still need potable water to drink, sanitize, and grow food. Even when you have people like the Texas Commissioner Susan Combs who have made public outcries to just flat-out dam the Colorado and prevent any of the water from reaching Mexico over other water disputes, calling the natural flow of a river ‘giving’ Mexico water. Who’s going to monitor these situations for the greater good of the planet and the welfare of all? **anecdote-based rant interjection** The Southwestern USA has to be one of the most wasteful regions of water I have ever been in. At high noon last June, I vividly recall dry, sandy cemeteries being watered by hoses pointed straight in the air, and ‘cooling’ water being sprayed outside of every shop – both evaporating almost instantly (not even mentioning Vegas and its pools and lawns). Yet they complain about droughts, and completely absorb all water that would naturally head for Mexico, depriving an entire region of their agriculture while wasting such a precious resource constantly and completely for… nothing? Now for the elephant in the room – global warming. Without entering the debate, IF global warming occurs at projected rates, then so will higher rates of evaporation. There will also be higher rates of melting in the snowcaps and glaciers, and we are already seeing this happen at both poles. We are also approaching a dangerous threshold where the icecaps may not be able to regenerate throughout the winter, thus speeding up the collapse of the polar ice caps. Change in ice sheet mass, as generated by data from the GRACE satellite, found on http://www.skepticalscience.com What does this mean in the context of water usage? More than 2/3 of the available freshwater on earth is frozen. As this ice melts, it goes directly into the ocean, making less and less of it available for usage. It also encourages higher temperatures, and thus higher rates of evaporation – again, making less of it available for usage. Not just for us, but for all other forms of life too. There’s a final compounded problem that is the direct result of urbanization. Everywhere humans go, we love ease of transport. This first translated into dirt roads, then gravel, now paved. Paved sidewalks, roads, parking lots, houses. Less grass, plants, trees, and other absorbing features of the earth.
When the rain falls, instead of sating the earth, saturating the soil, and percolating downwards to replenish the ground water and come out potable elsewhere (a very important sustainable source of water), it now runs off the pavement and often directly into other water sources or the ocean, where it is lost to a part of the cycle that takes much longer to regenerate the same amount. This problem leads us into the next part on demand. With a growing world population and growing demands on water, all the ecological issues that previous populations have led us to are becoming magnified and compounded. So, feel free to leave comments, opinions, and discuss this topic, while I work steadily on the next one. And don’t forget to subscribe! *Much of this information was taken from the excellent text “Water – The Fate of Our Most Precious Resource” by Marq de Villiers, as well as a host of other awesome internet resources. The Bee’s Needs Bees have been buzzing around our planet for almost 100 million years. That’s 99,800,000 years before we Homo sapiens showed up on the planetary bio-map. Related to wasps (yellow jackets, hornets, etc.), there are over 20,000 known species worldwide, and they are entirely herbivorous, unlike their carnivorous cousins. I’m sure we have all been lucky enough to spot a bee on a beautiful summer day, humming happily at any given flower (that is of course, if a fear of bees hasn’t been implanted in us, something I have observed and find rather tragic – fear of nature by her own children). I myself have spent quite some time trailing beautiful bumblebees across the dandelion-rimmed trails in the temperate forests of Washington, stalking the bright orange Patagonian bee through the mountain ranges of the Andes – even tending to them in an organic acacia apiary (bee-farm) under the lip of the Alburni mountains in Southern Italy. Their constant dedication to their tasks at hand, the fascinating mathematical structures of the honeycomb, and of course, the dripping sweetness of fresh honey, have captivated my interest and admiration for the humble bee. Their presence across the world, and their importance as pollinators and providers of honey, has likewise attracted attention and praise throughout recorded history. Ancient Egyptian relief of a bee hieroglyphic From dedicated Egyptian hieroglyphics to countless poems (think Aesop’s Fables, the Upanishads, Virgil, Shakespeare, E.O. Wilson, Emerson, Goethe, Thoreau, G.B. Shaw, Emily Dickinson, and many more), the complex nature and seemingly perfect social balance of bees has fascinated and inspired us for thousands of years. The bee was revered and played a central role in the mythologies and worship of the ancient Mayans, Greeks, Indians, Minoans, Celts, and many more. Honey was the mystical ‘nectar of the gods’, and the bee was seen as a goddess and creator of divine mathematical proportions. And not without just cause. The bee is a majestic being. In simply preparing for this article, I ended up reading over 3 books and 20 articles – not because I needed to present that much information, but because their social structure and biological development is so utterly fascinating I was drawn into it completely. While we think of them as social creatures, in reality, less than 4% of all bees are actually social. The rest are solo fliers, digging nests in undisturbed areas of ground and trees.
This small percentage, however, contains some of the more common ones: sweat bees, carpenter bees, and the lovable bumblebee. It is this smaller percentage of the world’s bees that has drawn the most scientific research and interest, as their social structure, so unlike ours, continues to inspire and draw the curiosity of all. We’ve all heard of the “Queen Bee”. Almost 2000 years ago, Pliny the Elder was so amazed at the attention the worker and drone bees would lavish on this bee that he thought it must have been a male (A woman? In charge? No….), and referred to it as a King Bee. We know now that it is a female, and she has a majestic hold over the rest of the colony, though she is not the director; a grander collective good is what governs a bee colony, and it remains not entirely understood. Prior to setting up her colony, she mates with up to 20 different males, and stores a lifetime supply of sperm in a special sac called a spermatheca. The male bees, called drones, live lazy lives prior to this: begging for food from the female workers, living in dirty conditions, and generally performing no duties until mating time. At mating time, the new queen meets a gathering of about 20 males, at their sexual prime at about 12 days of age. They have cleaned themselves in preparation for coitus, and present themselves to the queen. Many are unsuccessful in their attempts to mate at speeds up to 20 miles/hr in the air (!) and return to their cells, where they are eventually kicked out of the hive or murdered by the female worker bees (they have been known to be killed with the stingers of many of the females, as the soft flesh of the bee doesn’t hook and remove the stinger like it does in animal flesh). The successful ones have a short-lived victory, as the queen bee flies off quickly after mating, ripping off the penis and viscera in her flight, and leaving the male tumbling to the ground in death. The hive is obviously a very efficient factory – once your task is over, so is your welcome! The old monarch, and a good subset of the bees from the colony (roughly 10,000), start house-hunting when the hive is over-crowded. The manner in which this happens was discovered by Martin Lindauer, a renowned scientist, who noticed that the bees had begun returning covered not in pollen, but in soot and dirt. When it’s finally too crowded to live comfortably, the scouting bees (roughly 5% of the hive, or a few hundred) will begin searching around for a new home – knotholes of trees, cracked windowsills, etc. Using an intricate step-measurement system, the bee will explore the space for up to half an hour, to determine if the house is suitable for the hive. She then returns to report her findings. Communicating with an incredibly detailed dance and vibration of her body, the bee reports the size and details of the potential location. If her dance is enthusiastic enough (firm selling pitch!), other scout bees will head out and investigate the location. This of course is incredible, as the bee manages to communicate the location and distance as well with this amazing dance – I’ll get more into this fascinating aspect in a bit. The fact-checkers return, and if they in fact agree that it is a suitable location, begin performing the same dance. Ultimately, dancing scouts that aren’t attracting a lot of fact-checkers to their team drop off. The dance with the most dancers wins, and the bees soar off as one to start a daughter hive in their new home.
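For readers who like to tinker, here is a minimal, made-up sketch of that scout-and-recruit dynamic in Python. Every name and number in it (the candidate sites and their qualities, the scout count, the quorum threshold, the recruitment and give-up probabilities) is invented purely for illustration; it is a toy model of the idea, not a reproduction of the actual biology or of any published model.

```python
import random

# Toy sketch of honeybee nest-site selection by quorum.
# All parameters below are invented for illustration only.
SITE_QUALITY = {"knothole": 0.9, "windowsill": 0.6, "wall cavity": 0.3}
NUM_SCOUTS = 100
QUORUM = 50       # scouts backing one site before the swarm commits
ROUNDS = 200

def simulate(seed=1):
    random.seed(seed)
    dancing_for = {}  # scout id -> site she is currently advertising
    for step in range(ROUNDS):
        # 1) Dancers for poor sites give up sooner (their "enthusiasm" fades).
        for scout, site in list(dancing_for.items()):
            if random.random() > SITE_QUALITY[site]:
                del dancing_for[scout]
        # 2) The dance floor: each dancer advertises her site with a
        #    strength proportional to how good she judged it to be.
        ads = []
        for site in dancing_for.values():
            ads.extend([site] * int(10 * SITE_QUALITY[site]))
        # 3) Uncommitted scouts are recruited by an ad, or rarely
        #    discover a site on their own.
        for scout in range(NUM_SCOUTS):
            if scout not in dancing_for:
                if ads and random.random() < 0.3:
                    dancing_for[scout] = random.choice(ads)
                elif random.random() < 0.02:
                    dancing_for[scout] = random.choice(list(SITE_QUALITY))
        # 4) Has any site reached the quorum?
        for site in SITE_QUALITY:
            backers = sum(1 for s in dancing_for.values() if s == site)
            if backers >= QUORUM:
                return step, site, backers
    return ROUNDS, None, 0

step, site, backers = simulate()
print(f"Quorum reached at step {step}: {site!r} with {backers} scouts dancing")
```

Run it a few times with different seeds: the positive feedback usually settles on the knothole, the best of the three invented options, simply because its dancers keep advertising the longest, so their support snowballs.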
This house-hunting process is in itself fascinating for many reasons, and after investigation and analysis, it prompted Thomas Seeley, a biologist at Cornell, and his colleagues to create a set of rules from this social communication and interaction that could greatly benefit humans in their collective reality: 1. The decision-making process is broadly diffused among ALL the scouts. Rather than having a small group of bees that make decisions for all the bees, all scouts have equal opportunity to discover a new home and convince the hive of its worthiness, thus being open to the broadest possible input of knowledge and ideas. 2. Each individual has her own opinion and doesn’t have to conform to the pushiest bee. All bees that return and report their findings have their opinions double-checked by a non-biased bee. The non-biased bee does not have to agree if she finds the home not suitable, and in fact this is how homes are selected, as the bee will return and NOT mimic the same dance if she disagrees. 3. “The quorum-sensing method of aggregating the bees’ information allows diversity of opinion to thrive, but only long enough to ensure that a decision error is improbable.” This means that all opinions are considered and given equal weight, until all the bees come to a coalesced decision – not a compromise, but the best possible outcome as considered by all. These 3 social rules mean that all bees can make the decision that will be chosen, all options are given equal weight and carefully considered, and the best possible outcome is chosen by all. If only Harper would take a clue! Once the colony is set up, the worker bees immediately start preparing the famed and beautiful hexagonal honeycomb structure. Worker bees preparing honeycombs Cells are carefully prepared for the queen, with wax layers for her egg deposits. The queen roams the colony and will select and inspect a cell, using her forelegs to judge size. If it meets her requirements, she deposits an egg. These eggs vary in their diploid and fertilized status – the queen makes a decision of whether or not to fertilize the egg with the sperm she carries, and this determines the sex of the bee. If a male is chosen, the cells are noticeably larger, allowing them to grow into fat, reproductively-purposed larvae. The food of the hive is provided by the foragers, and this is where we pull our lens of observation back and start to view a larger picture. The foragers leave the colony and begin searching for sources of nectar – flowers. Upon successfully collecting the pollen, they return with a full load to the entrance of the hive, where worker bees collect their harvest. In this exchange is another fascinating aspect of bee communication – monitoring and control of food intake. If the colony is in need of food when the foragers arrive at the door, they are met eagerly and their harvest is immediately unloaded. If the colony isn’t in need of much food at the moment, the foragers often have to wait at the door for up to a minute, buzzing around for a worker bee to take their load. If this begins to happen, their nervous system notes the anxiety and the bee begins agitatedly bumping into other returning bees, letting them know the harvest isn’t greatly needed. When their harvest is taken immediately, a nervous system ‘reset’ takes place and they know it’s alright to go back and collect. There is also an intricate dance that takes place at the door if a great source has been found. The bee will come back and begin excitedly wiggling.
Through many years of careful observation, the Austrian biologist Karl von Frisch (who won a Nobel Prize for his work with bees) discovered that the foragers actually denote direction exactly with their dance, and the frequency of their wiggles indicates the distance of the source! (Check out this incredible YouTube video). Other bees read the message and excitedly fly off to harvest from this more lucrative source. Bees have evolved a linguistic communication system that is incredibly precise, adaptive and flexible, based entirely on the motion of dance. This intricacy and evolution just blows my mind. Macro bee pollen image – from NASA’s Earth Observatory Over a hundred million years, flowers and bees have evolved a brilliant symbiosis. The bee forages at each flower, where pollen clings to the numerous hairs all over her body. When the bee moves on to the next flower, some of the pollen from the first flower is deposited, and so the bee acts as the go-between in the sexual mating of plants. This seemingly simple, yet incredibly glorious relationship between pollinator and pollinated is filled by several other animals, and has been a contributing factor in all the flowers you see (like the flowers you just received for Valentine’s Day!). While seemingly simple and small, the role of a pollinator is absolutely essential in a healthy ecosystem. Our global plant life depends on this act of feeding and sharing, and without protecting this fragility, the biological health of the planet is greatly endangered. With increasing urbanization and mono-cropping of agricultural areas, the disappearance of our forests, meadows, grasslands, and biological life makes the bees’ beautiful existence a fragile one. In addition to the loss of habitat, globalization is allowing bee pests and diseases to spread rapidly around the world, wiping out populations internationally. The United Nations Environment Programme (UNEP) released a report calling the decline of populations a global phenomenon – see here. The report tells us that of the over 100 crop species providing over 90% of the world’s food, 71 are bee-pollinated. Where will the food come from if bees die? I inwardly cringe at the idea of an all factory-produced diet, and hope anyone with half a sense of ‘you are what you eat’ does as well. “Well, that’s alright”, you think. “I eat mostly meat”. But what are those animals going to eat? Synthetic factory grains? The plight of the honeybee is a dangerous reality that we would do well not to ignore. In addition to their incredible structure that we could learn so much from, they literally provide us with most of our foods. Not to mention our gorgeous flowers. So what can we do to save the bees? First and foremost is habitat conservation. This is important for so many other reasons besides just the bees. Don’t buy the oversized house in the suburbs, decrease your land imprint, and increase the natural, native plant life found on your property. Plant wildflowers around the margin of your property, giving bees more food and brightening up your property as well. Next, alternative agriculture. Again, important for many other reasons. Buy organic and local, and/or grow your own food. Lower purchases of pesticide-heavy crops mean less growth (supply and demand), effectively lowering the input of dangerous pesticides and toxic chemicals into our environment.
Corporations often spray at pollinating times of the year, killing off these precious and valuable bees as they do their work for a healthy planet. Every purchase of a trusted organic product saves a bee! (No math behind that one, just a concept :) ). Finally, buy honey from a good, trusted local farmer. Local bee farms (apiaries) are havens for many bees – places where the farmer does their best to ensure their health and reproduction in large numbers. Supporting these farmers gives them motivation to keep on taking care of their bees. Additionally, the health benefits of local honey are vast – especially if you suffer from seasonal allergies. Local honey often contains low doses of that which you are allergic to, contributing towards your general immunity. Not to mention – it is absolutely delicious. Heaven can be found in a teaspoon of fresh honey. Believe me. So don’t be afraid of the bees. Show ‘em some love – they’ve evolved into incredible managers of our plants and food. If conservation efforts fail, the decline of the bees immediately impacts over 20,000 plant species. And each of those plant species will go on to affect huge networks of our interlinked living web – turning the world into a devastated place. They are an important, non-negotiable linkage in the ecosystem of our planet. As UNEP eloquently states, “Pollination is not just a free service but one that requires investment and stewardship to protect and sustain it.” More info on bees, their goodness, their decline: Bee Products and Love NASA Special on Bees Ontario Bee Info for Kids Death of the Bees – GMO’s Go to the bee, thou poet: consider her ways and be wise -George Bernard Shaw Notable Names: Richard Feynman What defines genius? Real genius, not just the smart kid in the back of the class with all the answers. People like Galileo, Da Vinci, Einstein. The brilliant minds that take standard concepts, turn them upside down, and show us exactly why it never quite made sense to us before. They take two-dimensional images, and show us three-dimensional truths. Feynman, explaining something cool. Or in the case of Richard Feynman, they take the most basic bits of the universe, and give us quantum electrodynamics. Feynman was a brilliant mathematician and physicist, and arguably one of the greatest science lecturers of all time. Let’s delve for a bit, via Feynman, into the wacky, weird world of energy: the stuff everything you have ever known or interacted with (including yourself, and this computer screen!) is composed of. Now, I’m no physicist, but listening to Feynman’s lectures and interviews motivates me to learn more about the big majestic mystery of our physical universe. Born in 1918 in New York, Feynman was an intelligent student who had mastered differential and integral calculus by the time he was 15. He was turned away from Columbia University before being accepted at the famed MIT, just outside Boston. After completing his bachelor’s, he then went on to Princeton, excelling constantly in physics, mathematics, and computational sciences. Indeed, his reputation for unprecedented thinking, clarifying lectures, and charming genius was so great that Albert Einstein himself attended his first graduate lecture. He was on his way to revolutionizing the field of physics, generating theories that are still being studied as our technology advances enough to test them in laboratories. Feynman’s reputation even led him to the Manhattan Project, at the tender age of 24.
If you’re not into atomic or war history, the Manhattan Project was a secret project developed by the American government that led to the creation of the first atomic bomb. The Manhattan Project operated from 1942 to 1946 in Los Alamos, New Mexico, and Feynman was a major contributor in the theoretical and computational division. Feynman has said that his justification for assisting on the project – defending the US against Germany and Japan (who were supposed to be racing to develop the bomb first) – should have dissipated when the threat did. He continued on with the work, stating that he was driven by solving the problem, not thinking deeply about the moral complications. He was also present at the Trinity Bomb test – the first atomic explosion, and the official inception of the Atomic Age. Shortly after, and despite the pleading of Robert Oppenheimer (head of the Los Alamos lab) to stay and continue contributing, Feynman took a post at Cornell briefly. He claimed he was uninspired by the atmosphere and close to burning out intellectually there, so he took a post at Cal Tech, where he ended up doing some of his best research. This includes: • a model of weak decay: The ‘weak’ interaction is one of the four fundamental forces of the universe, along with the strong nuclear force, electromagnetism, and gravity. The interactions of these forces control all the little bits of our universe that cannot be broken down any further; the rules that regulate our most basic building blocks (that we know of). According to the Standard Model, these are known as quarks, leptons, gauge bosons, and the Higgs boson (You may have heard about the Higgs boson, as it has been appearing quite frequently in the news. It is the only undiscovered particle of these, and scientists are quite close to finding it, thanks to the Large Hadron Collider’s incredible technology). While gravity is the most commonly known force to us regular folks, the weak force controls quarks and leptons – known collectively as ‘fermions’ because they are the particles of matter, not light. The weak force controls both radioactive decay and hydrogen fusion – the force allowing the sun to shine, and all life to live. You may not think it’s that important, but without the weak force, there is no you, because there would be no universe, no sun, no energy to get that tan in the summer! A classic example of weak decay is when a neutron breaks down into a proton, an electron, and an anti-neutrino. Feynman ultimately developed a new and succinctly described model for this decay, incorporating ideas that had been lacking before. • physics of the superfluidity of supercooled liquid helium: Helium is the second most abundant element in the observable universe, and its behaviour is amongst the strangest of all. It also has among the lowest boiling and melting points of any element: -269°C and -272°C respectively. In liquid form, helium had been observed to behave rather bizarrely when it was cooled slightly below the boiling point (check out this excellent video for a visual representation). Feynman didn’t solve the whole problem, but applied the Schrödinger equation successfully to display the quantum mechanical behaviour on a macroscopic scale (I’ll try to briefly explain quantum mechanics in a moment). • quantum electrodynamics: This is the work Feynman is best known for, and for which he won a joint Nobel Prize in 1965. The quantum world itself is a section of physics that deals in the tiniest part of matter we know about – atoms.
It’s a bizarre world that breaks down all the other rules that govern our everyday life. The five main ideas behind quantum theory are: A) Energy is not continuous, but moves in small, discrete bundles. B) Elementary particles move like matter AND waves (excellent video explaining this crazy phenomenon here). C) This movement is intrinsically random. D) It is impossible to know the location and momentum of a particle at the same time – the more precisely one is known, the less precise the other measurement is. E) The quantum world is absolutely nothing like the one we live in. Feynman was one of the founding fathers of quantum electrodynamic theory. While complicated, it basically describes (through mathematics) all interactions of light with matter, and of charged particles (a subatomic particle or ion with an electric charge) with one another. It was important because it was the first theory to cohesively integrate Einstein’s special relativity theory into each equation, as well as satisfying the Schrödinger equation (a problem that Paul Dirac and Norman Wiener, two scientists that had developed the theory previously, were unable to solve). The three main concepts of Feynman’s QED theory are that: A) a photon goes from a location and time to another location and time, B) an electron goes from a location and time to another location and time, and C) an electron emits or absorbs a photon at a certain place and time. OK – what does that mean? To help explain these, Feynman came up with the self-named Feynman diagrams. Feynman diagram elements; a simple Feynman diagram. The first image shows us the symbols for parts A, B, and C of his theory. The second shows us an example of a Feynman diagram – an ‘electron-positron annihilation’. Not to be mistaken for a Star Trek battle, this is when a negative electron (e−) and its opposite, a positive electron (positron [e+]), collide. This results in the annihilation of both, and photons are sent shooting out from the collision. Feynman’s theories and his well-known diagrams make ideas like this clearer, and more accessible visually to a large portion of the mathematically-disinclined population. Keep in mind, these diagrams are not set paths – just simplified suggestions representing potential quantum relationships symbolically. It’s important to note that QED theory doesn’t tell you what will happen, but predicts the probability of what will happen. In quantum mechanics, this means that you add up the sum of all possibilities, to any given endpoint, and predict the probability of the end result based on this total sum. We can loosely think of this as taking a random walk. You’ve had a bad day at work and want to clear your mind. Without knowing your final destination, you decide to cross the road to the other side, which happens to be infinite. Your brain is (hopefully!) measuring where potholes in the road you may have to avoid are, and the probability of whether or not you will get hit by a car. Your brain then tells you when to finally move, and on what path. Your exact footsteps are not predictable, nor is where or when you will step onto the sidewalk, but your brain has calculated the possibilities. And if you were a quantum particle participating in the theory, you would end up with a path and endpoint that were the sum of all possibilities. This computational method was referred to by Feynman as the path integral formulation, and stands in contrast to previous theories that predicted a single, unique trajectory.
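For the mathematically curious, that “sum of all possibilities” can be written down compactly. What follows is only a rough, schematic sketch of the standard textbook expression, with all the technical care omitted: the amplitude K for a particle to get from point A to point B is built by adding a phase contribution from every conceivable path between them, where S is the classical action of that path.

```latex
% Schematic sum-over-paths (path integral) amplitude: every path x(t)
% from A to B contributes a phase set by its classical action S[x(t)].
K(B, A) \;=\; \sum_{\text{all paths } x(t)} e^{\, i S[x(t)]/\hbar}
        \;=\; \int \mathcal{D}x(t)\; e^{\, i S[x(t)]/\hbar},
\qquad
S[x(t)] \;=\; \int_{t_A}^{t_B} L\!\left(x, \dot{x}\right)\, dt .
```

The probability of arriving at B is then the squared magnitude of K. Paths close to the classical trajectory add up in phase while wildly different paths largely cancel, which is why everyday objects appear to follow one definite route even though, underneath, every possibility contributes.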
This formulation helps us to understand (or at least broaden our understanding of) the movement of the very tiny little building blocks of our universe.

Phew. If I have confused you, I’m sorry. I’m a bit confused myself at this point! Particles here, mathematics all over the chalkboard, what does that mean when I need to drag myself out of bed and go to work to feed the kids? The quantum world is difficult to grasp, and I would suspect that it’s still somewhat difficult even for the most brilliant of minds like Feynman. But that doesn’t mean its existence is irrelevant. It in fact informs everything about our lives, our composition, our beautiful planet tucked away here in this tiny corner of the universe. If our goal is to know ourselves, understanding the smallest bits is surely important, difficult as it may be. I’m sure this was one of Feynman’s motivating factors.

While working on all of these ideas and more, Feynman also dedicated a large portion of his career to teaching. While still at Caltech, he was asked to get the undergraduates really involved in and appreciative of physics. After several years of work, this resulted in the extremely accessible, beautiful, and inspiring The Feynman Lectures on Physics, which I highly recommend if you have the remotest interest in physics. Perhaps it will clear up any confusion I may have left you floundering in today!

Now, I barely understand a percent of the incredible problems that Feynman naturally intuited, thought about deeply, and solved. However, my appreciation of him and his success as a physicist is due not only to his inherent genius, but also to his understanding of human nature. He was always open to new ideas and subjects, and constantly engaged his whole brain with love, academics, and artists – even creating some art himself under the pseudonym of ‘Ofey’. Watching his interviews and documentaries is always a pleasure, as he somehow manages to circumvent the common way of thinking, and present what would otherwise be very difficult concepts as clear and simple. Feynman always managed to grasp the type of mind required to appreciate the universe – curious and humorous. As one of his colleagues best described it, when you hear Feynman speak, you understand clearly the science behind physics. Once you leave the room, however, you find yourself struggling to follow the same pathway that Feynman drew in your brain. I’d suspect it’s because few of us have ever taken that path before, and we were so amazed by the beautiful things Feynman was showing us that we forgot to remember the path. If we were to work hard enough though, we may be able to figure out the average probability to get back (a Feynman pun!).

Richard Feynman continued to revolutionize and bring physics to light (another pun!) for the rest of us. He served on the commission investigating the Challenger disaster of ’86, and exposed the huge discrepancies between NASA management’s assumptions and a properly informed understanding of the physics. In his rather stark review, he says quite truthfully, “For a successful technology, reality must take precedence over public relations, for Nature cannot be fooled.” Feynman died from several rare forms of cancer at the age of 69, in Los Angeles. His last words, in true humorous form: “I’d hate to die twice. It’s so boring.”

In memory of true genius, Richard P. Feynman 1918-1988.

What is necessary “for the very existence of science” and so forth, and what the characteristics of nature are, are not to be determined by pompous preconditions.
They are determined always by the material with which we work, by nature herself. We look, and we see what we find, and we cannot say ahead of time – successfully – what it is going to look like. The most reasonable possibilities turn out often not to be the situation. What is necessary for the very existence of science is just the ability to experiment, the honesty in reporting results – the results must be reported without somebody saying what they’d like the results to have been rather than what they are – and finally – an important thing – the intelligence to interpret the results; but an important point about this intelligence is that it should not be sure ahead of time what must be.
Born-Oppenheimer approximation

In quantum chemistry, the computation of the energy and wavefunction of an average-size molecule is a formidable task that is alleviated by the Born-Oppenheimer (BO) approximation. For instance the benzene molecule consists of 12 nuclei and 42 electrons. The time independent Schrödinger equation, which must be solved to obtain the energy and molecular wavefunction of this molecule, is a partial differential eigenvalue equation in 162 variables—the spatial coordinates of the electrons and the nuclei. The BO approximation makes it possible to compute the wavefunction in two less formidable, consecutive steps. This approximation was proposed in the early days of quantum mechanics by Born and Oppenheimer (1927) and is still indispensable in quantum chemistry.

In basic terms, it allows the wavefunction of a molecule to be broken into its electronic and nuclear (vibrational, rotational) components:

\Psi_\mathrm{total} = \psi_\mathrm{electronic} \times \psi_\mathrm{nuclear}.

In the first step of the BO approximation the electronic Schrödinger equation is solved, yielding the wavefunction \psi_\mathrm{electronic} depending on electrons only. For benzene this wavefunction depends on 126 electronic coordinates. During this solution the nuclei are fixed in a certain configuration, very often the equilibrium configuration. If the effects of the quantum mechanical nuclear motion are to be studied, for instance because a vibrational spectrum is required, this electronic computation must be repeated for many different nuclear configurations. The set of electronic energies thus computed becomes a function of the nuclear coordinates. In the second step of the BO approximation this function serves as a potential in a Schrödinger equation containing only the nuclei—for benzene an equation in 36 variables.

The success of the BO approximation is due to the high ratio between nuclear and electronic masses. The approximation is an important tool of quantum chemistry; without it only the lightest molecule, H2, could be handled, and all computations of molecular wavefunctions for larger molecules make use of it. Even in the cases where the BO approximation breaks down, it is used as a point of departure for the computations. The electronic energies, constituting the nuclear potential, consist of kinetic energies, interelectronic repulsions and electron-nuclear attractions. In a handwaving manner, the nuclear potential is an averaged electron-nuclear attraction. The BO approximation rests on the fact that the inertia of the electrons is negligible in comparison with that of the nuclei to which they are bound.

Short description

The Born-Oppenheimer (BO) approximation is ubiquitous in quantum chemical calculations of molecular wavefunctions. It consists of two steps. In the first step the nuclear kinetic energy is neglected, that is, the corresponding operator Tn is subtracted from the total molecular Hamiltonian. In the remaining electronic Hamiltonian He the nuclear positions enter as parameters. The electron-nucleus interactions are not removed and the electrons still "feel" the Coulomb potential of the nuclei clamped at certain positions in space. (This first step of the BO approximation is therefore often referred to as the clamped nuclei approximation.) The electronic Schrödinger equation

H_\mathrm{e}(\mathbf{r},\mathbf{R})\; \chi(\mathbf{r},\mathbf{R}) = E_\mathrm{e}\; \chi(\mathbf{r},\mathbf{R})

is solved (out of necessity approximately). The quantity r stands for all electronic coordinates and R for all nuclear coordinates.
Obviously, the electronic energy eigenvalue Ee depends on the chosen positions R of the nuclei. Varying these positions R in small steps and repeatedly solving the electronic Schrödinger equation, one obtains Ee as a function of R. This is the potential energy surface (PES): Ee(R). Because this procedure of recomputing the electronic wave functions as a function of an infinitesimally changing nuclear geometry is reminiscent of the conditions for the adiabatic theorem, this manner of obtaining a PES is often referred to as the adiabatic approximation and the PES itself is called an adiabatic surface.

In the second step of the BO approximation the nuclear kinetic energy Tn (containing partial derivatives with respect to the components of R) is reintroduced and the Schrödinger equation for the nuclear motion

\left[T_\mathrm{n} + E_\mathrm{e}(\mathbf{R})\right] \phi(\mathbf{R}) = E\, \phi(\mathbf{R})

is solved. This second step of the BO approximation involves separation of vibrational, translational, and rotational motions. This can be achieved by application of the Eckart conditions. The eigenvalue E is the total energy of the molecule, including contributions from electrons, nuclear vibrations, and overall rotation and translation of the molecule.

Derivation of the Born-Oppenheimer approximation

It will be discussed how the BO approximation may be derived and under which conditions it is applicable. At the same time we will show how the BO approximation may be improved by including vibronic coupling. To that end the second step of the BO approximation is generalized to a set of coupled eigenvalue equations depending on nuclear coordinates only. Off-diagonal elements in these equations are shown to be nuclear kinetic energy terms. It will be shown that the BO approximation can be trusted whenever the PESs, obtained from the solution of the electronic Schrödinger equation, are well separated:

E_0(\mathbf{R}) \ll E_1(\mathbf{R}) \ll E_2(\mathbf{R}) \ll \cdots \quad \text{for all } \mathbf{R}.

We start from the exact non-relativistic, time-independent molecular Hamiltonian:

H = H_\mathrm{e} + T_\mathrm{n},

with

H_\mathrm{e} = -\sum_{i} \frac{1}{2}\nabla_i^2 - \sum_{i,A} \frac{Z_A}{r_{iA}} + \sum_{i>j} \frac{1}{r_{ij}} + \sum_{A>B} \frac{Z_A Z_B}{R_{AB}} \quad\mathrm{and}\quad T_\mathrm{n} = -\sum_{A} \frac{1}{2M_A}\nabla_A^2.

The position vectors \mathbf{r} \equiv \{\mathbf{r}_i\} of the electrons and the position vectors \mathbf{R} \equiv \{\mathbf{R}_A = (R_{Ax}, R_{Ay}, R_{Az})\} of the nuclei are with respect to a Cartesian inertial frame. Distances between particles are written as r_{iA} \equiv |\mathbf{r}_i - \mathbf{R}_A| (distance between electron i and nucleus A), and similar definitions hold for r_{ij} and R_{AB}. We assume that the molecule is in a homogeneous (no external force) and isotropic (no external torque) space. The only interactions are the Coulomb interactions between the electrons and nuclei. The Hamiltonian is expressed in atomic units, so that we do not see Planck's constant, the dielectric constant of the vacuum, electronic charge, or electronic mass in this formula. The only constants explicitly entering the formula are ZA and MA—the atomic number and mass of nucleus A. It is useful to introduce the total nuclear momentum and to rewrite the nuclear kinetic energy operator as follows:

T_\mathrm{n} = \sum_{A} \sum_{\alpha=x,y,z} \frac{P_{A\alpha} P_{A\alpha}}{2M_A} \quad\mathrm{with}\quad P_{A\alpha} = -i\,\frac{\partial}{\partial R_{A\alpha}}.
Suppose we have K electronic eigenfunctions \chi_k(\mathbf{r};\mathbf{R}) of H_\mathrm{e}, that is, we have solved

H_\mathrm{e}\,\chi_k(\mathbf{r};\mathbf{R}) = E_k(\mathbf{R})\,\chi_k(\mathbf{r};\mathbf{R}) \quad\mathrm{for}\quad k = 1,\ldots,K.

The electronic wave functions \chi_k will be taken to be real, which is possible when there are no magnetic or spin interactions. The parametric dependence of the functions \chi_k on the nuclear coordinates is indicated by the symbol after the semicolon. This indicates that, although \chi_k is a real-valued function of \mathbf{r}, its functional form depends on \mathbf{R}. For example, in the molecular-orbital-linear-combination-of-atomic-orbitals (LCAO-MO) approximation, \chi_k is an MO given as a linear expansion of atomic orbitals (AOs). An AO depends visibly on the coordinates of an electron, but the nuclear coordinates are not explicit in the MO. However, upon change of geometry, i.e., change of \mathbf{R}, the LCAO coefficients obtain different values and we see corresponding changes in the functional form of the MO \chi_k. We will assume that the parametric dependence is continuous and differentiable, so that it is meaningful to consider

P_{A\alpha}\,\chi_k(\mathbf{r};\mathbf{R}) = -i\,\frac{\partial \chi_k(\mathbf{r};\mathbf{R})}{\partial R_{A\alpha}} \quad\mathrm{for}\quad \alpha = x, y, z,

which in general will not be zero.

The total wave function \Psi(\mathbf{R},\mathbf{r}) is expanded in terms of \chi_k(\mathbf{r};\mathbf{R}):

\Psi(\mathbf{R},\mathbf{r}) = \sum_{k=1}^{K} \chi_k(\mathbf{r};\mathbf{R})\,\phi_k(\mathbf{R}),

with

\langle \chi_{k'}(\mathbf{r};\mathbf{R}) \,|\, \chi_k(\mathbf{r};\mathbf{R}) \rangle_{(\mathbf{r})} = \delta_{k'k},

and where the subscript (\mathbf{r}) indicates that the integration, implied by the bra-ket notation, is over electronic coordinates only. By definition, the matrix with general element

\big(\mathbb{H}_\mathrm{e}(\mathbf{R})\big)_{k'k} \equiv \langle \chi_{k'}(\mathbf{r};\mathbf{R}) \,|\, H_\mathrm{e} \,|\, \chi_k(\mathbf{r};\mathbf{R}) \rangle_{(\mathbf{r})} = \delta_{k'k}\, E_k(\mathbf{R})

is diagonal. After multiplication by the real function \chi_{k'}(\mathbf{r};\mathbf{R}) from the left and integration over the electronic coordinates \mathbf{r}, the total Schrödinger equation

H\,\Psi(\mathbf{R},\mathbf{r}) = E\,\Psi(\mathbf{R},\mathbf{r})

is turned into a set of K coupled eigenvalue equations depending on nuclear coordinates only,

\left[\mathbb{H}_\mathrm{n}(\mathbf{R}) + \mathbb{H}_\mathrm{e}(\mathbf{R})\right] \boldsymbol{\phi}(\mathbf{R}) = E\, \boldsymbol{\phi}(\mathbf{R}).

The column vector \boldsymbol{\phi}(\mathbf{R}) has elements \phi_k(\mathbf{R}),\; k = 1,\ldots,K. The matrix \mathbb{H}_\mathrm{e}(\mathbf{R}) is diagonal and the nuclear Hamilton matrix is non-diagonal with the following off-diagonal (vibronic coupling) terms,

\big(\mathbb{H}_\mathrm{n}(\mathbf{R})\big)_{k'k} = \langle \chi_{k'}(\mathbf{r};\mathbf{R}) \,|\, T_\mathrm{n} \,|\, \chi_k(\mathbf{r};\mathbf{R}) \rangle_{(\mathbf{r})}.

The vibronic coupling in this approach is through nuclear kinetic energy terms. Solution of these coupled equations gives an approximation for energy and wavefunction that goes beyond the Born-Oppenheimer approximation. Unfortunately, the off-diagonal kinetic energy terms are usually difficult to handle. This is why often a diabatic transformation is applied, which retains part of the nuclear kinetic energy terms on the diagonal, removes the kinetic energy terms from the off-diagonal and creates coupling terms between the adiabatic PESs on the off-diagonal.
If we can neglect the off-diagonal elements the equations will uncouple and simplify drastically. In order to show when this neglect is justified, we suppress the coordinates in the notation and write, by applying the Leibniz rule for differentiation, the matrix elements of T_\mathrm{n} as

\mathrm{H_n}(\mathbf{R})_{k'k} \equiv \big(\mathbb{H}_\mathrm{n}(\mathbf{R})\big)_{k'k} = \delta_{k'k}\, T_\mathrm{n} + \sum_{A,\alpha} \frac{1}{M_A} \langle \chi_{k'} \,|\, (P_{A\alpha}\chi_k) \rangle_{(\mathbf{r})}\, P_{A\alpha} + \langle \chi_{k'} \,|\, (T_\mathrm{n}\chi_k) \rangle_{(\mathbf{r})}.

The diagonal (k' = k) matrix elements \langle \chi_k \,|\, (P_{A\alpha}\chi_k) \rangle_{(\mathbf{r})} of the operator P_{A\alpha} vanish, because this operator is Hermitian and purely imaginary. The off-diagonal matrix elements satisfy

\langle \chi_{k'} \,|\, (P_{A\alpha}\chi_k) \rangle_{(\mathbf{r})} = \frac{\langle \chi_{k'} \,|\, [P_{A\alpha}, H_\mathrm{e}] \,|\, \chi_k \rangle_{(\mathbf{r})}}{E_k(\mathbf{R}) - E_{k'}(\mathbf{R})}.

The matrix element in the numerator is

\langle \chi_{k'} \,|\, [P_{A\alpha}, H_\mathrm{e}] \,|\, \chi_k \rangle_{(\mathbf{r})} = i Z_A \sum_i \,\langle \chi_{k'} \,|\, \frac{(\mathbf{r}_{iA})_\alpha}{r_{iA}^3} \,|\, \chi_k \rangle_{(\mathbf{r})} \quad\mathrm{with}\quad \mathbf{r}_{iA} \equiv \mathbf{r}_i - \mathbf{R}_A.

The matrix element of the one-electron operator appearing on the right-hand side is finite. When the two surfaces come close, E_k(\mathbf{R}) \approx E_{k'}(\mathbf{R}), the nuclear momentum coupling term becomes large and is no longer negligible. This is the case where the BO approximation breaks down and a coupled set of nuclear motion equations must be considered, instead of the one equation appearing in the second step of the BO approximation.

Conversely, if all surfaces are well separated, all off-diagonal terms can be neglected and hence the whole matrix of P_{A\alpha} is effectively zero. The third term on the right-hand side of the expression for the matrix element of T_\mathrm{n} (the Born-Oppenheimer diagonal correction) can approximately be written as the matrix of P_{A\alpha} squared and, accordingly, is then negligible also. Only the first (diagonal) kinetic energy term in this equation survives in the case of well-separated surfaces, and a diagonal, uncoupled set of nuclear motion equations results,

\left[T_\mathrm{n} + E_k(\mathbf{R})\right] \phi_k(\mathbf{R}) = E\,\phi_k(\mathbf{R}) \quad\mathrm{for}\quad k = 1,\ldots,K,

which are the normal second-step BO equations discussed above. We reiterate that when two or more potential energy surfaces approach each other, or even cross, the Born-Oppenheimer approximation breaks down and one must fall back on the coupled equations. Usually one then invokes the diabatic approximation.

Historical note

The Born-Oppenheimer approximation is named after M. Born and R. Oppenheimer, who wrote a paper [Annalen der Physik, vol. 84, pp. 457-484 (1927)] entitled Zur Quantentheorie der Moleküle (On the Quantum Theory of Molecules). This paper describes the separation of electronic motion, nuclear vibrations, and molecular rotation. Somebody who expects to find in this paper the BO approximation—as it is explained above and in most modern textbooks—will be in for a surprise. The reason is that the presentation of the BO approximation is well hidden in Taylor expansions (in terms of internal and external nuclear coordinates) of (i) electronic wave functions, (ii) potential energy surfaces and (iii) nuclear kinetic energy terms.
Internal coordinates are the relative positions of the nuclei in the molecular equilibrium and their displacements (vibrations) from equilibrium. External coordinates are the position of the center of mass and the orientation of the molecule. The Taylor expansions complicate the theory and make the derivations very hard to follow. Moreover, knowing that the proper separation of vibrations and rotations was not achieved in this paper, but only 8 years later [by C. Eckart, Physical Review, vol. 46, pp. 383-387 (1935)] (see Eckart conditions), one is not very much motivated to invest much effort into understanding the work by Born and Oppenheimer, however famous it may be. Although the article still collects many citations each year, it is safe to say that it is not read anymore (except perhaps by historians of science).
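As a rough numerical illustration of the two-step procedure described above (and not of the original 1927 treatment), one can take a one-dimensional potential energy curve E_e(R) from a hypothetical first step — here a Morse curve stands in for the electronic energies — and diagonalize T_n + E_e(R) on a grid to obtain vibrational levels. All parameters are invented and given in atomic units.

import numpy as np

# Second step of the BO approximation in one dimension: diagonalize
# T_n + E_e(R) on a grid.  E_e(R) is a Morse curve standing in for the
# electronic energies obtained in step one; parameters are illustrative.
D_e, a, R_e, mu = 0.17, 1.0, 2.0, 1000.0   # well depth, width, minimum, reduced mass

R = np.linspace(0.8, 8.0, 800)
dR = R[1] - R[0]
E_e = D_e * (1.0 - np.exp(-a * (R - R_e)))**2          # the "PES" from step one

# Nuclear kinetic energy by a three-point finite difference: -1/(2*mu) d^2/dR^2
diag = 1.0 / (mu * dR**2) + E_e
off  = np.full(R.size - 1, -0.5 / (mu * dR**2))
H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

levels = np.linalg.eigvalsh(H)[:4]
print("lowest vibrational levels (hartree):", np.round(levels, 5))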
Statistical physics
From Wikipedia, the free encyclopedia

Statistical physics is a branch of physics that uses methods of probability theory and statistics, and particularly the mathematical tools for dealing with large populations and approximations, in solving physical problems. It can describe a wide variety of fields with an inherently stochastic nature. Its applications include many problems in the fields of physics, biology, chemistry, neurology, and even some social sciences, such as sociology. Its main purpose is to clarify the properties of matter in aggregate, in terms of physical laws governing atomic motion.[1]

In particular, statistical mechanics develops the phenomenological results of thermodynamics from a probabilistic examination of the underlying microscopic systems. Historically, one of the first topics in physics where statistical methods were applied was the field of mechanics, which is concerned with the motion of particles or objects when subjected to a force.

Statistical mechanics

Statistical mechanics provides a framework for relating the microscopic properties of individual atoms and molecules to the macroscopic or bulk properties of materials that can be observed in everyday life, therefore explaining thermodynamics as a natural result of statistics, classical mechanics, and quantum mechanics at the microscopic level. Because of this history, statistical physics is often considered synonymous with statistical mechanics or statistical thermodynamics.[note 1]

One of the most important equations in statistical mechanics (analogous to F = ma in mechanics, or the Schrödinger equation in quantum mechanics) is the definition of the partition function Z, which is essentially a weighted sum of all possible states q available to a system:

Z = \sum_q \mathrm{e}^{-\frac{E(q)}{k_BT}}

where k_B is the Boltzmann constant, T is the temperature and E(q) is the energy of state q. Furthermore, the probability of a given state, q, occurring is given by

P(q) = \frac{\mathrm{e}^{-\frac{E(q)}{k_BT}}}{Z}

A statistical approach can work well in classical systems when the number of degrees of freedom (and so the number of variables) is so large that an exact solution is not possible, or not really useful. Statistical mechanics also finds application in non-linear dynamics, chaos theory, thermal physics, fluid dynamics (particularly at high Knudsen numbers), and plasma physics.

Notes
1. ^ This article presents a broader sense of the definition of statistical physics.

References
1. ^ Huang, Kerson. Introduction to Statistical Physics (2nd ed.). CRC Press. p. 15. ISBN 978-1-4200-7902-9.

Further reading
• Mallett, M.; Blumler, P. Thermal and Statistical Physics (lecture notes, web draft, 2001).
• Müller-Kirsten, Harald J. W. Basics of Statistical Physics (2nd ed.). University of Kaiserslautern, Germany.
• Kadanoff, L. P. Statistical Physics.
• Kadanoff, L. P. Statistical Physics – Statics, Dynamics and Renormalization.
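A minimal sketch of the partition function and Boltzmann probabilities defined above, for a hypothetical set of discrete energy levels (the levels and temperature are made up for illustration):

import numpy as np

k_B = 1.380649e-23          # J/K
T   = 300.0                 # K
E   = np.array([0.0, 1.0, 2.0, 5.0]) * 1e-21   # made-up energy levels, in joules

boltzmann = np.exp(-E / (k_B * T))
Z = boltzmann.sum()                 # partition function: weighted sum over states
P = boltzmann / Z                   # probability of each state

for i, (Ei, Pi) in enumerate(zip(E, P)):
    print(f"state {i}: E = {Ei:.1e} J, P = {Pi:.3f}")
print("check: probabilities sum to", P.sum())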
Simulating Hamiltonian evolution with quantum computers

In 1982, Richard Feynman proposed the concept of a quantum computer as a means of simulating physical systems that evolve according to the Schrödinger equation. I will explain various quantum algorithms that have been proposed for this simulation problem, including my recent work (jointly with Dominic Berry and Rolando Somma) that significantly improves the running time as a function of the precision of the output data.

Event Date: Wednesday, October 16, 2013 – 14:00 to 15:30
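The abstract does not spell out the algorithms, but the flavour of the simulation problem can be sketched classically: approximate exp(−i(A+B)t) by alternating short evolutions under A and B (a first-order Trotter product) and watch the error fall as the number of steps grows. This is a classical toy on a single qubit, not one of the quantum algorithms discussed in the talk; the Hamiltonian coefficients are arbitrary.

import numpy as np
from scipy.linalg import expm

# Toy first-order Trotter approximation of exp(-i(A+B)t) on one qubit.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
A, B = 0.7 * X, 0.4 * Z        # H = A + B, made-up coefficients
t = 1.0

exact = expm(-1j * (A + B) * t)
for n in (1, 4, 16, 64):
    step = expm(-1j * A * t / n) @ expm(-1j * B * t / n)
    trotter = np.linalg.matrix_power(step, n)
    err = np.linalg.norm(trotter - exact, 2)
    print(f"n = {n:3d} Trotter steps: error ~ {err:.2e}")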
Science Is What Works Science Is What Defines Our Species Best. Science is industrial strength truth, and that works. Science, well done, teaches wonder, and humility. We are all, or we should all be, scientists (those who are paid for that, therefore, ought to spare the public who finance them arrogance, sarcasm and appearing certain of what they ought not to be certain of). Let me wax lyrical on this theme (suggested by an essay of Matthew Francis). Some of these skills could disappear, as artificial intelligence becomes ubiquitous: the driver of a car instinctively learn some rudiments of mechanics. Yet, when automatic cars appear, those rudiments will go away. This happened before: a Neanderthal equipped with a spear-thrower (atlatl) had to know, instinctively, quite a bit of physics about dynamics, aerodynamics, angular momentum, inertia, etc. Astute and cynical commenters will no doubt observe that this is how dogs learn calculus… Instinctively. So what? One hopes to build “Boson Sampling” computers. They will be just something that works, just as spear throwers did. Don’t ask why: nobody knows, not anymore than Neanderthals “knew” all this physics to send a dart 100 meters away. Science is just what works. Some revere equations, and feel they differentiate “science” from what was before. Illusion. Equations just depict ideas. Equations can be very hard. Some we have no …idea how to handle them (Navier-Stokes, the most useful equation supposed to depict fluid flow). It’s hard to find new ideas. However, some, once found and accepted, can be amazingly simple. The invention of Non-Euclidean geometry just amounted to admit a pre-Euclidean idea: one could make geometry on a sphere, or a saddle, not just a flat surface. Inventing Non-Euclidean geometry was more of a philosophical change of perspective than anything else. It took 21 centuries to make it. It was not a question of equations. Actually, there are no equations in Euclidean geometry. Similarly Einstein took Poincare’s observation that the constancy of the speed of light should be viewed as a physical law, and got the Lorentz group from it. Modulo some mathematics so trivial, Poincare’ had not bother to make them explicit, when he talked about the “Principle Of Relativity”. Again a philosophical change of perspective. Or Einstein (again) took Planck’s idea of quantified emission of light, and decided that was proof enough that there was such a thing as light quanta. Planck disapproved. Planck was not impressed that this outrageous idea “explained” the photoelectric effect discovered 80 years earlier. When he recommended Einstein for jobs, Planck asked the would-be employers to overlook that silly mistake of an exuberant young man (Einstein got the Nobel for that simple “lichtquanten” [light quanta] idea in 1923). Philosophical change of perspective, again. The discovery of Dark Matter and Dark Energy were as unexpected as that of Quantum Theory. However there is an important philosophical difference. Planck’s quantified emission of radiation “explained” right away two well-known, yet baffling, experimental facts: the non-occurring “ultraviolet catastrophe”, and the Blackbody Radiation. In the present situation, we are not even completely sure that Dark Matter and Dark Energy are really observed facts. The philosophical perspectives, let alone the physical ones, are vast. Breakthroughs will come, first, from simple ideas. Complicated equations will follow. 
We appreciate the brutal beauty of the universe as our judge, because we evolved that way. We evolved to find those elements of reality we call the truth. Our glorious survival blossomed that way. Science is what we do, as a species. And philosophy is our oracle. We evolved into thinking that we are. We are what we think. Patrice Ayme 17 Responses to “Science Is What Works” 1. Matthew R. Francis Says: Matthew R. Francis January 11, 2014 at 07:03 What you say sounds reasonable on its face, but there are number of problems with your arguments. We use equations in physics because they are effective. The Navier-Stokes equation helps us describe physical phenomena successfully; it doesn’t matter whether you understand it philosophically or not. To cite the most important example of all: people still debate over the proper way to interpret quantum mechanics, but everyone uses the Schrödinger equation and the other mathematical tools because those are the way to do quantum physics. That’s not to say the interpretation isn’t important, but the equations are essential. Also, you get the cosmological issues backward. Dark matter and dark energy are observed phenomena (“facts” if you will, though I dislike using that term). “Dark energy” in particular is just the name we give to the observed accelerated expansion of the Universe, for which we currently don’t have a good theoretical explanation. “Dark matter” similarly is the name we give to the simplest explanation for a wide variety of astronomical observations, from the rotation of galaxies to the sound waves in the cosmic microwave background (see the detailed discussion in http://galileospendulum.org/2013/03/21/planck-results-our-weird-and-wonderful-universe/ for more on that second point). These are observations for which we need more theory and observation, not philosophical perspectives. Conceptual breakthroughs happen, but they follow hard work. Newton didn’t spontaneously come up with gravity, and Einstein didn’t spontaneously think of relativity. Both of these breakthroughs came after long strenuous efforts, and were built on ideas, experiments, and observations from many others who came before them. When we figure them out, dark energy and dark matter will be no different. After all, we’ve known about dark matter since the 1930s and dark energy since 1998 (with inklings of its existence before then). If all it took was a philosophical perspective, we’d have solved it by now. To reiterate, physics is hard, but worth it. • Patrice Ayme Says: Dear Matthew: I did not say the Navier-Stokes equation had to be understood “philosophically”. I just alluded to the fact that, although it depicts fluid flow, the general existence and smoothness solutions of this non linear PDE have not been proven (I actually don’t believe they exist). Newton did not come up with the gravity law, by the way. He exploited it further. The French astronomer Ismaël Boulliau suggested that Kepler was wrong about the gravitational force. Kepler had declared that the gravitational force holding the planets in place decreased inversely to distance. Boulliau held instead that the force decreased as an inverse square law. He deduced this in analogy to light. Isaac Newton acknowledged Boulliau’s discovery. Nobody dares to suggest the equations related to Quantum Theory are not essential. To a great extent, they are all what defines the theory. QFT is all about guessing the Laplacian, aka the equation(s). The situation with Dark Stuff is not similar. 
They are not directly observed phenomena (just ask LHC people). The “observations” of both Dark Matter and Dark Energy are the fruits of (philosophical) pruning. The former depends, among other things, upon the hypothesis that gravity holds at galactic scales (some employed astronomers claim gravity does not work beyond the Solar System… as seems to be the case, at face value!) It’s hard to evaluate things we don’t know, such as galactic mass (the Milky Way has grown in astronomers’ minds recently) to make further guesses about something else. In the case of Super Novae studies, outliers explosions are removed from the sampling. I could not read a clear enough description of what was found (I read the original literature) to see if my pet theory survives. Boldly supposing that something is really going on (I know a Nobel was attributed), we are very far from being able to describe the thing (whether, for example it’s a Cosmological Constant or Quintessence field description). Physics is what we do, it did not start with Newton. Or Buridan, who discovered inertia, or Aristotle, who got that wrong. Physics, finding new physics is desperately hard, but so worth it, our lives depend upon it. They always have. • Patrice Ayme Says: What I am driving at, is that just reducing physics to equations is too reductive. • Paul Handover Says: I’m sure this is familiar to Matthew but for me this recent item on the BBC News website had me spellbound: http://www.bbc.co.uk/news/science-environment-25663810 Universe measured to 1% accuracy Astronomers have measured the distances between galaxies in the universe to an accuracy of just 1%. This staggeringly precise survey – across six billion light-years – is key to mapping the cosmos and determining the nature of dark energy. The new gold standard was set by BOSS (the Baryon Oscillation Spectroscopic Survey) using the Sloan Foundation Telescope in New Mexico, US. It was announced at the 223rd American Astronomical Society in Washington DC. Continue reading the main story Start Quote “I now know the size of the universe better than the size of my house” Prof David Schlegel BOSS principal investigator “There are not many things in our daily lives that we know to 1% accuracy,” said Prof David Schlegel, a physicist at Lawrence Berkeley National Laboratory and the principal investigator of BOSS. But the aspect that really generated that spellbound feeling was this: The latest results indicate dark energy is a cosmological constant whose strength does not vary in space or time. They also provide an excellent estimate of the curvature of space. “The answer is, it’s not curved much. The universe is extraordinarily flat,” said Prof Schlegel. “And this has implications for whether the universe is infinite. While we can’t say with certainty, it’s likely the universe extends forever in space and will go on forever in time. Our results are consistent with an infinite universe,” he said. “it’s likely the universe extends forever in space and will go on forever in time.” I find that utterly beyond imagination! Matthew, PLEASE help me out! 😉 • Patrice Ayme Says: Hi Paul! There is a whole culture of people out there who view scientists, the way priests used to be seen. This is very wrong. Matthew makes it clear on his site that he does not take it lightly to those who do not use proper reverence. This way he reminds me of Mr. Lack. I’m a mathematician, and, he, clearly is not (he thought I said something philosophical about Navier-Stokes!). 
All research mathematicians know the Navier-Stokes is one of the seven “Millennium” problems of the Clay Institute. There is a one million dollar prize for it. I don’t believe it can be always solved, because it neglects QUANTUM effects. What you reported there is very interesting. I vaguely saw come across, and took it tongue in cheek. It’s nevertheless striking to see this in print, a few weeks after my own: By coincidence I was writing something about Dark Matter (I have had the same theory for decades; one can say it predicted Dark Matter!) Here is a little help to give you. The Big Bang theory is brutal, definitive, on a limited time span, and based on naïve assumptions. In one word: Biblical. The problem you have is that it seems to conflict with: Well, not really. Anyway this cat can help you more than Mr. Matthew…. Methinks. • Paul Handover Says: Thank you for your length reply. Yes, I understand, and share, your criticism of the Big Bang theory. I wasn’t in conflict, per se, with the idea of an infinite universe. It was just that I couldn’t understand, in a scientific sense, a universe that is boundless. I.e. it has no start or end. The reason I have such trouble in understanding is that everything material that I am aware of, from the atom to the solar system, has a start and an end. Therefore, if the universe has NO start or end then somewhere between the fabric of our solar system and the universe there must be a boundary where the rules of matter change. Not even sure if I’m making myself clear! • Patrice Ayme Says: Dear Paul: The universe we see now is about 30 billion light years across. That defies understanding. In practice, it’s infinite. I have actually argued that the very notion of infinity, in MATHEMATICS, is defined by the size of the universe itself. I don’t know for sure that there is one mathematician besides myself who understand what it means. • Paul Handover Says: Almost the philosophy of mathematics! Or is it the mathematics of philosophy? 😉 Thanks Patrice. 2. Alexi Helligar Says: Alexi Helligar Truth is what works. 3. Paul Handover Says: Sorry about this but becoming fixated on this universe size thing! I see that 1 light year = 9.4605284 x 10 to the 15 meters Ergo, 3 light years = 28.3815852 x 10 to the 15 meters, or 28.3815852 x 10 to the 12 kilometres. (28.38 trillion kilometres) Thus 30 BILLION light years is: 28.3815852 x 10 to the 21 Have I done that correctly, Mr. Mathematician? • Paul Handover Says: Sorry should have included the measure: Thus 30 BILLION light years is: 28.3815852 x 10 to the 21 kilometres. • Patrice Ayme Says: Dear Paul: Look, I just called apes apes, in my latest post, so I am tired, and I’m sure you handle the math OK. I think the pictures of billions of galaxies are more telling than powers of ten, anyway. There is a new Hubble Deep Field, showing huge, very bright galaxies. Two things: 1) it seems to show the universe evolved. 2) seems to me like another Big Bang headache, as it’s not clear how such big things could have evolved 500 million years after the alleged BB. • Patrice Ayme Says: The size of the universe is an excellent thing to be fixated about. Men used to be the measure of all things, it may be wiser to prefer the universe itself to confer scale. 4. DIVING INTO TRUTH | Patrice Ayme's Thoughts Says: 5. Nature Physical Law From Galileo’s Pendulum Xchge | Tyranosopher Overflow Says: […] Physics, finding new physics is desperately hard, but so worth it, our lives depend upon it. They always have. 
https://patriceayme.wordpress.com/2014/01/11/science-is-what-works/ […]
2DEGs and 2DHGs

Two-dimensional electron and hole gases (2DEGs and 2DHGs respectively) can be described as having quantised energy levels for one spatial dimension, but being free to move in the other two. They can be produced by adjoining semiconductors with differently sized band gaps, creating what is known as a heterojunction.

Figure 1: An energy level schematic of a heterojunction and the resulting square well. The wells created in the conduction and valence bands can be occupied by electrons and holes respectively, as shown. It is possible for an electron-hole pair to combine and create a photon, the energy of which can be used to infer the structure of the energy levels within the wells.

Potential Wells

Depending on how heterojunctions are manufactured, the wells in which the 2DEGs and 2DHGs exist may have different forms such as square, triangular or parabolic. For wells of infinite potential, the energy level solutions are as follows. For a square well of width a:

E_{n}=\frac{\pi^2 \hbar^2 n^2}{2 m^\star a^2}

and for a triangular well:

E_{n}=\left(\frac{\hbar^2}{2m^\star}\right)^{1/3} \left(\frac{3\pi eF}{2}\right)^{2/3} \left(n+\frac{3}{4}\right)^{2/3}

where F is the magnitude of the electric field at the interface and m^\star is the effective mass of the electron or hole.

Figure 2: An energy level schematic of a triangular and parabolic well.

The triangular well approximation (shown in Figure 2) is widely used to describe electron energy bands at single heterojunctions. It allows the exact analytical solution of the Schrödinger equation (which must be solved self-consistently along with the Poisson equation in order to determine the electronic structure at a heterointerface) in terms of Airy functions,

\zeta_{n}\left(z\right)=\mathrm{Ai}\left[\left(\frac{2m^\star eF}{\hbar^2}\right)^{1/3}\left(z-\frac{E_{n}}{eF}\right)\right]

the eigenvalues of which describe the energy levels given above. In practical calculations, however, simpler, approximate analytic wave functions make calculations much more convenient. The simplest of these is the Fang–Howard wave function,

\psi_{0}(z)=\sqrt{\frac{b^3}{2}}\, z\, \mathrm{e}^{-bz/2}

where z > 0 and b is determined by minimizing the total energy. This function, however, does tend to overestimate the energy for the ground sub-band by around 6%. A more accurate wave function is given by Takeda and Uemura, which yields a ground sub-band energy only 0.4% larger than the exact Airy function value.

Solving the coupled Schrödinger and Poisson equations (1D)

The Poisson equation shows that potential is directly related to charge density. Schrödinger's equation is also related to the charge density, though not directly. Firstly we have the Schrödinger equation itself,

\left[-\frac{\hbar^2}{2}\frac{d}{dz}\left(\frac{1}{m^\star(z)}\frac{d}{dz}\right) +V(z)\right]\Psi(z)=E\Psi(z)

where \Psi(z) is the electron wave function. We note that the occupation of electronic states k is given by the Fermi-Dirac distribution, which then allows the spatial density of electrons n(z) to be calculated using the solution of the Schrödinger equation (or an approximate solution, such as those mentioned above), where m is the number of bound states. We must now ask 'How does the charge density \rho(z) relate to the electron density n(z)?'
The answer is fairly straightforward once you consider the electron donors, which become positively charged and have density N_{D}. This enables the Poisson equation to be re-written in terms of n(z) and N_{D}, where \epsilon(z) is the permittivity of the material. In order to solve Schrödinger and Poisson self-consistently, one starts with a trial potential \phi(z) and solves Schrödinger's equation. n(z) is then calculated from the obtained wave functions and their corresponding eigenenergies. A second value of \phi(z) may then be found from Poisson's equation (using n(z) and N_{D}). This second potential is then fed back into the Schrödinger equation, and more iterations take place until |\phi_{i}(z)-\phi_{i-1}(z)| is less than a certain criterion.

A second example

In the following section we will consider a Si/SiGe heterojunction and the resulting 2DHG in order to demonstrate a second method of describing the junction in terms of energy parameters and dopant concentrations.

Figure 3: A simple schematic diagram of the valence band in a particular SiGe heterojunction. E_{A} is the activation energy and l is the acceptor depletion width. One can clearly see that electrons at the interface have occupied the adjacent acceptor sites, leaving behind a 2DHG in the potential well.

The sheet carrier density is obtained from the two-dimensional density of states for a single sub-band, and is given by equation (1). The electric field in the well is assumed to be uniform, allowing the triangular well approximation to be used, with the ground-state energy given by

(2)  E_{0}=\left(\frac{\hbar^2}{2m^\star}\right)^{1/3} \left(\frac{3\pi eF_{0}}{2}\right)^{2/3} \left(\frac{3}{4}\right)^{2/3}

where F_{0} is the magnitude of the electric field at the interface just inside the SiGe alloy layer, with N_{Depl} as the charge arising from background donor depletion within the SiGe alloy, the Si buffer layer and the Si substrate. Allowing for the possibility of negatively charged impurities with sheet density n_{i} at the Si/SiGe interface, the electric field may also be written in terms of N_{A}, the acceptor concentration. The set of equations can now be closed by adding all the potential variations from the bottom of the well up to the top of the valence band offset:

(3)  \Delta E_{V}=E_{0}(n_{S})+E_{F}+E_{A}+\frac{e^2}{2\epsilon_{0}\epsilon_{r}}N_{A}l(n_{S})^2+\frac{e^2}{\epsilon_{0}\epsilon_{r}}N_{A}L_{S}l(n_{S}).

n_{S} can then be found by choosing an initial value for E_{F} and solving (3) for n_{S}. This value is then compared to that given by equation (1) for this same Fermi energy. This method is repeated, moving along a range of E_{F}, until the two yielded values of n_{S} are within a certain tolerance of each other, in much the same way as the previous example.

1. S. M. Sze, Semiconductor Devices, John Wiley & Sons, 1985.
2. J. H. Luscombe et al., Physical Review B 46 (1992).
3. C. J. Emeleus et al., J. Appl. Phys. 73 (1993).
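A bare-bones sketch of the self-consistent Schrödinger–Poisson loop described above. The physics is deliberately stripped down (reduced units, a placeholder doping profile, a fixed sheet density in the lowest subband), so the numbers mean nothing — only the iterate-until-converged structure is the point.

import numpy as np

# Skeleton of the self-consistent Schrodinger-Poisson iteration described
# above, in reduced units on a 1D grid.  The effective mass, permittivity,
# doping profile and sheet density are placeholders, not a real device.
hbar2_2m = 0.5                      # hbar^2 / (2 m*), reduced units
eps      = 1.0                      # permittivity
n_sheet  = 0.05                     # fixed occupation of the lowest subband

z   = np.linspace(0.0, 20.0, 400)
dz  = z[1] - z[0]
N_D = 0.02 * np.ones(z.size)        # toy ionised-donor profile
phi = 0.5 * z / z[-1]               # initial guess for the potential V(z)

def lowest_subband(V):
    """Lowest eigenpair of -hbar2_2m d2/dz2 + V with psi = 0 at the box edges."""
    main = 2.0 * hbar2_2m / dz**2 + V
    off  = -hbar2_2m / dz**2 * np.ones(z.size - 1)
    E, U = np.linalg.eigh(np.diag(main) + np.diag(off, 1) + np.diag(off, -1))
    psi = U[:, 0] / np.sqrt(np.sum(U[:, 0]**2) * dz)    # normalise
    return E[0], psi

for it in range(200):
    E0, psi = lowest_subband(phi)
    n = n_sheet * psi**2                     # electron density from the subband
    rho = N_D - n                            # net charge density
    field = np.cumsum(-rho / eps) * dz       # integrate Poisson's equation once
    phi_new = -np.cumsum(field) * dz         # ... and a second time
    phi_new -= phi_new.min()                 # fix the arbitrary offset
    if np.max(np.abs(phi_new - phi)) < 1e-5: # convergence test |phi_i - phi_(i-1)|
        break
    phi = 0.9 * phi + 0.1 * phi_new          # damped mixing for stability

print(f"stopped after {it + 1} iterations; ground subband energy E0 = {E0:.4f}")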
Molecular modelling is changing chemistry – could it change education too? Ida Emilie Steinmark takes a closer look at computational chemistry methods Chemistry isn’t what it used to be. There was a time when things were, some might argue, simpler. It was practical, wet, occasionally dirty and sometimes smelly. Now, a large group of chemists are exchanging the 8-hour lab sessions for screen marathons, not working with chemicals but simulating them. The rise of molecular modelling is changing the face of chemistry in exciting new ways, which could present both opportunities and challenges to the chemical sciences as a whole and to chemistry education. Molecular modelling has been called the fourth axis of chemistry – it lies somewhere between theory, observation and experiment. At its core are attempts to describe the state and behaviour of molecules through computer simulations – a fundamentally different approach to the rest of chemistry. For Carmen Domene, a computational chemist at King’s College London, it is also a way to see chemistry from a completely new angle: ‘The computer is a microscope for things that other techniques can’t see,’ she says, paraphrasing the great computational biophysicist Klaus Schulten, who passed away in 2016. Swift and simple or slow and steady? Simulations are like calculated pictures of atoms and molecules, sometimes at a moment in time, sometimes over a dynamic range. This time dimension, along with size, mostly determines what type of simulation you can use. ‘If you want to try to model some particular situation, you have to bear in mind the [size] of your process, and the timescale,’ says Carmen. ‘The system might have one atom or one million atoms. You will have to choose which kind of technique is appropriate for what you want to model and what you want to understand.’ This means we can roughly divide the field into two halves: molecular mechanics and quantum chemistry. Molecular mechanics is the classical physics approach and is therefore arguably more intuitive. ‘The atoms are represented like balls with a particular mass and a particular charge and they are linked to each other with springs,’ Carmen explains. A computational chemist will then calculate the energy of a molecule based on a function that includes energy terms related to bonding and non-bonding interactions – a function futuristically named the force field. ‘Then you can use Newton’s laws of motion and Hooke’s law to study a dynamic system,’ Carmen says. ‘You can generate movies of how the particles evolve with time. But with that approach, you can’t understand chemical reactions.’ This is where the shortcomings of molecular mechanics start to show: it is a quick and efficient method but it is simplified and lacks accuracy. The other approach, quantum mechanics, makes up for this with its rigour. Leaving the simplistic ball-and-spring system behind, quantum mechanistic methods are aimed at solving Schrödinger equations, allowing scientists to study electronic structure and the details of bond making and bond breaking processes. Unfortunately, it is also very computationally intensive. ‘If we go to really large systems, like reactions taking place within proteins, you have a system with one hundred to one million atoms,’ Carmen says. ‘You can’t really use quantum chemistry with those large systems.’ As computers continue to improve, they’re capable of calculating even larger parts quantum chemically, but this isn’t necessarily the joyride it would appear to be. 
‘We have better and better computers, but the problem we have now is the data,’ Carmen says. ‘We don’t have the storage, and when we try to analyse the data, it takes a long, long time.’ This issue isn’t limited to chemical fields. In fact, if anything is going to put a spanner in the works of the computational revolution, data could be it. ‘The bottleneck is the amount of data we produce and how to store it,’ she says. Luckily she adds: ‘But you don’t really need quantum for all the calculations we do.’  Clearly, the optimal method would be one that combined the speed of molecular mechanics and the accuracy of quantum mechanics. A massive step towards this was achieved in 1976 when two scientists, Arieh Warshel and Michael Levitt, published a paper about enzymatic reactions that outlined a new, powerful tool named QM/MM. ‘Imagine you are on top of a bridge over a motorway, and you can see the cars passing. If you use quantum chemistry, you just take a picture of one of these [cars passing],’ Carmen says. ‘But what [Warshel and Levitt] were able to do with this technique, was to see all the cars going past, one after the other. They used Newton’s laws of motion so you can see what happens in time, but the description of the system was quantum.’ Here, the most important part of the system under investigation, where actual chemistry takes place, is handled by quantum chemical calculations while the rest of the system is dealt with by molecular mechanics. By doing it this way, Warshel and Levitt were able to simulate a much larger system in a computationally efficient way. This was so groundbreaking they were awarded the 2013 Nobel prize in chemistry, along with Martin Karplus, who had distinguished himself in almost all types of molecular modelling, most notably in the molecular dynamics of biological systems. According to Carmen, the advantages of the approach are obvious. ‘In some systems, the environment is very important and you have to include it in your calculations and that’s why you need QM/MM,’ she explains. ‘Where the reaction takes place, you describe it in a quantum chemistry way, and then you see the rest of the environment classically.’ Simulating the extra mile Molecular modelling has, since its first implementation, proved to be immensely useful, particularly where chemistry overlaps with molecular biology. Simulation is a great way to study intricate biological problems like protein folding, enzyme reactions and conformational change. But it also finds great use in core chemical subjects. Natalie Fey, computational chemist and lecturer at Bristol University, is using computational methods to study organometallic catalysis. ‘Energetically a lot of catalytic cycles are very, very finely balanced. So what I’m interested in is whether we can use computational chemistry to support that process, but in the longer term also get ahead of it – to be able to predict what the best system is for a given reaction.’ She outlines three main ways in which computational methods can complement synthesis. Firstly, it allows a more quantitative approach. Take characterising and comparing ligands on a transition metal catalyst for example. ‘You might say that chemists do this all the time by just looking at structures, and that’s absolutely true. What we can do by adding the computation is to make it quantitative. Instead of just saying “they look different”, we can actually say “they’re different by x amount”,’ she explains. 
A second factor is predictability, which can avoid months of trial and error in the lab. ‘We can try to predict barriers to reactions,’ she says, ‘and we can see how a change to the catalyst can change those barriers. That is something that is very, very hard to do synthetically, but it’s our bread and butter computationally.’ And finally, there’s the issue of transition states. ‘Computational chemistry is really the only way to observe transition states in chemical reactions – they cannot be directly observed experimentally,’ Natalie explains. ‘By actually allowing us to plot reaction energy profiles, not only can we compare processes in terms of whether they’re thermodynamically favourable, but also whether they’re kinetically favourable.’ For anyone who’s ever taken a mechanistic organic or inorganic course, that should impress – and chemistry as a field certainly is impressed. ‘We really cannot satisfy the demand people now have for adding computational studies to their synthetic work,’ she says. ‘If you look through the literature now, there are more and more studies, and it’s being demanded by referees as well. They’ll say, “Ok, so you’ve come up with an idea about what the reaction mechanism might be, but can you prove it? Can you support that suggestion by running some calculations?”’ Worth a thousand words It’s clear that computational chemistry has reached maturity within the chemical sciences, but that does leave some questions for educators. Should it be taught in school and in more undergraduate courses, and if so, how? Alan Shusterman, computational chemist and professor of chemistry at Reed College in Oregon, US, says it’s difficult to find the room even if you think it’s important. ‘From a teacher’s perspective, they don’t necessarily see how to make it work. Where would computers go in the lesson plan?’ he explains. ‘It’s a challenge for me, too, to find room in a very crowded curriculum.’ One trick might be to incorporate modelling methods into other topics that are already taught, not unlike how computation is being used outside purely computational research. ‘Computation is permeating all kinds of research now, so many experimental papers have computational components,’ Alan says. ‘Students in my organic chemistry class are working with molecular models and electronic structure models as part of their laboratory experience. They learn how to build models, how to predict the energy of different conformations of molecules and how to predict NMR and IR spectra.’ That way, students are exposed to molecular modelling while practicing their existing chemical knowledge. In fact, using computation and modelling provides a great opportunity for teaching students, according to Alan. ‘Their [gained] skillset directly translates into being a chemist. I emphasise graphics a lot, so they’re seeing things they otherwise would only have explained to them using words or through mathematical equations,’ he says. ‘It’s about trying to make concepts real. How do molecules move? How are electrons distributed? How do molecules stick together? I can show them a picture.’ He also stresses the exposure gives them the ability to interpret computational data and results in a meaningful way. ‘Part of my advanced course is reading papers and discussing them because I know students want to understand that kind of language.’ Despite the advantages of modelling in chemistry education, there are also challenges. 
In the English AS- and A-level curriculum, while there’s a big focus on the students’ practical and mathematical abilities (with good reason), computer skills aren’t mentioned, despite a big push recently to introduce schoolchildren to computing and coding. As such, for chemistry students starting university, computational methods might feel a little out of place. ‘Most students don’t come to it thinking “oh yes, this is a natural part of chemistry”,’ Alan explains. ‘Some look at it and think, “This doesn’t contain the features of chemistry that drew me to the subject”.’ This bias can of course also work in the opposite direction – students who don’t particularly like laboratory work but enjoy computational work may not realise chemistry could still be for them. Regardless of how the chemistry education field decides to handle computational methods and molecular modelling, it appears these tools are here to stay. A 2012 report from Goldbeck Consulting explains that the number of publications and the citation impact in simulation and modelling have grown faster than the science average – and that was five years ago. Currently, it is hard to imagine that development halting. And while there are undoubtedly challenges to bringing molecular modelling into education, these challenges are hardly unique to modelling, but rather general for all new scientific approaches. And maybe the strengthening of computation in education will result in not just better computational chemists, but better chemists in general.
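To give a feel for the ball-and-spring picture Carmen Domene describes, here is a minimal molecular-mechanics-style energy evaluation: a harmonic (Hooke's law) bond-stretch term plus a Lennard-Jones non-bonded term, applied to a made-up three-atom 'molecule'. The parameters are invented for illustration and are not taken from any real force field.

import numpy as np

# Minimal molecular-mechanics-style energy: harmonic bonds ("springs")
# plus a Lennard-Jones non-bonded term.  All parameters are invented.
k_bond, r0 = 450.0, 1.0        # spring constant and equilibrium bond length
epsilon, sigma = 0.2, 3.0      # Lennard-Jones well depth and size

def bond_energy(r):
    """Hooke's law stretch energy for one bond of length r."""
    return 0.5 * k_bond * (r - r0)**2

def lj_energy(r):
    """Lennard-Jones energy for one non-bonded pair at distance r."""
    sr6 = (sigma / r)**6
    return 4.0 * epsilon * (sr6**2 - sr6)

# A toy "molecule": three atoms in a row, bonded 0-1 and 1-2,
# with a non-bonded interaction between atoms 0 and 2.
xyz = np.array([[0.0, 0.0, 0.0],
                [1.1, 0.0, 0.0],
                [2.1, 0.4, 0.0]])
r01 = np.linalg.norm(xyz[1] - xyz[0])
r12 = np.linalg.norm(xyz[2] - xyz[1])
r02 = np.linalg.norm(xyz[2] - xyz[0])

total = bond_energy(r01) + bond_energy(r12) + lj_energy(r02)
print(f"bond 0-1: {bond_energy(r01):.3f}  bond 1-2: {bond_energy(r12):.3f}  "
      f"non-bonded 0-2: {lj_energy(r02):.3f}  total: {total:.3f}")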
Ab Initio Study Of The Photodissociation Of Transient Diatomic C-bearing Molecules

Presentation #222.03D in the session “Laboratory Astrophysics Division (LAD): Astrochemistry”. Published on Jun 18, 2021.

Photodissociation by vacuum ultraviolet (VUV) photons is one key destruction pathway for small C-bearing molecules in the diffuse interstellar medium (ISM) and photon-dominated regions (PDRs). Wavelength-dependent photodissociation cross sections and atomic product branching ratios are essential to accurately simulate the chemical evolution in those environments. However, for transient molecules, considerable uncertainty still exists about those data in modern astrochemical models because studies in the VUV energy range are quite challenging both experimentally and theoretically. Here we present high-level ab initio studies of the highly excited Rydberg and valence states of two molecules, CS and C2, and their photodissociation from the electronic ground state. Both molecules have been detected in space, and predissociation of the C1Σ+ state of CS and the F1Πu state of C2 is considered to be important for their photodissociation based on previous studies. Potential energy curves of CS and C2 electronic states were calculated at the SA-CASSCF/MRCI+Q level using Dunning basis sets with additional diffuse functions. To represent the Rydberg nature of those highly excited states, the active space consisted of several additional σ orbitals for CS and σg orbitals for C2 beyond the valence orbitals. A total of 49 potential energy curves for CS and 57 states for C2 were calculated, as well as related transition dipole moments, nonadiabatic coupling matrix elements, and spin-orbit couplings. Then photodissociation cross sections from the electronic ground state were calculated by solving the coupled-channel Schrödinger equation. The results of these calculations and the implications for the astrophysical photodissociation of CS and C2 will be discussed.
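The abstract gives no numbers, but the way wavelength-dependent cross sections feed into astrochemical models can be sketched: the photodissociation rate is the integral of the cross section times the photon flux of the radiation field. Everything below — the Gaussian cross-section shape and the flat VUV field — is a placeholder, not data from the study.

import numpy as np

# Sketch: photodissociation rate  k = integral( sigma(lambda) * I(lambda) d lambda )
# with a made-up Gaussian cross section and a flat VUV radiation field.
wavelength = np.linspace(90.0, 180.0, 1000)                # nm, VUV range

sigma = 2e-17 * np.exp(-((wavelength - 130.0) / 5.0)**2)   # cm^2, placeholder peak
flux  = 1e8 * np.ones_like(wavelength)                     # photons cm^-2 s^-1 nm^-1, placeholder

k = np.trapz(sigma * flux, wavelength)                     # s^-1
print(f"photodissociation rate ~ {k:.2e} s^-1")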
The division of a sample of a substance into progressively smaller parts produces no change in either its composition or its chemical properties until parts consisting of single molecules are reached. Further subdivision of the substance leads to still smaller parts that usually differ from the original substance in composition and always differ from it in chemical properties. In this latter stage of fragmentation the chemical bonds that hold the atoms together in the molecule are broken.

Atoms consist of a single nucleus with a positive charge surrounded by a cloud of negatively charged electrons. When atoms approach one another closely, the electron clouds interact with each other and with the nuclei. If this interaction is such that the total energy of the system is lowered, then the atoms bond together to form a molecule. Thus, from a structural point of view, a molecule consists of an aggregation of atoms held together by valence forces. Diatomic molecules contain two atoms that are chemically bonded. If the two atoms are identical, as in, for example, the oxygen molecule (O2), they compose a homonuclear diatomic molecule, while if the atoms are different, as in the carbon monoxide molecule (CO), they make up a heteronuclear diatomic molecule. Molecules containing more than two atoms are termed polyatomic molecules, e.g., carbon dioxide (CO2) and water (H2O). Polymer molecules may contain many thousands of component atoms.

The ratio of the numbers of atoms that can be bonded together to form molecules is fixed; for example, every water molecule contains two atoms of hydrogen and one atom of oxygen. It is this feature that distinguishes chemical compounds from solutions and other mechanical mixtures. Thus hydrogen and oxygen may be present in any arbitrary proportions in mechanical mixtures but when sparked will combine only in definite proportions to form the chemical compound water (H2O). It is possible for the same kinds of atoms to combine in different but definite proportions to form different molecules; for example, two atoms of hydrogen will chemically bond with one atom of oxygen to yield a water molecule, whereas two atoms of hydrogen can chemically bond with two atoms of oxygen to form a molecule of hydrogen peroxide (H2O2). Furthermore, it is possible for atoms to bond together in identical proportions to form different molecules. Such molecules are called isomers and differ only in the arrangement of the atoms within the molecules. For example, ethyl alcohol (CH3CH2OH) and methyl ether (CH3OCH3) both contain one, two, and six atoms of oxygen, carbon, and hydrogen, respectively, but these atoms are bonded in different ways.

Not all substances are made up of distinct molecular units. Sodium chloride (common table salt), for example, consists of sodium ions and chlorine ions arranged in a lattice so that each sodium ion is surrounded by six equidistant chlorine ions and each chlorine ion is surrounded by six equidistant sodium ions. The forces acting between any sodium and any adjacent chlorine ion are equal. Hence, no distinct aggregate identifiable as a molecule of sodium chloride exists. Consequently, in sodium chloride and in all solids of similar type, the concept of the chemical molecule has no significance.
Therefore, the formula for such a compound is given as the simplest ratio of the atoms, called a formula unit—in the case of sodium chloride, NaCl.

Molecules are held together by shared electron pairs, or covalent bonds. Such bonds are directional, meaning that the atoms adopt specific positions relative to one another so as to maximize the bond strengths. As a result, each molecule has a definite, fairly rigid structure, or spatial distribution of its atoms. Structural chemistry is concerned with valence, which determines how atoms combine in definite ratios and how this is related to the bond directions and bond lengths. The properties of molecules correlate with their structures; for example, the water molecule is bent structurally and therefore has a dipole moment, whereas the carbon dioxide molecule is linear and has no dipole moment. The elucidation of the manner in which atoms are reorganized in the course of chemical reactions is important. In some molecules the structure may not be rigid; for example, in ethane (H3CCH3) there is virtually free rotation about the carbon-carbon single bond.

The nuclear positions in a molecule are determined either from microwave vibration-rotation spectra or by neutron diffraction. The electron cloud surrounding the nuclei in a molecule can be studied by X-ray diffraction experiments. Further information can be obtained by electron spin resonance or nuclear magnetic resonance techniques. Advances in electron microscopy have enabled visual images of individual molecules and atoms to be produced. Theoretically the molecular structure is determined by solving the quantum mechanical equation for the motion of the electrons in the field of the nuclei (called the Schrödinger equation). In a molecular structure the bond lengths and bond angles are those for which the molecular energy is the least. The determination of structures by numerical solution of the Schrödinger equation has become a highly developed process entailing use of computers and supercomputers.

The molecular weight of a molecule is the sum of the atomic weights of its component atoms. If a substance has molecular weight M, then M grams of the substance is termed one mole. The number of molecules in one mole is the same for all substances; this number is known as Avogadro’s number (6.022140857 × 10²³). Molecular weights can be determined by mass spectrometry and by techniques based on thermodynamics or kinetic transport phenomena.
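To make the molecular-weight rule above concrete, here is a small Python sketch; the `molecular_weight` helper and the rounded atomic weights are illustrative, not part of any particular library:

```python
# The molecular weight is the sum of the atomic weights of the component atoms.
# Atomic weights below are rounded standard values; the helper is hypothetical.
ATOMIC_WEIGHTS = {"H": 1.008, "C": 12.011, "O": 15.999, "Na": 22.990, "Cl": 35.45}

def molecular_weight(composition):
    """Sum atomic weights over a {element symbol: atom count} composition."""
    return sum(ATOMIC_WEIGHTS[symbol] * count for symbol, count in composition.items())

print(molecular_weight({"H": 2, "O": 1}))   # water, H2O              -> ~18.02
print(molecular_weight({"H": 2, "O": 2}))   # hydrogen peroxide, H2O2 -> ~34.01
```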
The causal criteria for being real

Ethan Siegel addresses a question on whether spacetime is real. But there’s more to the Universe than the objects within it. There’s also the fabric of spacetime, which has its own set of rules that it plays by: General Relativity. The fabric of spacetime is curved by the presence of matter and energy, and curved spacetime itself tells matter and energy how to move through it. But what, exactly, is spacetime, and is it a “real” thing, or just a calculational tool? After going through a quick grand tour of special and general relativity, as well as other physics, he comes to the conclusion that science can’t really provide an answer.

This question about whether something is real or merely a mathematical accounting convenience is one that comes up all the time in science, and has throughout its history. When Copernicus published his theory of the Earth moving around the Sun instead of the other way around, many were willing to accept his mathematics since they made astronomical predictions easier, but insisted it was only a mathematical convenience, not reality. Max Planck, when he first introduced energy quanta into physics, only considered them a mathematical tool.

But for spacetime, I think Siegel actually answers the question in the quote above by paraphrasing John Wheeler (a physicist known for coming up with quick snappy terms and phrases): “Spacetime tells matter how to move; matter tells spacetime how to curve.” In other words, spacetime, whatever it is, has causal effects on things we can measure. It can affect and be affected by matter and energy. That, to me, is enough to consider it real in some sense.

That doesn’t mean it’s necessarily something fundamental. A number of physicists think it might be emergent from other things, such as time perhaps emerging from entropy, or space emerging from quantum entanglement. But just because something is emergent doesn’t mean it doesn’t exist. If it did mean that, then nothing would exist above quantum fields, and maybe not even them.

In this view, all that’s necessary for us to productively consider something real is that it participate in the causal chain that eventually affects what we measure. This is why, in quantum physics, I generally consider the wavefunction to be modeling something real. Something causes the measured interference effects, and the various formalisms for modeling the wave dynamics accurately predict those effects. (Even if they only provide probabilities for particle positions.) That doesn’t mean the wavefunction is necessarily the complete story, or that it’s real in every respect, only that the overall phenomenon is something that participates in the causal chain.

But this is also why I’m not a Platonic realist, someone who believes that abstract objects exist independently of the mind. In the Platonic view, abstract concepts are supposed to exist outside of time and space, be unchanging, and causally inert. Platonic objects, in and of themselves, do not participate in the causal chain. If we consider them to not exist, there’s nothing about them that forces us to reconsider that judgment. Any actual causal power they might have only seems to happen through our mental models of them and the relations in the world that encourage us to form those models.

So, if something has causal effects, it is, at least in some manner, real. If it has no detectable effects, or at least theoretical ones, we can’t say conclusively that it isn’t real, but it may effectively not be real for us.
What do you think of the causal criteria? Does it miss anything real? Or does it include anything we commonly would say isn’t real? 76 thoughts on “The causal criteria for being real 1. I think of reality as being a pairwise relationship. A exists to B if A can influence B. I guess this is the same as your causal chain. It does seem to imply that there could be cases when A exists to B but B does not exist to A, or A exists to B and B exists to C but A does not exist to C. Unless there is a god-like thing that influences everything, not everything is real to everything else, so there is no universal reality. However this does raise further puzzles because we are generally thinking A is something like an object or a person, and so complex and distributed across space and time; so what it means for A to exist to B is actually a bit more tricky than it might seem at first sight. Even whether A continues to exist as the same thing comes into question. Liked by 2 people 1. I almost had a digression in the post about galaxies at the edge of the observable universe. The versions we’re seeing today can affect us by their light reaching us from over 13 billion years ago. However, those galaxies today are now beyond our cosmological horizon. The expansion of space is now moving them away from us faster than the speed of light. So for all intents and purposes, we’re now causally disconnected from them. Do they still exist for us? Do we exist for them? What about for galaxies far beyond the cosmological horizon where we’ll never have any interaction with them at all? What really makes my head hurt is that for those distant galaxies, relative to us, their time should now be going backward? What does that even mean? Relativity avoids paradoxes here because it’s impossible for us to ever interact with them. The same is true for someone falling into a black hole. For us, they slow down as they approach the event horizon, then freezes and become increasing redshifted. But for them, they cross the horizon without incident and continue moving toward the singularity. These are contradictory sequences, but again, physicists say it’s not a problem since us and the person who fell in can never compare notes. The urge to reconcile it into one reality is, apparently, misguided! 1. “What really makes my head hurt is that for those distant galaxies, relative to us, their time should now be going backward?” Wait, what? Why? (FWIW, the redshift we see of objects falling into a BH is just due to how gravity affects the light waves coming from that object. The “freeze” is just the wavelength dropping to zero. It’s not in any way a “real” effect felt by the object, merely a matter of what distant observers see. There is no paradox there.) 1. That’s my understanding of the relationship of something traveling faster than light relative to us. Is that wrong? If not, what is the time relationship between those galaxies and us? (My feeble attempts at the math just make the calculator spit out “ERROR”.) The black hole thing turns out to be something conjectural, black hole complementarity. I took Susskind’s description of it to be a general description of what general relativity said, but it turns out to be a speculative “solution” to the information paradox. My bad. 1. Oh, I see what you mean. SR isn’t defined for velocity due to the expansion of space. SR forbids FTL (in several ways), which is why you’re getting an error. 
With a velocity faster than c, you end up trying to take the square root of a negative number, so things become imaginary. Liked by 2 people 2. I think this is mostly right. But I think that if you say that A->B->C, where -> means ‘influences’, then you could think of A as existing for C also. Causal influence is presumably transitive. I think what you might be getting at though is that B might exist at different times, and maybe B’s influence on C comes from an earlier time than A’s influence on B. 2. Causality itself is emergent, so if you want to use causality as a criterion of reality, you’d damn well better not deny that emergent things can be real. But it had better not be the only criterion of reality either. (The foregoing depends on reading “causality” as inherently asymmetric: if A causes B, then B doesn’t cause A. If you waive the asymmetry requirement, that allows a different description of the situation.) As for how spacetime affects matter: I think you’re right, but it’s tricky (warning: philosophy of science weeds! bonus: philosophy of science learning!) I have no problem with mathematical Platonism, as long as the Platonist doesn’t try to go all Tegmark about it and insist that every mathematical structure is “physically real.” And as long as they don’t go talking about a “realm” where abstract objects “dwell”. I’m rather fond of this argument: There are numbers between 3 and 7. Therefore, there are numbers. Liked by 2 people 1. Interesting video. I especially like the point that something that has great leverage over the future is a cause while something that has great leverage over the past is a record. I generally see information as causation, which fits with this view. What other criteria for being real would you add? That there are numbers between 3 and 7 doesn’t require mathematical Platonism to be true. It can be true in the sense that there are relations in the world involving between 3 and 7 entities, and this pattern occurs often enough that our brains model it. Numbers exist, but they don’t need to have an existence independent of physics. That said, when Platonists talk about realms where objects dwell, I don’t think they literally mean something like another universe with the objects floating around there. They’re trying to express a concept that is hard to put into words. Even Tegmark, I think, is trying to get across an idea that’s very difficult to put into words. The “physical” part I take to just mean “really exists”. Liked by 1 person 1. My criteria for being real would center on being explanatory. Causal explanations are just one form of explanation. Stanford Encyclopedia of Philosophy says –which seems like a pretty modest ask, and doesn’t mention independence from physics. But then the author later says “platonism entails that reality extends far beyond the physical world.” So go figure. I’m not sure what mathematical dependence on the physical would look like. If the universe had only one electron and no other particles, would that invalidate 2+2=4? It would make the equation a moot point, but that seems different. Liked by 1 person 1. What would you see as an example of a non-causal explanation? Maybe mathematical relations? But what brings those relations into being? And what causes us to think about them? On the SEP article, you have to go through the entire thing. From section 1: Platonism is the view that there exist abstract (that is, non-spatial, non-temporal) objects (see the entry on abstract objects).
Because abstract objects are wholly non-spatiotemporal, it follows that they are also entirely non-physical (they do not exist in the physical world and are not made of physical stuff) and non-mental (they are not minds or ideas in minds; they are not disembodied souls, or Gods, or anything else along these lines). In addition, they are unchanging and entirely causally inert — that is, they cannot be involved in cause-and-effect relationships with other objects. So if we view abstract objects as mental models constructed based on observed relations in the world, that’s not Platonic. The Platonic version are supposed to exist in addition to the mental models and physical relations. On the electron, if we think in terms of its wavefunction, maybe 2+2=4 would continue to have meaning, since the electron would likely spread out in an ever larger bubble. Assuming no dark energy, the electron would eventually be all over the universe. Of course, there’d be no one around to think about arithmetic. 1. My first example of non-causal explanation would be explaining causality itself. That is, how it emerges from bidirectional physical laws plus the entropy gradient. Other emergence relationships, such as part-whole relationships, can also be explanatory. I don’t think anything brings mathematical relationships into being. Even if you deny abstract things exist, there has to be some physical feature not brought into being, even if it is only the whole sequence of events considered as a unit. I wouldn’t try to reduce mathematical objects to linguistic or conceptual ones, at least not in an asymmetric way. The reason why a word or concept refers to one thing and not another, depends on information theoretic properties of the word to world relations. 2. So what should we call the symmetrical relationship between entities that exist before entropy makes it asymmetrical? Carroll just says “patterns” in the video, which I can understand but these are patterns which have a time sequenced relationship to each other that two other adjacent patterns often won’t have. Maybe that’s what I should be using as my criteria. Or maybe we can just say it’s causal asymmetry which is emergent? Liked by 1 person 3. It seems like “patterns” underdetermines what we’re talking about, since there’s nothing that prevents a pattern existing consisting of elements that don’t have that relationship with each other. And the laws of nature are the rules that govern the time sequenced relationships, but not the relationships themselves. We could say “time sequenced relationships” but that’s a mouthful. In the absence of something better, I think “cause” still makes sense. We just have to understand that, without entropy, it’s fundamentally symmetrical. Liked by 1 person 4. Sure, that’s a reasonable terminological choice. You just have to be careful which audiences you use it with, some will require extra explanation. In our universe, laws of nature govern time sequenced relationships (primarily?) In a logically possible universe – and perhaps in ours, if it turns out that time itself along with space is emergent – there may be other laws. Liked by 1 person 2. I think the problem is that the word “real” is abused. Most people equate it with the physical. A tree in the park is real. It is very solid as I find if I inadvertently bang into it. A tree in my mind is not real in the same sense, although in the context of my mind it is real. 
The mathematical wouldn’t usually qualify for reality except to the extent that any mathematical truth probably has some representation in physical brain structure or process. However, approaching the mathematical truth at that level entails the loss of the meaning of the truth itself. On the other hand, if you take the view that the physical is really mathematical (it’s mathematical all the way down), then the mathematical would be the only thing that is real. But where does that leave Superman? We can talk about him as if he were real. We can read stories in comic books and see movies. In the context of the world of fiction, Superman could be real. 3. Superman is real James, and so is the spaghetti monster; so are demons, gods, evil spirits, GR and spacetime because real-ness is a context. So at the end of the day, we do not have to endlessly debate what is real and what is not, all we have to do is identify the context in which some “thing” is real and then a consensus can be reached. Liked by 1 person 2. Hi Paul, I’m a Tegmarkian, but I don’t always agree with how he describes the position. The way you describe it seems to suggest that at least part of your problem with it is how it is described. I think Tegmark’s ideas follow pretty much inevitably as long as you are willing to accept some sort of platonism, functionalism or computationalism about mind, and naturalism (by which I mean the idea that everything that happens does so according to natural laws describable mathematically). The way I would describe it, it is not that all mathematical structures are physically real. It is that the concept of objective, absolute physical reality is incoherent — physical reality only makes sense when construed as observer-relative. So, we shouldn’t think of Tegmark as claiming that somehow all mathematical structures are made physical, as if by magic. Rather the idea is that what we perceive as the physical reality of our universe is entirely explained by the fact that we are embedded in it. It is physical real only *to us*. In fact it’s just an abstract mathematical structure like any other. Mathematical structures without embedded observers are not physically real to anyone. I share your dislike of talk of abstract objects “dwelling” in a “realm”. 1. People often state their own positions awkwardly, or they make mistakes which are separate from the main point and shouldn’t be used to impugn that point. So by all means, rephrase Tegmark. Well, I don’t accept functionalism about mind in general. I do accept it about “do this creature’s vocalizations *refer* symbolically to the world?” and “is this creature conscious?” But I don’t accept it about “what are the qualities of its consciousness?” On physical reality as observer-relative, it depends what you mean. Here’s what I do accept: when people say “physical”, that word gets its meaning from interactions between the language community and the rest of the world. So in *some* sense, physicality is observers-relative. Observers with an s. 1. Hi Paul, I do disagree about the qualities of consciousness, but that’s another issue. It would indeed be reason to reject Tegmark if I conceded this controversial point. Just wanted to address the idea that it seems daft to think that mathematical structures somehow are made physically real as if by magic. Hopefully we’re more or less on the same page on this now. 3. You’re speaking here, I take it, about physically real? 
As a contrast, in some sense, unicorns are real, because when I use that word, you know exactly what I mean. Likewise Sherlock Holmes or any known literary character. Physical reality beyond our horizon, those distant galaxies that can’t affect us, can affect matter in their vicinity, so they would satisfy your causal criteria. I can’t think of any exceptions at first blush, but in some sense the definition is circular. That which is physically real can have causal effects. I suspect a true definition of “real” will remain a philosophical and definition issue, but having causal effects is, at least, a property of what is (physically) real. Now what about someone reads Sherlock Holmes when they’re young and decides to become a detective as an adult because of that. Is that a case of something not physically real having a causal effect? Liked by 2 people 1. I’m not sure if I’m following the circular point. It doesn’t necessarily seem circular to me. But maybe I’m missing something? I do agree that any definition of “real” is inevitably philosophical. (There are some scientists who take the attitude that only what they can directly measure is real, although that seems like a tough stance to hold consistently.) In the case of unicorns and Sherlock Holmes, it seems like they exist as mental models in our brains, the result of sensory impressions from numerous pictures, films, and books. Of course, a unicorn isn’t a real animal, and Sherlock Holmes was never a real person. But as concepts they are definitely real. Similar to Platonic concepts (actually identical to them), it’s often easier for us to just think of them as real in a non-physical sense, but I think that’s because the models don’t map to physical reality in the manner similar models typically do. So our model of a horse maps to reality in a certain way (it’s predictive of potential sensory impressions), but add a horn on its head and that mapping is no longer valid. But we still have the model of the horse with the horn. So it makes sense that a mental model can inspire someone to become a detective, particularly a model of a detective. Or we can just say Sherlock Holmes inspired them to become a detective, but we know that refers to an idealized model of a detective with phenomenal powers of logical deduction. 1. What I’m getting at is summed up in the phrase, “Ideas have the power to move mountains.” Even newly formed ideas, so it’s not the shared history of unicorns and Holmes, but that mental content can have causal power. Maybe “circular” isn’t the right word… it just feels like ‘causal power’ is a necessary property of anything real. Like it’s just another way of saying the same thing, although maybe ‘causal power’ is a larger category since it includes mental content? Or whatever. To be honest, I don’t seem to have much capacity for abstract thought these days. 1. I’m definitely on board with the idea that mental content has causal power. It is caused by incoming sensory information and innate impulses and has both short term and long term motor effects on the environment. So it’s part of the causal chain. So definitely ideas have a lot of power. Maybe instead of “circular” you’re thinking that it’s just trivially true? Could be. Although the fact that people debate whether things like spacetime, wavefunctions, or Platonic concepts are real seems to put pressure on that idea. 1. “So definitely ideas have a lot of power.’ Are you talking about the joule, as in 1 Watt = 1 Joule per second (1W = 1 J/s)? 
And if so, would some ideas be quantified as having more joules than others? Liked by 1 person 2. Hmmm. Well, all causal power is ultimately the ability to exert changes on things, directly or indirectly, physical changes if we’re operating under physicalism. So I suppose we can imagine in principle trying to measure it that way. Not sure how we’d go about measuring the amount of wattage produced by democracy, even in its effects on a single individual’s life. 3. Okay, yeah, “trivially true” might be a better phrase. The notion of ‘causal power’ doesn’t seem, at least to me, to have much utility as a razor to cut between real and unreal if it includes unicorns and Holmes and mental content in general as all real. What would it define as unreal? FWIW, spacetime, because we’re clearly embedded within it and move through it, has some sort of physical reality even if we don’t fully understand how it works. All we can say about wave-functions for sure is that they describe an aspect of reality and allow predictions. (Some physics classes start by comparing the Schrödinger equation to Newton’s F=ma, and I think that’s a good comparison.) As I’ve mentioned in the past, I’ve come to resolve the Platonic question as a consequence of existence in a lawful physical reality. The canonical example, a circle or sphere, the concepts of which seem to exist whether we discover them or not, ends up as just an observation of how physical 3D space works. Given space, there is a notion of location and distance, then of equal distance from some location, and thus circles and spheres. All we can really say about math is that it describes physical reality (because it reflects physical reality). Liked by 1 person 4. In all seriousness Wyrd, I don’t see how conversations like this one can be productive if there is no consensus on a fundamental definition of power. Is power an objective state of the world, something that we discover; or is power a subjective state of mind, something that we make up like GR, spacetime and the joule? To me, it seems like power is the impetus and the driving force responsible for causation in the natural world, including the dynamics responsible for the motion and form of imaginative thought, and yet nobody wants to investigate it let alone discuss it. Instead, everybody seems absolutely content to play in the sandbox ignoring the elephant in the room. Personally, you and others might consider the notion of power as too abstract, and that’s cool. In many ways, the notion of power reminds me of Michael Mark’s latest essay. I think most people appreciate the notion of power from the artist’s aesthetic perspective, a perspective that is clearly repulsed by the pure rationalist approach. I’m just thinking out loud here, so nobody should feel obligated to respond. 5. Well, as I said last time, just substitute “ability” or “capability” for “power” and the confusion goes away. As to the actual physics notion, “power” is a derived quality, not a fundamental one. As you said above, it is 1 joule per second and has units of kilogram-meters-squared-per-second-cubed. (Electricians know it as the more familiar volts×amps.) I don’t see it as too abstract; I see it as too derived to be a fundamental social or physical property. 6. Clearly Mike; when anyone asks how much “power” the POTUS has, no one is asking how many joules he possesses.
For all practical purposes, “power” is a mystery, some “thing” that can only be appreciated for its aesthetic beauty and not understood from a rational reductionist perspective. Is that how you perceive it? 7. Can’t resist channeling the wonderful Emily Litella: “What’s all this fuss I hear about how many jewels the POTUS has? Why do we care? This isn’t a monarchy, we don’t have Crown Jewels. Why I doubt the man has any jewels at all. Maybe his wife does, but jewelry just doesn’t…” “What’s that?” “Never mind.” 8. Lee, I wouldn’t describe power that way. Certainly political power is something most people don’t have a good understanding of. But it’s been studied. Richard Neustadt’s “Presidential Power” is worth reading for anyone who wants to understand the POTUS version. In the end, all social power (including business or political power) involves influencing people to do what you want them to do. Sometimes it’s easy, with a legal order for people duty bound to obey. More often it involves persuading people, either directly or indirectly. In truth, it’s always about persuasion. It’s just that when you have line authority, you have extra tools. But fail to understand that people are not mere extensions of your will, that they each have their own values and agendas, and you will eventually flounder. In contrast, power or forces in physics are much simpler, even though social power is ultimately a special case of it. 9. Wyrd, I keep forgetting that your own confirmation bias is rooted in some form of Spinozaism; so my use of the word power would not have the same meaning to you as it does to me. My own confirmation bias is similar to that of Kant; with some essential a priori intuitions added which Kant’s ontology did not contain. Spinozaism is a good metaphysical model, one that I agree with to a large degree with one important exception. Spinozaism posits the notion of natural and physical laws as irreducible and fundamental, whereas my metaphysics unequivocally rejects the entire notion of law altogether. So for now, we will have to agree to disagree; and I will try to keep your own metaphysical position in mind whenever I correspond with you. 10. Wyrd, I suppose a good place to start would be to ask if you agree with Mike’s manifesto: “that we shouldn’t look to reality for meaning. We have to resolve to make our own meaning, and figure out how to bend reality to it.” I do not disagree that his manifesto is an explicit and succinct definition of subjective experience but personally, I unequivocally disagree with that position. What say you? 11. Regrettably, Lee, this isn’t a discussion I have much strength for. My New Year’s resolution was to swear off “fantasy bullshit” (FBS). Not as innately wrong — I love me some FBS sometimes — but as having become so very problematic in our culture. I’ve been terrified ever since this culture started blithely talking about, and normalizing, “post-factual,” and as of last November my terror seriously ramped up and then blew up on January 6th (and again last Saturday). Our culture has sunk into too many forms of fantasy. I’ve been reading, with growing horror, Aldous Huxley’s essays in Brave New World Revisited (1959) which he penned about 30 years after he wrote Brave New World (a profoundly disturbing novel in the current social climate). Huxley saw it back in 1959 (if not 30 years before).
I quote: “A society, most of whose members spend a great part of their time, not on the spot, not here and now and in the calculable future, but somewhere else, in the irrelevant other worlds of sport and soap opera, of mythology and metaphysical fantasy, will find it hard to resist the encroachments of those who would manipulate and control it.” We’ve seen that play out the last decades and culminate in the last months. I feel as someone who has been badly beaten, my mind has been harmed by all this. To try to heal that damage, I’m sticking firmly to the physically real. So it’s hard for me to answer your question. For one thing, what is “meaning”? Is it that New Age thing people are always looking for? “Meaning” can only come from within (or maybe from God if you swing that way). Secondly, I’ve never known what to make of the idea of “bending reality” — does that mean magic or just building a thing with wheels? I’m a hard-core realist, both emotionally and philosophically. I’m just a tiny, tiny piece of a very large physical reality. I define sanity as the degree to which my internal mental model matches the external world I perceive, and life is the process of building and refining that model in an attempt to remove dissonance between it and physical experience. 12. Like you Wyrd, I am dreadfully disturbed by the prevailing trend taking place in our culture and I appreciate you being open and candid about your feelings as well. We do not have to engage in any serious discussions here. I read your own blog from time to time, and if there arises an opportunity for a productive discourse maybe we could collaborate on common ideas and goals such as the origin of meaning and where meaning actually resides; if that is acceptable to you. Your definition of sanity is a very good one as well. Take care my friend 4. I’ve had to determine my approach to this topic for my project (understanding consciousness), so I’ll just put it here and see what you think. I’ve decided it is useful to be very clear what the terms “exist” and “real” mean. These definitions will certainly conflict with someone else’s. All I can do is explain how I use the terms. So, I say something exists if it interacts with other stuff. Interaction is a relation, and so stuff that cannot interact with you (those far flung galaxies) does not exist for you. Patterns are real. (See Dennett.). So, abstractions are real. Numbers are real. Some patterns are discernible in existing stuff. All existing stuff exhibit patterns: specifically, patterns of behavior. A physical thing (system) exists if and only if it exhibits a pattern of behavior, and this pattern determines what a thing “is”. I should point out here that any pattern of behavior is multiply realizable, so even if something “exists” you can’t know what it fundamentally “is”. But you can assign a name, like “electron”, to anything that exhibits that pattern. Re causation: An interaction is best described in the format input->[mech]->output. You can then say the mech “causes” output when presented with input. This pattern (input->output) could be described as a causal power. The mech exhibiting this pattern of interaction has the “causal power”. So a pattern does not have causal power, but a pattern may be the particular pattern associated with a mech and describe the causal power of that mech. Again, more than one mech can exhibit the same pattern, and so have the same causal power. 
So to rewrite your penultimate paragraph, I would say if something has causal effects, it, at least in some manner, exists. If it has no detectable effects, or at least theoretical ones, we can’t say conclusively that it doesn’t exist, but it may effectively not exist for us. [taking questions] Liked by 1 person 1. That all sounds about right to me. But I’m wondering if I missed something, because it seems very similar to what I said. You did add stuff about multi-realizability, which I don’t have any problems with. Or maybe I should ask, what would you say distinguishes your view from mine? (Assuming something does.) 1. I recognize that we pretty much have the same understanding, but I am, and want you to be, more precise with the term “real”. For example, you said “this is also why I’m not a Platonic realist, someone who believes that abstract objects exist independently of the mind.” You say you are not a Platonic realist, but I say Platonic forms are real things independent of the mind. They just don’t exist, except some of them are patterns detectable in things that do exist.. I say unicorns are real, they just don’t exist. I say philosophical zombies are real, but they cannot exist (as their description requires contradiction). I guess what bugs me is when people talk about “causal power”, and ask things like “does information have causal power?”. “Causal power” seems like it’s intuitive, but is ill-defined and causes misconceptions. Liked by 1 person 1. Thanks for the clarification. I’m trying to see how we can make a distinction between being real and existing, but having a hard time. To me, those terms seem synonymous. (My working title for the post was actually “The causal criteria for existence”. I changed it to “being real” right before hitting Publish.) I do think we can make a distinction between ideas that most definitely exist which are about non-existent things like unicorns. Maybe that’s the sense in which you mean unicorns are real? If so, that seems strange, because it seems to imply concepts like the luminiferous aether or celestial crystalline spheres are real even though they don’t exist. I’m struggling to see how that use of language can be productive. Maybe if we say these concepts are abstractly real but not physically real? It’s all the same ontology, but different ways of talking about it. Liked by 1 person 1. I am making a distinction between being real/existence by fiat. I’m saying, instead of using both words for the same thing, use one word for abstractly real (real) and the other for physically real (exists). This does bring up the question of what you mean when you say an idea of non-existent beings exists. But I translate that as saying an existent system in your brain recognizes the real pattern of a non-existent thing. Make sense? 2. I follow what you’re saying. But I think it would be clearer to use those words with their common meanings and just use qualifiers to make what you’re saying explicit. So just preface with “abstractly” or “physically” for “real” or “exists”. If you use “real” in that fashion, it seems like you are obligated to constantly remind your audience of the special way in which you’re using it. 3. >”I do think we can make a distinction between ideas that most definitely exist which are about non-existent things like unicorns.” I would argue differently. “Ideas” are the product of some specific species (humans) brain activities. 
They exist within that specific community, and, by extension, on any media they are recorded, if they could be deciphered (by other species?). Outside the mentioned group, “ideas” do not exist. If such species got into extinction, then their “ideas” got to extinction too. It makes sense to broaden this example and make a distinction between “reality” within and outside this specific group. Liked by 1 person 4. It might depend on how we define an “idea”. A lot of mammalian and avian species, particularly social ones, can learn from each other. If a monkey figures out a new way to break open a nut, other monkeys will observe and copy. That troop will then have a cultural practice of how they break open the nuts that other troop lacks. In other words, culture, in the sense of shared concepts, isn’t unique to humans, at least unless we specifically define it to require symbolic communication. 5. FYI In the nineteenth century and even the early twentieth, many scientists considered atoms a useful fiction, indicating that they weren’t real. This is why Einstein won his Nobel Prize for his work on Brownian motion, which was a physical manifestation of atoms/molecules that was definitive. Liked by 1 person 1. Thanks. Didn’t know that. Pretty interesting. It’s amazing how often these useful fictions become real. I thought Einstein’s Nobel was for explaining the photoelectric effect, although I suppose it’s just as tied to atomism as well. 6. Hi Mike, Wyrd, I definitely see the circularity, and it points to the fact that the concept of objective physical reality is empty and meaningless. If we make up an abstract toy universe with its own laws (as physicists will do with constructs such as Anti-de Sitter Space) then things will “play out” in that structure in something analogous to time and causality, albeit timelessly and causelessly from our perspective. But if you imagine a perspective within that universe, objects within that universe will appear to be real because they appear to engage in causality, whereas our universe will appear to be unreal and abstract because it does not. The circularity is that must presuppose that our perspective is objectively privileged, that what we observe is physically real to decide if it is in fact engaging in causality. If you make that assumption, then you foreclose the possibility of there being universes just as real as ours which are entirely causally disconnected. Whether or not such universes might exist, it seems unreasonable to rule them out a priori simply by defining them out of existence. Liked by 1 person 1. Hi DM, You’re getting at why I hedged a bit toward the end, saying something might effectively not be real for us. We don’t even have to bring in other universes, just our own universe far beyond our cosmological horizon. (Which Tegmark actually considers another universe, so I guess I’m converging.) Is a galaxy a trillion light years away real for us? I suppose if cosmic inflation happened we could say we might still feel causal effects from the energy patterns that eventually became those galaxies. But what about galaxies 10^100 light years away? (Assuming such galaxies exist.) And somewhat tying in with the previous post, in a simulation, the simulated objects have simulated causal effects and are effectively real for any simulated entities within the simulation. But for those of us outside the simulation, they’re not. 1. Good question. 
A naive answer might be something like having direct conscious interaction with a phenomena to determine things about it, particularly quantitative properties. But of course, in modern science that rarely happens. No one has ever seen an electron. Instead we have direct perception of a stand in, like a readout on a measuring device, which we use to infer things about the phenomena. We do this because we trust our theory about how the device works, but ultimately it’s an inference made using theory. However, before we allow ourselves to get too upset about this, it’s worth noting that direct conscious interaction is itself an inference based on preconscious sensory information coming into the brain. Those inference are themselves heavily dependent on our understanding, our model or theory of the world. In the end, we make predictions, note the errors and adjust, and make new predictions. 7. Is there a sense in which our everyday concepts of time, space and causality are secondary, and behind the scenes it is quantum entanglement that is more fundamental, so that what is (potentially) real to us is everything with which we are entangled? 1. It’s a definite possibility, particularly if you subscribe to the idea of a universal wave function, that is, a quantum universe with no Heisenberg cut. Of course, that view implies many worlds, so most people reject it out of hand. 8. Your previous post was on a simulated universe and this one is on what is real. Is a simulation real? I mean real in the sense that it is more than real as a simulation. Would simulated consciousness be real also in the sense of more than real as a simulation? Liked by 1 person 1. I think the contents of the simulation would be real for any simulated entities within the simulation. Simulated wetness for a simulated being would be real wetness. Simulated pain would be real pain. For us on the outside, they would be real in the sense of being a real simulation. Of course, you could arrange for the simulated beings to have access to physical robot bodies in the outer world, which would graduate them from just simulation status to something much more real. Liked by 1 person 1. I don’t know. Would it? I suppose if you reject that a simulation of consciousness can be conscious, it might be. Essentially it would be a philosophical zombie. I personally don’t think p-zombies exist, so for me giving it a physical body makes it as physically real as we are. You could, of course, then argue that it was always physically real, since it was always implemented by some kind of physics. Liked by 1 person 1. Would the simulated consciousness using a simulated body execute the exact steps and processes that the simulated consciousness executes with a real body? If the steps/processes are identical, why would one be more real than the other? They would be indistinguishable. Liked by 1 person 2. Hi James, It wouldn’t be realer from an objective point of view. But I’ve been arguing that what is physically real is a matter of perspective. Putting a simulated consciousness in a robot body may make it physically real to us for this reason. If it’s just running in a simulated world and does not interact with the real world in any way (e.g. the program takes no inputs), then from our perspective it isn’t physically real. 
I actually think we have no moral responsibility for beings in such a simulation (because I’m a Tegmarkian platonist and I think the worlds we are simulating, no matter how horrific, must all exist out there in the multiverse independently of whether or not we want to explore them with simulations — our simulating them creates no additional suffering). But as soon as you start interacting with a simulated consciousness, then you are a part of the mathematical world it inhabits. You are physically real to it and it is to you. You are in the same relationship to it as you are to any other physical being, and so I think you do have moral responsibility for it. 3. So uploaded minds to a computer wouldn’t be real? But if the uploaded mind somehow instantiates itself in a physical body then suddenly it becomes real. Because perspective. But the only perspective would be our own perspective or the perspective of a non-simulated mind. So in the end it is only our mind that makes it real. I don’t know. 9. Hi Mike, I meant to give an overall comment rather than just responding to comments of others but didn’t have time. I think you raise some very interesting issues and so I enjoyed and appreciated the article very much. But I think it is a mistake to get too hung up on what is and isn’t real — a mistake many philosophers have been making for too long in my view. What is real and what is not depends on what you mean by “real”, and different definitions are appropriate in different contexts. As long as you are clear about what you mean there is no problem. I don’t think there is a fact of the matter on which definition is correct and so what is “really real”. The question of whether some concept refers to something real or is just a calculational tool strikes me as entirely meaningless. I genuinely cannot make sense of it. The closest I can come is to take the examples of calculational tools from the past that have since been discarded, such as caloric theory or geocentrism/epicycles. From my point of view, to the extent that these theories disagree with experiment (as caloric theory does when it claims that caloric is a gas), they are not physically real and that’s all there is to it. If on the other hand they can be patched and amended (e.g. epicycles within epicycles) and so made to agree with experiment to the point where their predictions cannot be falsified, then they are as real as any other model but fail to be as useful or elegant as simpler theories. So in my view, it is possible for both geocentrism and heliocentrism to be true (i.e. there is no fact of the matter on what is actually at the centre), with the latter only being far more elegant and more useful. A more current debate is which of the Newtonian framework or the principle of least action is the more fundamental description of physical law: In the Newtonian framework, we have objects at a certain point in time and space, and laws that describe how they evolve over time. In the framework of the principle of least action, the rule is that some quantity (the “action”) is minimised or maximised, and what happens will be whatever achieves this. The latter ends up being more apparently teleological and less intuitive to humans (but more intuitive to the Aliens in Arrival or Ted Chiang’s original short story “Story of your Life”), despite being very useful mathematically in some circumstances. But the two frameworks are mathematically equivalent, in that you can derive one from the other. So the question arises, which is prior?
Which is the “real” one and which is the derived one? In my mind, there is no answer to this question. Each framework is equally real. Causal criteria for what is physically real to us makes sense to me, with some caveats. I think it’s better to require the relationship to be bidirectional. Our far future descendents can exhibit no causal influence on us, but we should consider them to be real in some sense or else we have no more responsibility for them than we would for fictional characters, and that doesn’t seem right. The fact that we can causally influence them makes them real, I think. Adopting this rule has some other benefits. We can trace the chain of causality backwards to the Big Bang and forwards again to parts of the universe that are no longer causally connected to us. So those parts of the universe are also physically real. That also seems right. But I think you should be aware that adopting your criteria might (like me) commit you to the physical reality of epicycles and geocentrism, assuming that epicycles can be used to make correct predictions. If so, then they can be said to have as much of a causal influence as spacetime or the wavefunction or whatever. If you want to exclude them, you may need to introduce an additional criterion that no simpler or less ad hoc model can produce the same predictions. But then we’re back where we started. Perhaps spacetime does not exist because there is a simpler model that can produce the same predictions, and it becomes an open question whether spacetime is real or just a calculational tool. Liked by 1 person 1. Hi DM, I know where you’re coming from. It’s basically why, until recently, I was comfortable calling myself an instrumentalist, although I recently covered why I’ve become uneasy with that label. (Too much baggage, with people projecting positions on me I don’t hold.) But I think it’s important to be able to put on the instrumentalist hat at times and assess theories in that light. I would note that I don’t consider causality to be the only criteria for reality. Aside from making more accurate predictions, parsimony also comes into it. That’s how we can dismiss geocentrism today. It’s just a much more complex theory given all the data now available. It’s always possible to add variables to any theory to make it consistent with observations, but at the cost of increasing complexity. If there is a simpler theory, one with fewer assumptions, then it has a better chance of remaining reliable. I do think asking whether something is real or just a mathematical convenience is productive though. We shouldn’t expect a mathematical tool to necessarily reconcile with other theories, or worry if it outright contradicts them. It’s just a tool, so no one should care. For example, the weak (epistemic) Copenhagen interpretation is basically an anti-real theory, so any concerns about its contradictions with cosmology would be misplaced. (Stronger more ontological versions of Copenhagen are a different matter.) But if it does reconcile with other well established theories, then that seems to increase the chance of it being real. Of course, that’s always with the possibility that a cluster of theories that reconcile with each other may constitute a paradigm that eventually ends up being overturned. In the end, we have theories that make more or less accurate predictions. If it’s the simplest theory, it may be reliable. And if its components are compatible with other reliable theories, they may be “real”. 
At least until a better theory comes along. It seems to be the best we can do. 10. The problem for me with concepts of “reality” is that the great majority of what scientists talk about can not be “experienced”. At least not by mere mortals such as myself without any skill in mathematics beyond the ten times table. I can readily experience (most) human qualia and emotions. I can understand the colour red in two ways: firstly in the scientific explanation of light waves of a certain frequency but secondly (and very importantly, to me at least) in my qualitative experience of colour. I can not experience the warping of spacetime. Nor the minute particles of matter of which (it is said) everything is made. As a very pedestrian mortal I have five very limited senses and can only really understand and accept what I can personally “feel” and “experience”. Perhaps these shortcomings are limited to just myself. Perhaps once Elon Musk has implanted the necessary extra processing chips in my brain I will be able to feel such matters as easily as I can feel and understand light or heat. Bring it on, Elon. Liked by 1 person 1. I’m tempted to do an appeal to the stone and point out anytime you’ve fallen on to the ground you’ve experienced the warping of spacetime, but of course the idea that that is the warping of spacetime is what is so far outside of our experience. And we only experience photons and electrons en masse, such that the idea they were composed of units was controversial for a long time. The thing about Musk implanting extra processing chips is, if it becomes common, we might someday wonder what it was ever like to only experience reality with natural senses. Of course, that could also be when we start experiencing simulated reality. (Assuming we’re not already experiencing it.) 11. Space, time, space-time, particle, laws of physics – all of those are physical terms. Our understanding and our discussions are based on variations of that physical view of the Universe. I think there is a new sheriff in town, so to speak. Please look at this article (New machine learning theory raises questions about nature of science). What do we see here? There is a new way to describe the Universe and its contents without the use of physical terms and laws. That has implications for how we define “real”, “existence”, the underlying laws of the world, and so forth. Liked by 2 people 1. I’ll have to check out that article, but making predictions with a black box seems like it will have limited utility. One of the benefits of actually having a theory, a model, is that we can then apply it for various purposes, like technology. Having an oracle just make predictions won’t do that. Although it might provide a useful test for possible theories. Now, if the black box can produce actual theories, then we might be on to something. But then we basically have an AI scientist. Liked by 1 person 2. I think I linked to the paper you reference on another thread. I also had a blog post on this. Quote from paper: “We discuss a possibility that the entire universe on its most fundamental level is a neural network…This shows that the learning dynamics of a neural network can indeed exhibit approximate behaviors described by both quantum mechanics and general relativity. We also discuss a possibility that the two descriptions are holographic duals of each other”.
Unravelling the Quantum Maze

DOI: 10.4236/jmp.2018.98106

The restoration of philosophical realism as the basis of quantum mechanics is the main aim of the present study. A spontaneous projection approach to quantum theory previously formulated achieved this goal in cases where the Hamiltonian does not depend explicitly on time. After discussing the most relevant flaws of orthodox quantum mechanics, a formulation of the spontaneous projection approach in the general case is introduced. This approach yields experimental predictions which in general coincide with those of the orthodox version and overcomes its main flaws.

Share and Cite: Burgos, M. (2018) Unravelling the Quantum Maze. Journal of Modern Physics, 9, 1697-1711. doi: 10.4236/jmp.2018.98106.

1. Introduction

The foundations of quantum mechanics were laid in the period 1900-1926. Some of its achievements were introduced and discussed at the Fifth Solvay Congress (1927). Even though the theory seemed bizarre, it was accepted by the majority of participants at this meeting ( [1] , pp. 109-121). In 1930 Paul Dirac published the first formulation of quantum mechanics [2] . Two years later John von Neumann published Mathematische Grundlagen der Quantenmechanik [3] . Quantum mechanics was born. These first versions of the theory share two characteristics: 1) The state vector \(|\psi\rangle\) (wave function \(\psi\)) describes the state of an individual system. 2) They involve two laws of change of the system’s state: spontaneous (natural) processes, governed by the Schrödinger equation; and measurement processes, ruled by the projection postulate. This postulate accounts for the projections (collapses, reductions or quantum jumps) caused by measurements.

Many other versions of quantum theory followed. Those where \(|\psi\rangle\) describes the state of an individual system and the projection postulate is included among its axioms are generally called standard, ordinary or orthodox quantum mechanics (OQM), sometimes referred to as the Copenhagen Interpretation. From its inception OQM, and in particular its projection postulate, was the target of merciless criticism. Many scientists denounced what they considered its flaws. Among them, 1) it is incompatible with determinism; 2) it implies a kind of action-at-a-distance; and 3) it renounces philosophical realism. In addition, OQM presents a conflict with conservation laws which has been largely ignored [4] [5] [6] [7] [8] and carries the seeds of incoherence and contradictions [9] [10] .

In 1931 Albert Einstein rightfully proclaimed: “the belief in an external world independent of the perceiving subject is the basis of all natural science” [11] . The restoration of philosophical realism as the basis of quantum mechanics is hence worth pursuing. The corresponding change of formalism should be realized, however, keeping as much as possible the experimental predictions of OQM, an impressively successful theory [12] . This is the main aim of the spontaneous projection approach (SPA), a version of quantum theory previously formulated for cases where the Hamiltonian does not depend explicitly on time. It achieved this goal to a certain degree: it does not modify the Schrödinger equation and recovers a version of Born’s postulate where no reference to measurements is made [13] [14] [15] .
But the fact that it cannot account for cases where the Hamiltonian depends explicitly on time was a flaw which became increasingly apparent during our critical review of time dependent perturbation theory (TDPT) and forced us to conclude that OQM weirdness is not limited to the measurement problem [9] [10] . The version of SPA introduced in the present paper is more general than the previous one for it includes cases where the Hamiltonian depends explicitly on time. It keeps, however, the essential traits of SPA first version and yields, as far as we can see, the same experimental predictions obtained from OQM. 2. Philosophical Realism, Quantum Measurements and Scientific Problems We uphold philosophical realism. We did in the first version of SPA and adopt the same epistemology as the basis of our present, more elaborated and general formulation of SPA. Our philosophical starting point can be stated as follows: 1) the things physics is about are supposed to exist, whether they are observed or not; 2) every scientific theory represents things through conceptual models; and 3) the adequacy of a theory (and corresponding models) to the things it refers to must take experimental results into account. In agreement with the philosophical point of view we adopt, “there are no definitive theories or models in (factual) science, because scientific knowledge is always of a hypothetical and never of a final nature” [16] [17] . More on this subject in ( [18] , p. 86). According to Mario Bunge, “the main epistemological problem about quantum theory is whether it represents real (autonomously existing) things, and therefore whether it is compatible with epistemological realism. The latter is the family of epistemologies which assume that a) the world exists independently of the knowing subject, and b) the task of science is to produce maximally true conceptual models of reality…” ( [19] , pp. 191-192). He adds: “The main pillar of the non-realist interpretations of quantum theory is a certain view on measurement and on the projection (reduction) of the state function that is involved in measurement… [Sometimes] ‘measurement’ is misused to denote any interaction of an entity with the environment… However, the worst misconception of measurement is its identification with the subjective experience of taking cognizance of the outcome of measurement” ( [19] , pp. 192-193). For instance, in von Neumann’s view, a complete measurement involves the consciousness of the observer ( [1] , pp. 481-482) ( [20] , pp. 418-421). “By assuming that observation escapes the laws of physics… the orthodox view treats measurement as an unphysical process…” ( [19] , p. 200). In his answer to the question “what can be observed?” Bell quotes Einstein saying “it is theory which decides what is ‘observable’. I think he was right―‘observation’ is a complicated and theory-laden business. Then that notion should not appear in the formulation of fundamental theory” ( [21] , p. 208; emphases added). Bell exposes to ridicule the supposedly necessary intervention of an observer to cause projections when he asks: “What exactly qualifies some physical system to play the role of ‘measurer’? Was the wave function of the world waiting to jump for thousands of millions of years until a single-celled living creature appeared? Or did it have to wait a little longer, for some better qualified system... with a PhD? 
If the theory is to apply to anything but highly idealized laboratory operations, are we not obliged to admit that more or less ‘measurement-like’ processes are going on all the time, more or less everywhere? Do we have jumping all the time?” ( [21] , p. 209). Some authors dealing with the measurement problem avoid reference to the observer, but assume that measuring devices are macroscopic. Concerning this hypothesis Max Jammer highlights: “as long as a quantum mechanical one-body or many-body system does not interact with a macroscopic object, as long as its motion is described by the deterministic Schrödinger time-dependent equation, no events could be considered to take place in the system… If the whole physical universe were composed only of microphysical entities, as it should be according to the atomic theory, it would be a universe of evolving potentialities (time- dependent ψ -functions) but not of real events” ( [1] , p. 474). A few authors have considered the possibility that projections may happen at the microscopic level, that they are not necessarily the result of the interaction between a quantum system and a macroscopic object [22] [23] . We agree. Collapses are a kind of spontaneous processes occurring in nature. In order to take place, they require neither the intervention of observers nor the interaction of a microscopic (quantum) system with a macroscopic (classical) measuring device [13] . Reductions may also happen in tiny isolated systems. According to Bunge “the question of reality has nothing to do with scientific problems such as whether all properties have sharp values, and whether all behavior is causal” ( [19] , p. 192; emphases added). He adds: “unfortunately the two main controversies, those over realism and determinism (or hidden variables), have often been mixed up―and this by scientists of the stature of Einstein and de Broglie, Bohm and d’Espagnat. Yet the two issues are quite different: whereas the problem of realism is epistemological, that of hidden variables is ontological…” ( [19] , p. 168). We agree. But the list of scientific problems which have nothing to do with the question of reality ought to include at least three additional issues not mentioned by Bunge: the kind of action-at-a-distance pointed out by Einstein in the Fifth Solvay Congress ( [1] , p. 116); the validity of conservation laws [8] ; and OQM incoherence and contradictions introduced through TDPT [9] [10] . Let us briefly consider these three issues. 2.1. OQM Implies a Kind of Action-at-a-Distance The contradiction between the individual interpretation of the wave function ψ and the postulate of relativity was first pointed out by Einstein in the Fifth Solvay Congress. In the case of a particle that, after diffraction in a slit arrives at a certain point of a scintillation-screen, he pointed out that the theory of quanta can be considered from two different viewpoints: I) The de Broglie-Schrödinger waves do not represent one individual particle but rather an ensemble of particles distributed in space. Accordingly, the theory provides information not on an individual process but rather on an ensemble of them… II) Quantum mechanics is considered a complete theory of individual processes. Hence, “each particle moving toward the screen is described as a wave packet which, after diffraction, arrives at a certain point P on the screen, and | ψ ( r ) | 2 expresses the probability (probability density) that at a given moment one and the same particle shows its presence at r…” ( [1] , pp. 
115-116). Einstein objected to the second possibility on the following grounds: “If \(|\psi|^2\) is interpreted according to II, then, as long as no localization has been effected, the particle must be considered as potentially present with almost constant probability over the whole area of the screen; however, as soon as it is localized, a peculiar action-at-a-distance must be assumed to take place which prevents the continuously distributed wave in space from producing an effect at two places in the screen… ‘It seems to me,’ Einstein continued, ‘that this difficulty cannot be overcome unless the description of the process in terms of the Schrödinger wave is supplemented by some detailed specification of the localization of the particle during its propagation… If one works only with Schrödinger waves, the [individual] interpretation of \(|\psi|^2\), I think, contradicts the postulate of relativity’.” ( [1] , p. 116; emphases added).

As early as 1927 (during the Fifth Solvay Congress) Einstein proved that the idea that quantum mechanics is a complete theory of individual processes renders inescapable the notion of instantaneous quantum jumps [15] [24] . His conclusion is the result neither of a sophisticated experiment nor of a cumbersome argument. It comes from logical reasoning applied to a very simple thought experiment. To our knowledge, nobody has shown him wrong. Eight years later, Einstein et al. published their celebrated article Can Quantum-Mechanical Description of Physical Reality Be Considered Complete? [25] . In this paper, best known as the EPR paradox, they referred to a system of two particles in an entangled state. In 1964 John Bell proved that no theory of nature that obeys local realism (and so satisfies certain inequalities) can reproduce all the predictions of quantum theory [26] . The contradiction between Bell’s inequalities and quantum mechanics was submitted to experimental test by Stuart Freedman and John Clauser in 1972 [27] . Many other experiments followed this pioneering contribution. In general they yielded results in agreement with quantum mechanics. We have addressed the EPR paradox and related contributions in previous papers [15] [16] [24] .

OQM implies what Einstein named “a spooky action-at-a-distance.” There was a time when this notion was rejected by the majority of physicists. Nowadays it is accepted by almost everybody. This change of attitude can be traced back to the series of experiments aiming to test Bell’s inequalities, in particular that performed by Hensen et al. in 2015 [28] and the quantum teleportation achieved quite recently [29] . Let us add that, even though non-locality has been mostly associated with systems of particles in an entangled state, non-locality has been proven to also be present in experiments performed with individual particles. This can be easily verified with experimental techniques accessible to everybody [24] . The experiment performed by Hensen et al. has prompted Howard Wiseman to claim Death by experiment for local realism [30] . Local realism has died. Let us stress, however, that neither does realism imply locality nor does locality imply realism. These two concepts have been unduly mixed up. Non-locality really happens; the notion that every process is local lacks justification. This does not imply, however, renouncing realism.

2.2. OQM Is at Variance with Determinism and Conservation Laws

OQM conflicts with determinism.
To sample the reaction generated a century ago by such a conflict, let us recall that during the general debate of the Fifth Solvay Congress, its chairman Hendrik Lorentz objected to the rejection of determinism proposed by the majority of speakers. He concluded with a desperate remark: “Je pourrais toujours garder ma foi déterministe pour les phénomènes fondamentaux… Est-ce qu’un esprit plus profond ne pourrait pas se rendre compte des mouvements de ces électrons? Ne pourrait-on pas garder le déterminisme en faisant l’objet d’une croyance? Faut-il nécessairement exiger l’indéterminisme en principe?” [I could always keep my determinist faith for the fundamental phenomena… Could not a deeper mind account for the motions of these electrons? Could one not keep determinism as an object of belief? Must indeterminism necessarily be demanded in principle?] ( [1] , p. 114).

The relation between determinism and conservation laws was first pointed out by Henri Poincaré. Concerning the law of conservation of energy, he declared: “[cette loi] ne peut avoir qu’une signification, c’est qu’il y a une propriété commune à tous les possibles; mais dans l’hypothèse déterministe il n’y a qu’un seul possible et alors la loi n’a plus de sens. Dans l’hypothèse indéterministe, au contraire, elle en prendrait un…” [this law can have only one meaning, namely that there is a property common to all the possibles; but on the determinist hypothesis there is only a single possible, and then the law no longer has any meaning. On the indeterminist hypothesis, by contrast, it would acquire one…] ( [31] , p. 161).

This remark is pertinent: since OQM explicitly states that quantum measurements are processes not ruled by deterministic laws, one should suspect that conservation laws are not necessarily valid in such processes [15] . We have dealt with this subject for some time and concluded that, in the framework of OQM, conservation laws are strictly valid in spontaneous processes (ruled by a deterministic law), but have only a statistical sense in measurement processes (ruled by probability laws) [4] [5] [6] [7] [8] . Taking into account Poincaré’s remark, this should not be surprising: in the first case conservation laws are theorems which can be derived from an axiom which is not valid in the second case.

2.3. OQM Is Incoherent and Contradictory

OQM’s marvelous success in the area of experimental predictions requires, in general, the application of TDPT. It is agreed that the method provided by TDPT must be used in all problems involving a consideration of time, including spontaneous time dependent processes; see for instance ( [2] , p. 168). This is the case of absorption and emission of light and of processes occurring in semiconductors. To give an account of such spontaneous processes, however, TDPT requires the application of a law which is not valid in spontaneous processes. This is a flagrant incoherence which, as far as we know, has not been noticed in the literature [9] . Let us sketch our argument:

Consider a system with Hamiltonian \(\varepsilon\) which does not depend explicitly on time. It will be called the unperturbed Hamiltonian of the system. Its eigenvalue equations are

\[ \varepsilon\,|\phi_n\rangle = E_n\,|\phi_n\rangle \quad (1) \]

where \(E_n\) (\(n = 1, 2, \dots\)) are the eigenvalues of \(\varepsilon\) and \(|\phi_n\rangle\) the corresponding eigenstates. For simplicity we assume the spectrum to be entirely discrete and non-degenerate; all the states referred to in this study are normalized. Let us suppose that at the initial time \(t = 0\) the system is in the stationary state \(|\phi_j\rangle\).
A system in a stationary state will remain in that state forever: if for \(t \geq 0\) the Hamiltonian were \(\varepsilon\), the state vector at time \(t\) would be

\[ |\psi(t)\rangle = e^{-iE_j t/\hbar}\,|\psi(0)\rangle = e^{-iE_j t/\hbar}\,|\phi_j\rangle \quad (2) \]

Nevertheless, TDPT establishes that by applying a time dependent perturbation, transitions between different eigenstates of \(\varepsilon\) can be induced, and it determines the probability corresponding to every particular transition ( [2] , pp. 172-173). If at \(t = 0\) a time dependent perturbation \(W(t)\) is applied, for \(t \geq 0\) the total, perturbed Hamiltonian will be

\[ H(t) = \varepsilon + W(t) \quad (3) \]

The perturbation \(W(t)\) causes the state \(|\psi(0)\rangle\) to change. According to TDPT, the Schrödinger evolution leads the initial state \(|\psi(0)\rangle = |\phi_j\rangle\) to the state

\[ |\psi(t)\rangle = U(t,0)\,|\psi(0)\rangle = U(t,0)\,|\phi_j\rangle \quad (4) \]

where \(U(t,0)\) is, by definition, the evolution operator corresponding to the Hamiltonian \(H(t)\). The probability of a transition taking place from state \(|\phi_j\rangle\) to state \(|\phi_k\rangle\) during the time interval \((0,t)\) is

\[ P_{0,t}(E_j \to E_k) = |\langle\phi_k|\,U(t,0)\,|\phi_j\rangle|^2 \quad (5) \]

TDPT deals with processes having two clearly different stages. In the first―during the time interval \((0,t)\)―a Schrödinger evolution leads the system’s state from \(|\psi(0)\rangle\) to \(|\psi(t)\rangle\) given by Equation (4) with certitude; this change is automatic. In the second, an instantaneous projection of \(|\psi(t)\rangle\) to a stationary state \(|\phi_k\rangle\) is ruled by probability laws [9] . According to OQM, the Schrödinger equation governs every spontaneous process; Born’s postulate and/or the projection postulate apply only when measurements are performed, resulting in a quantum jump. “The fact that TDPT requires the application of postulates concerning measurements to give an account for processes supposedly spontaneous (v.g. absorption and emission of light) is at the very heart of OQM incoherence” [9] .

A further critical review of TDPT unveiled a contradiction reminiscent of Zeno’s paradoxes concerning motion [10] . The argument can be sketched as follows. Referring to a system in the initial state \(|\psi(0)\rangle = |\phi_j\rangle\), Dirac asserts: “at time \(t\) the ket corresponding to the state in Schrödinger’s picture will be \(|\psi(t)\rangle = U(t,0)\,|\phi_j\rangle\) according to Equation (4). The probability of the \(E_n\)’s then having the values \(E_k\) is \(P_{0,t}(E_j \to E_k)\) given by Equation (5). For \(k \neq j\), \(P_{0,t}(E_j \to E_k)\) is the probability of a transition taking place from state \(|\phi_j\rangle\) to state \(|\phi_k\rangle\) during the time interval \((0,t)\), while \(P_{0,t}(E_j \to E_j)\) is the probability of no transition taking place at all. The sum of \(P_{0,t}(E_j \to E_k)\) for all \(k\) is, of course, unity” ( [2] , pp. 172-173; emphases added).

The transition taking place from state \(|\phi_j\rangle\) to state \(|\phi_k\rangle\) during the interval \((0,t)\) involves an instantaneous jump, i.e. a discontinuous change at time \(t\). Since the sum of probabilities corresponding to all possible discontinuous changes at time \(t\) is unity, no room is left for a non-null probability corresponding to a process continuous at this instant [10] . Dirac does not impose any particular condition on the instant \(t\). Hence the process cannot be continuous at any instant, the state vector at time \(t\) cannot be \(|\psi(t)\rangle = U(t,0)\,|\phi_j\rangle\), and transitions between stationary states during the time interval \((0,t)\) as referred to in TDPT cannot take place; the system remains stuck to its initial stationary state. “Paraphrasing Zeno, these kinds of transitions between stationary states are nothing but illusions” [10] .
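To make Equation (5) concrete, here is a minimal numerical sketch (an illustration added to this discussion, not taken from the paper) that builds the evolution operator \(U(t,0)\) for a driven two-level system and evaluates the transition probability \(|\langle\phi_2|U(t,0)|\phi_1\rangle|^2\); the unperturbed energies, the coupling and the time grid are arbitrary choices made for the example:

    import numpy as np

    hbar = 1.0
    E1, E2 = 0.0, 1.0                       # eigenvalues of the unperturbed Hamiltonian eps
    eps = np.diag([E1, E2])                 # eps is diagonal in the basis {|phi_1>, |phi_2>}
    V = 0.05 * np.array([[0.0, 1.0],
                         [1.0, 0.0]])       # coupling matrix of the perturbation
    omega = (E2 - E1) / hbar                # drive the system on resonance

    def U(t, steps=2000):
        """Evolution operator U(t,0) for H(t) = eps + V cos(omega t), built step by step."""
        dt = t / steps
        u = np.eye(2, dtype=complex)
        for n in range(steps):
            H = eps + V * np.cos(omega * (n + 0.5) * dt)
            w, P = np.linalg.eigh(H)        # short-time propagator exp(-i H dt / hbar)
            u = P @ np.diag(np.exp(-1j * w * dt / hbar)) @ P.conj().T @ u
        return u

    phi1 = np.array([1.0, 0.0], dtype=complex)      # initial stationary state |phi_1>
    for t in (1.0, 10.0, 30.0):
        print(t, abs((U(t) @ phi1)[1]) ** 2)        # P_{0,t}(E_1 -> E_2), Equation (5)

For weak coupling this reproduces the familiar (Rabi-type) behaviour of first-order TDPT; the point at issue in the text is not the number itself, but the fact that reading off a definite \(E_k\) at time \(t\) invokes a projection, i.e. a postulate that OQM reserves for measurements.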
Except for Albert Messiah, no other author known to us imposes any particular condition on the interval \((0,t)\). By contrast, Messiah explicitly assumes that an instantaneous measurement is performed at time \(t\) ( [32] , p. 621). In the absence of measurement, the Schrödinger evolution follows and the probability of a transition taking place from \(|\phi_j\rangle\) to \(|\phi_k\rangle\) during the interval \((0,t)\) is null. To avoid the “quantum Zeno contradiction” Messiah pays the price of assuming that an instantaneous measurement is performed every time a transition between two stationary states takes place [10] .

Quantum weirdness has traditionally been associated with the measurement problem. To solve it, different authors have suggested several strategies. Among them are the statistical interpretation of quantum mechanics [33] , the many worlds interpretation [34] , decoherence [12] and the continuous spontaneous localization theory [22] . We have addressed these and other proposed solutions to the measurement problem in previous papers [13] [14] [15] . Despite their value, these contributions do not solve the measurement problem, let alone OQM incoherence and the quantum Zeno contradiction just mentioned. OQM weirdness is certainly not limited to the measurement problem. It is much more serious and justifies a radical revision of the theory [9] [10] . An overview of such a task follows.

3. The Spontaneous Projection Approach

Two kinds of processes irreducible to one another occur in nature: the strictly continuous and causal ones, which are governed by a deterministic law, and those implying discontinuities, which are ruled by probability laws. This is the main hypothesis of SPA [13] [14] [15] . We explicitly discard observer intervention and the interaction of the quantum system with a macroscopic measuring device as sources of projections. So the question is: what could then induce quantum jumps? SPA answers: the tendency the system’s state has to jump to the eigenstates of operators representing conserved quantities.

Let us establish this hypothesis in a formal way. Let \(\alpha\) be the self-adjoint operator representing the physical quantity \(\alpha\) referred to the physical system \(\zeta\). We assume that the Hamiltonian, denoted by \(\varepsilon\), does not depend explicitly on time \(t\). Then, if the operator \(\alpha\) fulfills the conditions

\[ \frac{\partial\alpha}{\partial t} = 0 \quad (6) \]

\[ [\alpha,\varepsilon] = 0 \quad (7) \]

the system’s state \(|\psi(t)\rangle\) has the tendency to jump to the eigenstates of \(\alpha\). We have shown, however, that this tendency is seldom realized [13] [14] [15] . Let us highlight the difference between this hypothesis and that adopted in the continuous spontaneous localization theory. In the latter approach collapses localize the wave function [22] . As a result, steady states cannot be attained [35] . By contrast, according to SPA in most cases projections lead the system to stationary states [13] .

3.1. The Statistical Sense of Conservation Laws

We have previously asserted that the conflict of OQM with conservation laws has been largely ignored [4] [5] [6] [7] [8] . Let us briefly review this issue. The mean value of the physical quantity \(\alpha\) is by definition

\[ \langle\alpha\rangle(t) = \langle\psi(t)|\,\alpha\,|\psi(t)\rangle \quad (8) \]

In Schrödinger evolutions the validity of Equations (6) and (7) ensures that \(\langle\alpha\rangle(t)\) remains constant in time for every state \(|\psi(t)\rangle\) of \(\zeta\). It is said that \(\alpha\) is a constant of the motion and that \(\alpha\) is conserved.
By contrast, in processes ruled by a law different from the Schrödinger equation, the validity of Equations (6) and (7) does not guarantee that \(\langle\alpha\rangle(t)\) remains constant in time: if the process starts at \(t_0\) and ends at \(t_f\), it can happen that \(\langle\alpha\rangle(t_f) \neq \langle\alpha\rangle(t_0)\) [8] . Hence the assertions “\(\alpha\) is a constant of the motion” and “\(\alpha\) is conserved” are not justified. However, the average of the changes \(\delta\langle\alpha\rangle = \langle\alpha\rangle(t_f) - \langle\alpha\rangle(t_0)\), obtained by repeating the process many times, converges to zero [8] .

Let us consider a set of \(N\) orthonormal vectors \(|u_1\rangle, |u_2\rangle, \dots, |u_N\rangle\) (\(\{N_u\}\) for short) such that the system’s state at time \(t\) can be written

\[ |\psi(t)\rangle = \sum_j c_j(t)\,|u_j\rangle \quad (9) \]

where \(c_j(t) = \langle u_j|\psi(t)\rangle\) and \(j = 1, 2, \dots, N\). The mean value of \(\alpha\) at time \(t\) is \(\langle\alpha\rangle(t)\) given by Equation (8); in particular, if \(|\psi(t)\rangle = |u_j\rangle\) this mean value is \(\langle u_j|\,\alpha\,|u_j\rangle\). Then,

Postulate I: If Equations (6) and (7) are satisfied, the validity of

\[ \langle\psi(t)|\,\alpha\,|\psi(t)\rangle = \sum_j |c_j(t)|^2\,\langle u_j|\,\alpha\,|u_j\rangle \quad (10) \]

is a necessary condition for the state \(|\psi(t)\rangle\) given by Equation (9) to be able to collapse to the vectors of the set \(\{N_u\}\), i.e. for jumps like \(|\psi(t)\rangle \to |u_1\rangle\), or \(|\psi(t)\rangle \to |u_2\rangle\), …, or \(|\psi(t)\rangle \to |u_N\rangle\) to occur [13] [14] [15] .

Postulate I recovers Poincaré’s assertion: in the indeterminist hypothesis, conservation laws have a statistical sense [13] [14] [15] .

3.2. The Concept of Preferential Set

If there is a unique set of \(N \geq 2\) orthonormal vectors \(|\varphi_1\rangle, |\varphi_2\rangle, \dots, |\varphi_N\rangle\) (\(\{N_\varphi\}\) for short) such that 1) the state of the physical system \(\zeta\) at time \(t\) can be written

\[ |\psi(t)\rangle = \sum_j \gamma_j(t)\,|\varphi_j\rangle \quad (11) \]

where 2) \(\gamma_j(t) = \langle\varphi_j|\psi(t)\rangle \neq 0\) for every \(j = 1, 2, \dots, N\); 3) at least \((N-1)\) vectors belonging to the set \(\{N_\varphi\}\) are eigenstates of the Hamiltonian \(\varepsilon\) (i.e. stationary states); and 4) every self-adjoint operator \(\alpha\) for which Equations (6) and (7) are valid satisfies the relation

\[ \langle\psi(t)|\,\alpha\,|\psi(t)\rangle = \sum_j |\gamma_j(t)|^2\,\langle\varphi_j|\,\alpha\,|\varphi_j\rangle \quad (12) \]

we shall say that \(\{N_\varphi\}\) is the preferential set of \(\zeta\) in the state \(|\psi(t)\rangle\) and the members of \(\{N_\varphi\}\) will be called its preferential states.

Comment 1: According to this definition, a system \(\zeta\) in the state \(|\psi(t)\rangle\) can either have a unique preferential set including at least two preferential states or not have a preferential set at all.

Comment 2: The concept of the preferential set of \(\zeta\) in the state \(|\psi(t)\rangle\) adopted here coincides with that introduced in [10] and is different from our original concept of a preferential set of \(\zeta\) in the state \(|\psi(t)\rangle\) [13] [14] ; the difference being that in the original definition the set \(\{N_\varphi\}\) was not supposed to be unique, and condition (2) was not assumed to be valid.

Comment 3: Besides the concept of a preferential set of \(\zeta\) in the state \(|\psi(t)\rangle\), in previous papers we introduced the concepts of preferential basis and of maximal preferential set [13] [14] . Taking into account the present definition of the preferential set of \(\zeta\) in the state \(|\psi(t)\rangle\), the concepts of preferential basis and of maximal preferential set become superfluous. Hence they will not be referred to in the following.

We have so far assumed that the system’s Hamiltonian does not depend explicitly on time. Let us now consider cases where the system’s Hamiltonian depends explicitly on time. It can be written

\[ H(t) = \varepsilon + W(t) \quad (13) \]

where \(W(t)\) includes every term of the Hamiltonian which depends explicitly on time.
Then we state

Postulate II: The preferential set (and its preferential states) of \(\zeta\) in the state \(|\psi(t)\rangle\) does not depend on the term \(W(t)\).

Examples of the determination of preferential states have been given elsewhere [10] [13] [14] [15] .

3.3. The Formalism of SPA

SPA includes the primitive (undefined) notions: system, state, physical quantity (or dynamical variable) and probability. Note that, except for the last one, these primitive concepts coincide with those adopted in Jammer’s axiomatic presentation of the formalism of quantum mechanics due to von Neumann ( [1] , p. 5).

Postulate A: To every system \(\zeta\) corresponds a Hilbert space \(\mathcal{S}\) whose vectors (state vectors, wave functions) \(|\psi(t)\rangle\) completely describe the states of the system.

Postulate B: To every physical quantity \(\alpha\) corresponds uniquely a self-adjoint operator \(\alpha\) acting in \(\mathcal{S}\). It has associated the eigenvalue equations

\[ \alpha\,|a_k^{\nu}\rangle = a_k\,|a_k^{\nu}\rangle \quad (14) \]

(\(\nu\) is introduced in order to distinguish between the different eigenvectors that may correspond to one eigenvalue \(a_k\)), and the closure relation

\[ \sum_{k,\nu} |a_k^{\nu}\rangle\langle a_k^{\nu}| = I \quad (15) \]

is fulfilled (here \(I\) is the identity operator). If \(k\) or \(\nu\) is continuous, the respective sum has to be replaced by an integral.

Comment I: The correspondence postulates A and B associate the primitive notions system, physical quantity and state of the system with mathematical entities. The same is true of von Neumann’s version of quantum mechanics reported in ( [1] , p. 5).

Postulate C: Continuous processes are governed by the Schrödinger equation

\[ i\hbar\,\frac{d}{dt}\,|\psi(t)\rangle = H(t)\,|\psi(t)\rangle \quad (16) \]

where \(H(t)\) is the Hamiltonian of the system, \(\hbar\) Planck’s constant divided by \(2\pi\), and \(i\) the imaginary unit.

Comment II: The Schrödinger equation is a deterministic law. The solution \(|\psi(t)\rangle\) of Equation (16) which corresponds to the initial condition \(|\psi(0)\rangle\) is unique. The system’s state evolves in correspondence with the equation

\[ |\psi(t)\rangle = U(t,0)\,|\psi(0)\rangle \quad (17) \]

where \(U(t,0)\) is the evolution operator corresponding to the Hamiltonian \(H(t)\); more details in ( [2] , p. 109) ( [36] , p. 137) ( [37] , p. 308) ( [38] , p. 41).

Postulate D: A discontinuous change of the system’s state occurs if and only if \(|\psi(t)\rangle\) jumps to one of its preferential states. If the system \(\zeta\) in the state \(|\psi(t)\rangle\) does not have preferential states, the process is necessarily continuous and governed by the Schrödinger equation.

Let us assume that the system \(\zeta\) in the state \(|\psi(t)\rangle\) has the preferential set \(\{N_\varphi\}\). So we can write

\[ |\psi(t)\rangle = \sum_k \gamma_k(t)\,|\varphi_k\rangle \quad (18) \]

where \(k = 1, 2, \dots, N\). Under these conditions we state

Postulate E: In the small time interval \((t, t+dt)\) the state \(|\psi(t)\rangle\) can undergo the following changes:

\[ |\psi(t)\rangle \to |\psi(t+dt)\rangle = |\varphi_k\rangle \quad (19) \]

with probability

\[ dP_k(t) = |\gamma_k(t)|^2\,\frac{dt}{\tau(t)} \quad (20) \]

or

\[ |\psi(t)\rangle \to |\psi_U(t+dt)\rangle = U(t+dt,t)\,|\psi(t)\rangle \quad (21) \]

with probability

\[ dP_U(t) = 1 - \frac{dt}{\tau(t)} \quad (22) \]

where

\[ \tau(t)\,\Delta\varepsilon(t) = \frac{\hbar}{2} \quad (23) \]

and

\[ [\Delta\varepsilon(t)]^2 = \langle\psi(t)|\,\varepsilon^2\,|\psi(t)\rangle - [\langle\psi(t)|\,\varepsilon\,|\psi(t)\rangle]^2 \quad (24) \]

Comment III: Since \(|\psi(t)\rangle\) is normalized, during a small time interval \((t, t+dt)\) the system in the state \(|\psi(t)\rangle\) has a probability \(dt/\tau(t)\) to jump to one of its \(N\) preferential states. If \(dt \ll \tau(t)\), the dominant process is the Schrödinger evolution [13] .

Comment IV: In general the parameter \(\tau\) defined by Equation (23) depends on time \(t\). But if \(\tau\) is a constant, the state \(|\psi(t)\rangle\) may be considered as an unstable state that can decay to one of its \(N\) preferential states [13] [14] [15] .
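As an illustration of Postulate E (added here for concreteness; the two preferential states, the weights and the value of \(\tau\) are invented for the example and, for simplicity, the amplitudes \(\gamma_k\) are held fixed between jumps), the stochastic process can be simulated directly: in each step of length \(dt\) the state either jumps to a preferential state \(|\varphi_k\rangle\) with probability \(|\gamma_k|^2\,dt/\tau\) or keeps following the Schrödinger evolution. The fraction of runs that have not yet collapsed should then follow the exponential decay law quoted in the next paragraph:

    import numpy as np

    rng = np.random.default_rng(0)

    tau = 5.0                        # assumed constant tau (Equation (23) with constant Delta-eps)
    p_k = np.array([0.36, 0.64])     # |gamma_k|^2 for two preferential states, summing to 1
    dt, T, runs = 0.01, 20.0, 20000
    steps = int(T / dt)

    not_jumped = np.zeros(steps)     # number of runs still following Equation (21) at each step
    counts = np.zeros(2)             # which preferential state was reached, Equation (19)
    for _ in range(runs):
        for n in range(steps):
            not_jumped[n] += 1
            if rng.random() < dt / tau:                # total jump probability dt/tau, cf. Equation (20)
                counts[rng.choice(2, p=p_k)] += 1      # branch chosen with the weights |gamma_k|^2
                break                                  # this run has collapsed; stop evolving it

    t = (np.arange(steps) + 1) * dt
    print("branching ratios:", counts / counts.sum())      # close to [0.36, 0.64]
    print("survival:", not_jumped[::500] / runs)            # close to exp(-t/tau)
    print("exp(-t/tau):", np.exp(-t[::500] / tau))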
Let \(P_U(t)\) be the probability that the system’s state has not jumped to any preferential state in the interval \((0,t)\). The well-known exponential decay law is then obtained:

\[ P_U(t) = e^{-t/\tau} \quad (25) \]

4. Concluding Remarks

Let us conclude with the following remarks. On the one hand, SPA and OQM share several traits: 1) Both theories refer to individual systems, not to ensembles of systems similarly prepared. 2) SPA does not modify OQM in a substantial way: it keeps the Schrödinger equation without changes and recovers a version of Born’s postulate where no reference to measurement is made. So, in general its experimental predictions coincide with those of OQM [13] [14] [15] . 3) Both theories imply a “spooky action-at-a-distance”, a kind of action-at-a-distance easily verifiable with techniques accessible to everybody [24] . Since this effect actually happens, there is no reason to discard theories which imply it. 4) In SPA, as in OQM, conservation laws fail in individual processes involving quantum jumps.

On the other hand, SPA and OQM exhibit remarkable differences: 1) Unlike OQM, SPA is compatible with philosophical realism. In SPA there is no room for observers placed above the laws of nature. 2) The notions of measurement and observation, conspicuous in OQM, are alien to SPA. Differing from OQM, SPA fulfills Bell’s requirement: “[the notion of observation] should not appear in the formulation of fundamental theory” ( [21] , p. 208; emphases added). 3) In OQM spontaneous processes are necessarily continuous and ruled by the Schrödinger equation, a deterministic law which yields automatic changes. By contrast, in SPA spontaneous processes are not necessarily continuous and ruled by the Schrödinger equation. If the system in the state \(|\psi(t)\rangle\) has the preferential set \(\{|\varphi_1\rangle, |\varphi_2\rangle, \dots, |\varphi_N\rangle\}\), it can either follow a Schrödinger evolution or instantaneously jump to one of its preferential states. 4) In OQM reductions are ad hoc; in SPA they are not surreptitious but explicitly included in the formalism. 5) OQM is incoherent and exhibits a contradiction reminiscent of Zeno’s paradoxes of motion. SPA escapes these issues thanks to the hypothesis that collapses are natural processes [10] .

In sum: while yielding experimental predictions which in general coincide with those of OQM, SPA enjoys a coherence which is absent from OQM and overcomes its main flaws.

Acknowledgements

We are indebted to Professor J. C. Centeno for many fruitful discussions. We thank Carlos Valero for his assistance with the transcription of the manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

References

[1] Jammer, M. (1974) The Philosophy of Quantum Mechanics. John Wiley & Sons, New York.
[2] Dirac, P.A.M. (1958) The Principles of Quantum Mechanics. Clarendon Press, Oxford.
[3] von Neumann, J. (1932) Mathematische Grundlagen der Quantenmechanik. Springer, Berlin.
[4] Burgos, M.E. (1994) Physics Essays, 7, 69-71.
[5] Burgos, M.E. (1997) Speculations in Science and Technology, 20, 183-187.
[6] Burgos, M.E., Criscuolo, F.G. and Etter, T. (1999) Speculations in Science and Technology, 21, 227-233.
[7] Criscuolo, F.G. and Burgos, M.E. (2000) Physics Essays, 13, 80-84.
[8] Burgos, M.E. (2010) Journal of Modern Physics, 1, 137-142.
[9] Burgos, M.E. (2016) Journal of Modern Physics, 7, 1449-1454.
[10] Burgos, M.E. (2017) Journal of Modern Physics, 8, 1382-1397.
[11] Einstein, A. (1931) James Clerk Maxwell: A Commemoration Volume. Cambridge University Press, Cambridge.
[12] Tegmark, M. and Wheeler, J. (2001) Scientific American, 284, 68-75.
[13] Burgos, M.E. (1998) Foundations of Physics, 28, 1323-1346.
[14] Burgos, M.E. (2008) Foundations of Physics, 38, 883-907.
[15] Burgos, M.E. (2015) The Measurement Problem in Quantum Mechanics Revisited. In: Pahlavani, M., Ed., Selected Topics in Applications of Quantum Mechanics, INTECH, Croatia, 137-173.
[16] Burgos, M.E. (1983) Kinam, 5, 277-284.
[17] Burgos, M.E. (1987) Foundations of Physics, 17, 809-812.
[18] Bunge, M. (1973) Philosophy of Physics. Reidel Publishing Company, Dordrecht, Boston, Lancaster.
[19] Bunge, M. (1985) Treatise on Basic Philosophy, Vol. 7, Philosophy of Science & Technology. D. Reidel Publishing Company, Dordrecht, Boston, Lancaster.
[20] von Neumann, J. (1955) Mathematical Foundations of Quantum Mechanics. Princeton University Press, Princeton.
[21] Bell, M., Gottfried, K. and Veltman, M. (2001) John S. Bell on the Foundations of Quantum Mechanics. World Scientific, Singapore.
[22] Ghirardi, G.C., Rimini, A. and Weber, T. (1986) Physical Review D, 34, 470-490.
[23] Primas, H. (1990) The Measurement Process in the Individual Interpretation of Quantum Mechanics. In: Cini, M. and Lévy-Leblond, J.M., Eds., Quantum Theory Without Reduction, Adam Hilger, Bristol, 49-68.
[24] Burgos, M.E. (2015) Journal of Modern Physics, 6, 1663-1670.
[25] Einstein, A., Podolsky, B. and Rosen, N. (1935) Physical Review, 47, 777-780.
[26] Bell, J.S. (1964) Physics, 1, 195-200.
[27] Freedman, S.J. and Clauser, J.F. (1972) Physical Review Letters, 28, 938-941.
[28] Hensen, B., Bernien, H. and Dréau, A.E. (2015) Nature, 526, 682-686.
[29] Wikipedia, The Free Encyclopedia: Quantum Teleportation.
[30] Wiseman, H. (2015) Nature, 526, 649-650.
[31] Poincaré, H. (1906) La science et l’hypothèse. Flammarion, Paris.
[32] Messiah, A. (1965) Mécanique Quantique. Dunod, Paris.
[33] Ballentine, L.E. (1970) Reviews of Modern Physics, 42, 358-381.
[34] Wikipedia, The Free Encyclopedia: Many-Worlds Interpretation.
[35] Ballentine, L.E. (1991) Physical Review A, 43, 9-12.
[36] Bes, D.R. (2004) Quantum Mechanics. Springer, Berlin.
[37] Cohen-Tannoudji, C., Diu, B. and Laloë, F. (1977) Quantum Mechanics. John Wiley & Sons, New York, London, Sydney, Toronto.
[38] Yndurain Muñoz, F.J. (2003) Mecánica Cuántica. Editorial Ariel S.A., Barcelona.
Theoretical chemistry

Theoretical chemistry involves the use of physics to explain or predict chemical phenomena. In recent years, it has consisted primarily of quantum chemistry, i.e., the application of quantum mechanics to problems in chemistry. Theoretical chemistry may be broadly divided into electronic structure, dynamics, and statistical mechanics. In the process of solving the problem of predicting chemical reactivities, these may all be invoked to various degrees. Other "miscellaneous" research areas in theoretical chemistry include the mathematical characterization of bulk chemistry in various phases (e.g. the study of chemical kinetics) and the study of the applicability of more recent mathematical developments to the basic areas of study (for instance, the possible application of principles of topology to the study of electronic structure). The latter area of theoretical chemistry is sometimes referred to as mathematical chemistry.

Much of this may be categorized as computational chemistry, although computational chemistry usually refers to the application of theoretical chemistry in an applied setting, usually with some approximation scheme such as certain types of post Hartree-Fock, Density Functional Theory, semiempirical methods (like, for instance, PM3) or force field methods. Some chemical theorists apply statistical mechanics to provide a bridge between the microscopic phenomena of the quantum world and the macroscopic bulk properties of systems.

Theoretical attacks on chemical problems go back to the earliest days, but until the formulation of the Schrödinger equation by the Austrian physicist Erwin Schrödinger, the techniques available were rather crude and speculative. Currently, much more sophisticated theoretical approaches, based on Quantum Field Theory and Nonequilibrium Green Function Theory, are in vogue.

Branches of theoretical chemistry

• Quantum chemistry: The application of quantum mechanics to chemistry.
• Computational chemistry: The application of computer codes to chemistry.
• Molecular modelling: Methods for modelling molecular structures without necessarily referring to quantum mechanics. Examples are molecular docking, protein-protein docking, drug design, combinatorial chemistry.
• Molecular dynamics: Application of classical mechanics for simulating the movement of the nuclei of an assembly of atoms and molecules.
• Molecular mechanics: Modelling of the intra- and inter-molecular interaction potential energy surfaces via a sum of interaction forces (a minimal sketch of such an energy sum appears at the end of this article).
• Mathematical chemistry: Discussion and prediction of the molecular structure using mathematical methods without necessarily referring to quantum mechanics.
• Theoretical chemical kinetics: Theoretical study of the dynamical systems associated with reactive chemicals and their corresponding differential equations.

Closely related disciplines

Historically, the major field of application of theoretical chemistry has been in the following fields of research:

• Atomic physics: The discipline dealing with electrons and atomic nuclei.
• Molecular physics: The discipline of the electrons surrounding the molecular nuclei and of movement of the nuclei. This term usually refers to the study of molecules made of a few atoms in the gas phase. But some consider that molecular physics is also the study of bulk properties of chemicals in terms of molecules.
• Physical chemistry and chemical physics: Chemistry investigated via physical methods like laser techniques, scanning tunneling microscopy, etc.
The formal distinction between both fields is that physical chemistry is a branch of chemistry while chemical physics is a branch of physics. In practice this distinction is quite vague. • Many-body theory: The discipline studying the effects which appear in systems with large number of constituents. It is based on quantum physics - mostly second quantization formalism - and quantum electrodynamics. Hence, the theoretical chemistry discipline is sometimes seen as a branch of those fields of research. Nevertheless, more recently, with the rise of the density functional theory and other methods like molecular mechanics, the range of application has been extended to chemical systems which are relevant to other fields of chemistry and physics like biochemistry, condensed matter physics, nanotechnology or molecular biology. • Attila Szabo and Neil S. Ostlund, Modern Quantum Chemistry: Introduction to Advanced Electronic Structure Theory, Dover Publications; New Ed edition (1996) ISBN-10: 0486691861, ISBN-13: 978-0486691862 The deepest part of Theoretical Chemistry must end up in Quantum Mechanics. — R. P. Feynman, The Feynman Lectures on Physics This article is licensed under the GNU Free Documentation License. It uses material from the Wikipedia article "Theoretical_chemistry". A list of authors is available in Wikipedia.
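As a toy illustration of the molecular mechanics idea listed above — a potential energy written as a plain sum of simple interaction terms in the nuclear coordinates — the following Python sketch adds a harmonic bond-stretch term and a 12-6 Lennard-Jones non-bonded term for a three-atom toy system; the functional forms are the standard textbook ones, but the parameter values are invented for the example and do not correspond to any published force field:

    import numpy as np

    def bond_energy(r, r0=1.53, k=300.0):
        """Harmonic bond stretch: E = (k/2) (r - r0)^2."""
        return 0.5 * k * (r - r0) ** 2

    def lennard_jones(r, epsilon=0.1, sigma=3.4):
        """12-6 Lennard-Jones term between a non-bonded pair of atoms."""
        x = (sigma / r) ** 6
        return 4.0 * epsilon * (x ** 2 - x)

    def total_energy(coords, bonds, nonbonded):
        """Force-field energy as a sum of pairwise terms over the listed pairs."""
        E = 0.0
        for i, j in bonds:
            E += bond_energy(np.linalg.norm(coords[i] - coords[j]))
        for i, j in nonbonded:
            E += lennard_jones(np.linalg.norm(coords[i] - coords[j]))
        return E

    # three atoms: 0-1 and 1-2 are bonded, 0-2 interacts only through Lennard-Jones
    coords = np.array([[0.0, 0.0, 0.0], [1.55, 0.0, 0.0], [2.30, 1.30, 0.0]])
    print(total_energy(coords, bonds=[(0, 1), (1, 2)], nonbonded=[(0, 2)]))

Real force fields add angle, torsion and electrostatic terms, but the structure is the same: cheap analytic functions of the nuclear coordinates, with no explicit electrons — which is what makes molecular mechanics applicable to systems far too large for quantum chemistry.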
Friday, April 12, 2013 ... Deutsch/Español/Related posts from blogosphere Why quantum mechanics can't be any different Classical physics and quantum mechanics are the only two frameworks for physics that are worth mentioning. And it's quantum mechanics that is more true in Nature, that is more fundamental, and that is more legitimate as the starting point. Classical physics may be derived as a limit of quantum mechanics but quantum mechanics can't be obtained by any similarly straightforward, guaranteed-to-succeed procedure from classical physics. And yet, quantum mechanics remains wildly misunderstood and underestimated. Many people, including professional physicists, can't resist their primitive animal instincts and they keep on trying to rape quantum mechanics, insert their prickly objections and modifications into it, and make it more classical. However, quantum mechanics is well protected and it can't get pregnant with bastards. It's just patiently saying "f*** off" to these deluded non-physicists and equally deluded physicists. Even those who realize that quantum mechanics – the framework respected by Nature – is fundamentally different than classical physics and that there won't be any counterrevolution that would make physics classical once again often underestimate the rigidity and uniqueness of the universal postulates of quantum mechanics. They think that many things could be altered, mutated, and quantum mechanics has many possible cousins and it's an accident that Nature chose this particular quantum mechanics and not one of the cousins. They're wrong, too. In this text, I will demonstrate why certain properties of quantum mechanics are inevitable for a consistent theory. Complex numbers are the only allowed number system for amplitudes First, let us imagine a cousin of quantum mechanics where wave functions \(\ket\psi\) take values in a Hilbert space that isn't complex: let's try to replace \(\CC\) by \(\RR\), \(\HHH\), or something else. If we're not allowed to multiply the state vector by the imaginary unit \(i\), e.g. if we try to work with the real numbers \(\RR\), we're immediately in trouble. Schrödinger's equation says that\[ i\hbar\frac{\dd}{\dd t}\ket\psi = \hat H \ket\psi \] and the coefficient is pure imaginary. This pure imaginary character of the coefficient is what is needed to preserve \(\braket\psi\psi\), the norm that is interpreted as the probability. For energy eigenstates, the equation says that only the phase is changing with time. With a real coefficient, the wave function would exponentially increase or decrease with time – and so would the total probability of all mutually excluding properties of the physical system. I will discuss the need for "unitarity of the evolution" momentarily. We began with Schrödinger's equation as a place where the imaginary unit \(i\) appears but as you know, I don't consider this equation to be excessively "superfundamental" in quantum mechanics. One may show – and Dirac has shown – that this equation is equivalent to the Heisenberg picture in which the state vector is constant but the operators evolve according to the Heisenberg equations of motion\[ i\hbar\frac{\dd}{\dd t}\hat L = [\hat L, \hat H] \] Needless to say, the coefficient \(i\hbar\) in this equation may be shown to be the same \(i\hbar\) we had in Schrödinger's equation. In this picture, we may offer many independent explanations why the coefficient has to be pure imaginary. 
For example, if the operator \(\hat L\) is required to be Hermitian at all times – as appropriate for observables, as we will discuss – its time derivative has to be Hermitian, too. However, the commutator of two Hermitian operators is anti-Hermitian, i.e. it obeys\[ [\hat L, \hat H]^\dagger &= (\hat L \hat H - \hat H\hat L)^\dagger =\\ &= \hat H\hat L - \hat L\hat H = [\hat H,\hat L] = -[\hat L, \hat H] \] where I have used \[ \hat H^\dagger = \hat H, \quad \hat L^\dagger = \hat L, \quad (\hat X\hat Y)^\dagger = \hat Y^\dagger \hat X^\dagger. \] If we want to express a Hermitian operator using this anti-Hermitian commutator – a candidate for the time derivative of a Hermitian operator has to be Hermitian – we have to multiply the commutator by an imaginary constant, one we call \(i\hbar\), which erases "anti-" from the adjective. We don't really need to discuss the Hamiltonian and time evolution at all. Think about Heisenberg's "uncertainty principle" commutator\[ [\hat x,\hat p ] = i\hbar. \] A few paragraphs above, I proved that the commutator of two Hermitian operators is actually anti-Hermitian. So if the commutator of these two particular operators is a \(c\)-number, i.e. a multiple of the unit operator, then the \(c\)-number has to be pure imaginary. Again, it's called \(i\hbar\) using the usual symbols and unit conventions of quantum mechanics. And once you accept that the commutator is a pure imaginary i.e. non-real operator, it follows that there can't be a basis in which both \(\hat x\) and \(\hat p\) would be expressed by real matrices; the commutator of any two real matrices is real as well which is no good to satisfy the relationship above! So the imaginary unit \(i\) is clearly needed. You may try to go from \(\CC\) to the opposite direction than to \(\RR\), i.e. to larger number systems such as \(\HHH\) and \(\OO\). If you pick the quaternions \(\HHH\), it won't be lethal but the non-complex Hamilton numbers will be redundant. There are various ways to see it. For example, Schrödinger's equation or Heisenberg's equations will have one particular pure imaginary unit which we may still call \(i\) without a loss of generality. If we pick some "orthogonal" imaginary unit in the quaternions such as \(j\), the hypothetically quaternionic wave function will effectively split to two complex ones,\[ \ket{\psi_\HHH} = \ket{\psi_\CC}_1 + j \ket{\psi_\CC}_2 \] and these two state vectors labeled by the subscripts \(1,2\) will evolve independently from each other. The only physically meaningful interpretation of the wave function above will be equivalent to a density matrix that is obtained by mixing the two pure density matrices:\[ \rho_{\HHH,\rm equiv} = \ket{\psi_\CC}_1 \bra{\psi_\CC}_1 + \ket{\psi_\CC}_2 \bra{\psi_\CC}_2. \] You don't get anything fundamentally new. The "quaternionic wave function" will be intrinsically "reducible" and you may always study the elementary building blocks that the wave function may be reduced to – and they're complex. At least with a single time coordinate, you can't get anything really new that could be called "quaternionic quantum mechanics". Only the complex numbers are tolerable as the "fair number system" for the coordinates of the state vector. Real numbers are complex numbers that are constrained by an extra condition – one that is lethal for a physical interpretation, as we have pointed out; quaternions can't really show their muscles beyond their being a "pair of complex numbers". 
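A quick numerical sanity check of the commutator argument above (my own illustration, using random matrices rather than any particular physical operators): the commutator of two Hermitian matrices comes out anti-Hermitian, and only after division by an imaginary constant does it become Hermitian again, which is why the pure imaginary \(i\hbar\) has to appear in the Heisenberg equation of motion.

    import numpy as np

    rng = np.random.default_rng(1)

    def random_hermitian(n):
        A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
        return (A + A.conj().T) / 2

    L, H = random_hermitian(4), random_hermitian(4)
    C = L @ H - H @ L                       # the commutator [L, H]

    print(np.allclose(C.conj().T, -C))      # True: [L, H] is anti-Hermitian
    print(np.allclose(C.conj().T, C))       # False: it is not Hermitian
    D = C / 1j                              # [L, H] divided by i is Hermitian again,
    print(np.allclose(D.conj().T, D))       # so dL/dt = [L, H] / (i hbar) can be an observable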
This fundamental character of complex numbers holds even in "deep enough mathematics" that is detached from the physical conditions we have discussed. For example, if we talk about representations of groups – and the Hilbert spaces in any quantum mechanical theory are representations of groups and algebras of operators – the "default" character of a representation is always complex, i.e. \(\CC^n\). The real representations \(\RR^n\) and the pseudoreal representations, which include the quaternionic ones \(\HHH^{n/2}\), may be interpreted as ordinary complex representations \(\CC^n\) with an extra "structure map" \(j\) acting on the representation that is antilinear (differing from a linear map by an extra complex conjugation in a defining "scalar linearity" condition) and that commutes with the action of the group. Real representations are those whose structure map obeys \(j^2=+1\) while the pseudoreal (including quaternionic) representations are those that obey \(j^2=-1\). At any rate, the representation may always be viewed as a "complex representation with some extra structure". For \(j^2=+1\), the structure map allows us to prove that there is a basis in which all the matrices are real; for \(j^2=-1\), we may prove that all the matrices representing the group elements may be organized into \(2\times 2\) blocks \(a+ b\sigma_y\) where \(a,b\in\CC\) and these blocks effectively represent \(1\times 1\) quaternionic entries \(a+jb\). Real numbers and quaternions are just "cherries added on a fundamental pie" and the fundamental pie is always complex. It's not smaller and it's not larger. At the end, this fundamental position of complex numbers boils down to the fundamental theorem of algebra: every algebraic equation of \(n\)-th degree has \(n\) roots. But this theorem only holds for \(\CC\). While the quaternions as components of a state vector were just "redundant" but non-lethal, octonions \(\OO\) would be lethal as matrix entries of operators because octonions are not associative (they break the rule \((ab)c=a(bc)\)) while the matrices – something identified with observables and evolution operators etc. – have to be associative e.g. because the evolution is associative. You could try to modify \(\CC\) in a different way – for example, you could try to pick all the "rational complex numbers". This would also be bad, at least in theories with a continuous time coordinate. In some not-quite-physical toy models, the amplitudes could happen to be rational for "rational questions" but it's an extra coincidence, or an "extra structure", and it doesn't hurt if you simply use wave functions in \(\CC^n\). Paradoxically enough, the most tolerable "number system" in which you could try to pick your state vector are deeply esoteric systems such as the so-called \(p\)-adic numbers. Quantum mechanics based on such numbers could obey some consistency rules but it would certainly be very different from the theories we use to describe Nature around us. Linearity of evolution operators Schrödinger's equation is linear in the wave function. This also implies that the finite-time evolution operators are linear:\[ \ket{\psi(t_1)} = U(t_1,t_0) \ket{\psi(t_0)} \] Could we make the future wave function depend on the initial wave function in a nonlinear way? We could try but we would quickly run into some serious trouble. What kind of trouble? 
Quantum mechanics and any other "at least remotely similar" hypothetical cousin of it describes the state "A or B", with some probabilities, as a superposition\[ \ket{\psi(t_0)} = c_A \ket A + c_B \ket B \] Assume that someone may "perceive" whether the state of the physical system at time \(t_0\) is A or B; the "A or B" information is a legitimate information that may split consistent histories. Without a loss of generality, imagine that she learns that the state is A. Such a state will evolve into \(c_A \cdot U(t_1,t_0)\ket A\) at time \(t_1\). Similarly for B. Now, it's important that her consciousness or the absence thereof remains undetectable. After all, no one has ever experimentally demonstrated whether women have consciousness much like men. ;-) And it's true for men, too. It's important that someone's "conscious" learning about the result of a measurement doesn't modify the system in any further way. The procedure needed to measure may impact the measured physical system of interest; however, the mental processes that this measurement causes remain subjective and inconsequential for the rest of the world. We don't want a qualitative "wall" separating conscious and unconscious objects or subjects. Observers are dull physical systems, too. We're really discussing "Wigner's friend" scenario here. It's important that Wigner is allowed to ignore the "A or B" realization and continue to work with the whole initial state \(\ket{\psi(t_0)}\) above. Because the evolution operator is linear, this state evolves to\[ \ket{\psi(t_1)} = c_A U(t_1,t_0) \ket A + c_B U(t_1,t_0) \ket B. \] That's great because these two terms (and it could work for many terms, too) are sharply separated from one another. Wigner may calculate the probability of a property at time \(t_1\) and there's a chance that the "A or B perception" at time \(t_0\) only has a tolerable impact on Wigner's predictions: it suppresses the history with A and B at \(t_0\) by their probabilities \(p(A),p(B)\), respectively. If the evolution operator were nonlinear, Wigner would get various terms that depend both on \(c_A\) and \(c_B\), e.g. that would be proportional to \(c_A^m c_B^n\) with some positive powers. These terms would be there and nonzero if he used the full wave function with both possibilities; but if he accepted that his female friend made a measurement at time \(t_0\), they would disappear because \(c_A^m c_B^n=0\) if either \(c_A=0\) or \(c_B=0\)! So he would get different predictions depending on the question whether his female friend "perceived" something or not. In other words, souls and ghosts would become physical and they would start to fly everywhere. This is lethal for a candidate theory of mutated quantum mechanics not only because we dislike souls and ghosts. It's lethal because the measurement – that would tangibly affect Wigner's predicted probabilities – could occur at huge distances, at a spacelike separation, and the influence proved above would be a genuine, detectable, faster-than-light signal that would demonstrably violate Einstein's special theory of relativity. We would enable not only souls and ghosts; we would enable superluminal voodoos. You should understand that this would lead to real trouble in the predicted phenomena which is a genuine, objective problem with a candidate theory; your unfamiliarity with a mathematical framework to describe Nature (quantum mechanics) is not a genuine problem, it is just your subjective, psychological problem. 
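Here is a toy calculation of the kind of trouble described above (my own illustration, closer in spirit to Gisin's well-known no-signalling argument than to the exact Wigner's-friend setup of the text): two different ways of preparing the same qubit density matrix — an equal mixture of \(\ket 0,\ket 1\) versus an equal mixture of \(\ket +,\ket -\) — give identical statistics after any linear unitary map, but a simple nonlinear map distinguishes them; and since which ensemble gets prepared can be decided at a spacelike separation, that difference is exactly a superluminal signal.

    import numpy as np

    zero, one = np.array([1.0, 0.0]), np.array([0.0, 1.0])
    plus, minus = (zero + one) / np.sqrt(2), (zero - one) / np.sqrt(2)

    def p_zero(psi):
        """Born probability of the outcome '0', i.e. the projector |0><0|."""
        return abs(psi[0]) ** 2

    def linear(psi):
        """An ordinary linear evolution: some fixed rotation (any unitary would do)."""
        th = 0.7
        return np.array([[np.cos(th), -np.sin(th)],
                         [np.sin(th),  np.cos(th)]]) @ psi

    def nonlinear(psi):
        """A toy nonlinear 'evolution': deform the second amplitude, then renormalize."""
        out = np.array([psi[0], psi[1] * abs(psi[1])])
        return out / np.linalg.norm(out)

    for evolve in (linear, nonlinear):
        mix_a = 0.5 * p_zero(evolve(zero)) + 0.5 * p_zero(evolve(one))    # ensemble {|0>, |1>}
        mix_b = 0.5 * p_zero(evolve(plus)) + 0.5 * p_zero(evolve(minus))  # ensemble {|+>, |->}
        print(evolve.__name__, round(mix_a, 3), round(mix_b, 3))          # equal only in the linear case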
So we have to keep the alternatives that may decohere from each other separated, even after some extra evolution in time; linearity is needed for that. Because the evolution operator \(U(t_1,t_0)\) is a linear operator on the Hilbert space, so is its \(t_1\) derivative near \(t_1\to t_0\) – and it's the Hamiltonian that enters Schrödinger's or Heisenberg's equations (up to a factor of \(i\hbar\)). So the Hamiltonian has to be a linear operator, too. Similarly, we may see that all other observables representing Yes/No questions have to be linear operators. These linear Hermitian projection operators \(P\) are operators of the type that Wigner's female friend actually applied at time \(t_0\) to simplify her further thinking about the system (the "collapse" of the wave function). If the operator were not linear, one would get a similar interference between the possibilities that should be mutually exclusive. The Yes/No operators have to be projection operators, \(P^2=P\) – yes, I started to drop the silly hats at some moment, I hope that you survived that (everything in Nature has hats and we should, on the contrary, invent bizarre accents for things that aren't operators, to emphasize that they're not fundamental physical quantities!) – because we want their eigenvalues to be \(0\) and \(1\). Also, we need \(P^\dagger=P\) because we want all the eigenvectors with the \(0\) eigenvalue to be orthogonal to (i.e. mutually exclusive with) those with the \(1\) eigenvalue. Yes/No operators must be represented by linear Hermitian projection operators. Similarly, operators such as \(X\) are linear Hermitian operators because they may be constructed out of the Yes/No operators by the following sums:\[ X = \sum_i X_i P_{X=X_i}^{0/1:\rm No/Yes}. \] Note that this formula doesn't really depend on any conventions in quantum mechanics. It just says that the value of \(X\) is the value of \(X_i\) of the only allowed (eigen)value of the coordinate for which the projector \(P_{X=X_i}=1\); the other projection operators are effectively equal to zero. Fine. We see that all observables with real measurable values are represented by linear Hermitian operators acting on a complex Hilbert space. Probabilities as squared amplitudes Born's rule tells you that the probabilities – the only kind of numbers that quantum mechanics may predict in the most general situations – are calculated from the complex numbers, the amplitudes, by squaring their absolute values. We have\[ p_i = |c_i|^2, \quad c_i\in \CC. \] That's obviously another favorite target of the rapists I mentioned at the beginning. Why wouldn't we use \(|c_i|\) or, more naturally, \(|c_i|^4\) or any other function of the amplitudes (perhaps not necessary a phase-independent function)? If you pick the fourth power, for example, you may surely get an equally good cousin of quantum mechanics – or mutated quantum mechanics – and our Nature has just picked the second power due to some random subjective choices, hasn't it? Not really. When you decompose a wave function into some components that are eigenvectors of \(L\)\[ \ket\psi = \sum_i c_i \ket{\ell_i},\quad L\ket{\ell_i} = L_i \ket{\ell_i}, \] we want to say that the probability that \(L=L_i\) is equal to \(p_i=|c_i|^2\), assuming that the basis of vectors \(\ket{\ell_i}\) is orthonormal. We need it for the total probability of all possibilities, \(\sum_i p_i\), to be conserved. So if it is 100 percent at the beginning, it is 100 percent at the end. 
This conservation law follows from \(H\) that is a Hermitian operator as we have already demonstrated; the evolution operators are unitary, \(UU^\dagger=U^\dagger U = {\bf 1}\), as a result. And what is conserved is \(\braket\psi\psi\) which may be proved to be equal to \(\sum_i |c_i|^2\) by pure algebra i.e. without any assumptions about physics. There can't be an equally general sum that is conserved in the general situation so the two sums must be functions of one another and \(p_i=|c_i|^2\) follows from that (up to the freedom to insert an illogical universal multiplicative coefficient into this relation). This argument holds for any Hermitian operator \(L\) and the corresponding decomposition of the state vectors into its eigenvectors. The probabilities have to be given by the squared amplitudes, otherwise the "total probability of all mutually excluding alternatives" can't be conserved. You could try to keep on struggling and proposing various creative loopholes. For example, you could say that this whole quantum mechanics is based on "unitary evolution operators" and the unitary groups just happen to have a bilinear (well, sesquilinear) invariant given by the complexified Pythagorean theorem. But there may be other groups that have higher-order invariants, right? Well, there exist groups with higher-order invariants but these invariants aren't guaranteed to be positive so they can't play the role of probabilities. This is enough to kill these possibilities but there are actually many other ways to kill it. We simply want simple enough state vectors – energy eigenstates – to evolve simply. The change of the phase with time is what this change has to look like. There are various other ways to attack this loophole but I don't want to spend too much with it. You should just realize that in proper quantum mechanics – whatever the Hamiltonian is: non-relativistic quantum mechanics, quantum field theory, string theory, whatever you like – pretty much any "physical transformation" of the physical system (evolution in time, translation in space, rotation, parity, and so on) is expressed by a unitary operator on the Hilbert space. If you want to change something about this rule, you are really building an entirely new theory from scratch. Fixing the norm of the state vector along the way Another group of "anything goes" rapists could propose a universal cure for all the non-unitary, nonlinear, and other theories. They could say that the only "constraint" we faced was the condition that the sum of probabilities had to remain 100 percent. Can't we just rescale the wave function – that may evolve according to any non-unitary, non-linear equation of motion – at each moment to manually guarantee that the sum of probabilities remains equal to 100 percent? We may do it but we will run into conflicts with other basic physical or logical requirements that these rapists might be willing to overlook but that are paramount, anyway. What do I mean? Imagine that you start with \(\ket{\psi(t_0)}\) and evolve it to\[ k(t_1,t_0)\cdot U(t_1,t_0) [ \ket{\psi(t_0)} ] \] where I wrote the ket vector as an argument in the square brackets to indicate that the operator \(U\) may be nonlinear. Also, the added coefficient \(k\) is there to keep the total probability equal to 100 percent according to your own formula for the total probability, one that may differ from Born's rule. That may look fine to you but we resuscitate ghosts and voodoo again. 
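Before the argument continues, here is a toy version of this bookkeeping. The non-unitary, nonlinear "evolution" below is invented; the point is only that the rescaling factor \(k\) needed to bring the total probability back to one equals one for each branch taken alone but not for the superposition, i.e. it unavoidably depends on \(c_A\) and \(c_B\).

```python
import numpy as np

ketA = np.array([1, 0], dtype=complex)
ketB = np.array([0, 1], dtype=complex)

def evolve_nonlinear(psi):
    # invented non-unitary, nonlinear rule: each amplitude's magnitude is squared
    return psi * np.abs(psi)

def k_factor(psi):
    # rescaling needed to restore sum_i |c_i|^2 = 1 after the "evolution"
    return 1.0 / np.linalg.norm(evolve_nonlinear(psi))

cA, cB = 0.6, 0.8
print(k_factor(ketA), k_factor(ketB))      # 1.0  1.0 for the separate branches
print(k_factor(cA * ketA + cB * ketB))     # not 1.0: k depends on (cA, cB)
```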
The required "renormalization constant" \(k(t_1,t_0)\) actually has to depend on the initial state as well if it's able to preserve the total probability in the general case – it was fraudulent to suppress this dependence. And if the initial wave function describes the "A or B" state, this \(k\) will inevitably depend on \(c_A\) and \(c_B\) again. The possibilities "A or B" will refuse to split in the final expression for \(\ket{\psi(t_1)}\). Again, it will be important whether Wigner's female friend at \(t_0\) "eliminated" the other possible outcomes or not. The eliminated outcomes will still affect the outcomes for the moment \(t_1\) that remain viable; equivalently, consciousness will become physically measurable and it will violate the laws of special relativity again. So it's important not to attempt to "renormalize" the formulae for probabilities by additional ad hoc fudge factors. One may argue that such fudge factors would damage the very logical structure of the theory but even if you were OK with it, you will ultimately see that your alternative theory allows the female observers to send superluminal signals by the "power of her will" (a superluminal form of telekinesis combined with telepathy) and violate the rules of relativity which seem to hold, according to observations and a robust symmetry principle extracted from all these observations. So the probabilities have to be what they are according to the unadjusted formulae and because their sum has to remain equal to 100 percent and because the bilinear invariants are the only universally non-negative (for all states) invariants one may find for general classes of transformations, it follows that all "physical transformations" are encoded by unitary linear transformations on the Hilbert space and the squared complex amplitudes have to be interpreted as probabilities. I feel that I have forgotten some other "popular" ways to rape quantum mechanics. But it's been enough so far and if I recall what I have forgotten, I will update this blog entry. Add to Digg this Add to reddit snail feedback (60) : reader JR said... Hi Lubos, If we think of the wave function as a 2 component spinor for the real and imaginary parts then I can write the Schrödinger eq. as follows: \end{pmatrix}=\begin{pmatrix}0 & -\mathcal{H}\\ \mathcal{H} & 0 which looks real but messy. Presumably the physics should be the same. reader Luke Lea said... "the equation says that only the phase is changing with time" Dear Lubos, A naive question, but does this have anything to do with so-called gauge invariance? I've read (or mis-read?) that changes in phase don't effect anything measurable the same way that rotations or translations in space don't, and that this is what is meant by local symmetry. I've also read that calling it a symmetry is something of a misnomer, that redundancy would be a better word, which (I am guessing) means that information about the phase of a quantum state (elementary particle?) is superfluous when it comes to predicting the probabilities of a measurement of its state. reader Luboš Motl said... Dear Luke, it's hard to deal with such questions. Does the changing phase of the wave function have "anything" to do with gauge invariance? Well, Yes, No, it depends on what you count as "anything". Most importantly, the phase of the wave function is *not* gauge invariance. There are lots of reasons why they're not the same thing in general. 
First, gauge invariance exists even in classical, non-quantum physics; wave functions and their phases only exist in quantum mechanics. With this being said, the phase of the wave function may emulate the phase of a classical field if the classical field becomes quantum and is used to create a particle, via creation operators. So then these two phases are related. However, this relationship still doesn't mean that the change of the phase of the wave function is a gauge invariance. For example, as you notice, we must physically identify states related by gauge invariance; we must declare that their difference is physically a zero vector of the Hilbert space. But if we identified wave functions differing by a phase in this mathematical way, it would be like identifying each of them with zero. There would be no states left. So while state vectors with different phases are equally good for physics, we must always treat this as a global symmetry, not a gauge symmetry, otherwise we eradicate the whole Hilbert space. Also, the overall phase of the wave function is just one phase. Gauge invariance typically allows us to change infinitely many phases, one phase per each point of the space or spacetime. So they're not the same thing. Please don't take it too personally but the degree of confusion behind your question "is A related to B" is so deep that you should start with learning what the damn A and B mean. If you have no clue what they mean, and all the data suggests that you have no clue, then it's very likely that convoluted questions such as "do A and B have anything to do with each other" end up being completely unconstructive and not leading to any useful answer that you may understand. It's like asking "does this chromosome you talked about have anything to do with stomach?" Yes, no, what of it. There are surely some relationships between chromosomes and stomachs but the question makes it pretty clear that the author of the question probably has no clue what either of these two concepts mean, otherwise he wouldn't ask this strange question. If it's so, why doesn't he start to learn the basic things about the chromosomes and stomachs first, before trying to construct would-be advanced and would-be creative questions involving both concepts - that are in reality completely silly? reader Luboš Motl said... Hi JR, 1/2 is written as \frac 12 in TeX! Your very way how you use pmatrix to write this simple thing, a fraction, suggests that you're using TeX to make your comment look wiser than it is. Why not write it in words? Well, DISQUS doesn't do any TeX, anyway. ;-) Yes, the imaginary unit may be "replaced" by the 2x2 matrix ((0,-1),(+1,0)). This matrix squares to -1 (times the unit matrix), too. It doesn't really look messy. It's the same thing. You just used a different notation for the imaginary unit. What's important is that the components of the wave function will always come in pairs - we call the members of the pair "real and imaginary part of the amplitude" - and the equations will satisfy the complex structure behind these amplitudes which means that all the operators in your equations will commute with the matrix ((0,-1),(+1,0)) because this matrix represents "i" and C is a commuting field. Your way of writing it obeys it which is why you haven't gotten rid of the complex numbers in any sense. You just obscured the structure of the equation but it didn't stop being an equation for a complex wave function. 
What's critical here is that you can't "relax" the special properties that follow from the complexity - you can't replace the equation above by one that *couldn't be* rewritten in terms of complex components. reader anon said... Dear Lubos, Is it possible to reformulate QM with real numbers only, and Born's rule substituted by p_i=c_i, by introducing unobservable ghost-states with negative amplitudes? Is it true that Dirac tried to introduce such a notion, or maybe this is nothing more than a fairy-tale? reader Luke Lea said... My apologies, Lubos, for asking a dumb question. In my amateurish way and with a rickety old brain I have been trying to better appreciate QM. And I've learned a lot (well, a little bit) by reading you, from Leonard Susskind's series on quantum entanglement, most recently by watching Ramamurti Shankar's video lectures on OM which are on YouTube. Maybe one of your readers -- Dilaton perhaps? -- could point me to a place where Leonard Susskind discusses these issues. I know I'm a fool but at least I'm a humble fool wanting to learn, and I hope you will suffer me in small doses. (Besides -- now, please, don't hit me! -- when it comes to economics I think I may know a thing or two you don't -- which is not to say I don't learn things from you too.) reader John Smith said... Lubos, what about the fact that Bohmian Mechanics exactly reproduces the predictions of quantum mechanics? reader jy said... Hi Lubos, Could you explain why this statement is true? > these two state vectors labeled by the subscripts 1,2 will evolve independently from each other. Because $i$, $j$, and $k$ don't commute and the Schroedinger equation involves an $i$, I thought a state of type 2 would mix with a state of type 1 after evolution. I don't think this affects your conclusions, though. reader Friv 2 said... What the hell makes a gauge theory? Why is Maxwell's electromagnetism a gauge theory but Newton's mechanics is not? reader Luboš Motl said... Why the hells? Why don't you try to find at least one sentence of a definition e.g. at instead of spreading hells? Your question is analogous to the question: What the hell makes a mammal? Why is squirrel a mammal while an eagle is not? reader Luke Lea said... Sorry, I think I'm responsible for that hell. Was just trying to be colloquial. reader Luboš Motl said... Dear Anon, this is kind of the same question, fundamentally, as JR's question below. When we say that quantum mechanics has to be complex, we don't mean that one can't obscure this fact. One may always obscure this fact or any other fact. We may say that [a particular woman] is a woman but one may obscure this by a dildo. But she's still a woman. In the same sense, quantum mechanics' being complex is some well-defined intrinsic property that doesn't depend on makeups, dildos, and reformulations. You may reformulate quantum mechanics in any way but if it's still a physically equivalent theory, one may still show that there is a "complex structure" on the space of possible states. There is a way to multiply the state vectors by complex numbers. You may write their coordinates as 3 horses on one side of a coin and 4 cows on the other instead of 3+4i, but I still know how to use the complex number "reformulated" in this way. It is still complex despite the "makeup" and attempts to obscure it. Now, concerning your particular "new rule", the probability has an invariant physical meaning. 
One can't redefine it; it is defined by something that may be operationally measured - by repetitions of the same experiment. So "p_i" means what it means. If you say that p_i=c_i, it means that you want to parameterize the states by what we call |c_i|^2 and use a new (redundant, deliberately confusing) symbol for it. Great but then you also need to remember the phase that doesn't affect the absolute value. You may surely write all complex amplitudes in the polar form, r * exp(i.phi). But that doesn't affect the fact that all the numbers will still be complex and quantum mechanics will still be linear although the people using nonlinear parameterizations such as polar coordinates will have a harder time to understand this fact. But it's their problem. The fact that it's harder for them to see that the operators etc. are linear doesn't mean that the linearity is invalid. Concerning negative probabilities, a reformulation that survived because it makes sense is Wigner's quasi-probabilistic distribution which is a way to rewrite/encode density matrices as functions of classical commuting phase space coordinates. This distribution is the closest quantum mechanics' definition of the "probability distribution on the phase space". However, it may be negative in small regions and this allowed negativity effectively enforces the uncertainty principle and all the wonderful new features of quantum mechanics. Dirac and others may have emitted lots of other speculations but nothing else meaningful linked to "reformulations using negative probabilities" came out of it. So we just ignore it. Physics isn't like religion where a Moses can say an arbitrarily bizarre sentence that makes no sense and his followers spend thousands of years by attempts to decode what he meant. He just didn't mean anything that made sense at that moment, otherwise it could have been formulated meaningfully. There are also lots of negative probabilities in physical theories that aren't quite consistent etc. So for example, Dirac and others tried to formulate theories of extended objects such as membranes and they found that they would allow the probabilities to be negative - "ghosts" - which is bad. That's why they abandoned it. But they overlooked string theory along the way - a theory of strings may be completely meaningful because the ghosts may be tamed by new gauge symmetries. Needless to say, they therefore overlooked the whole structure and many extra things they were just vaguely envisioning but something similar turned out to be possible. But many particular claims they did were still wrong. reader Rusty SpikeFist said... You might be interested in Kapustin's recent paper, which addresses some of the same questions in a mathematically precise way: reader Luboš Motl said... Fun paper! reader Luboš Motl said... No prob! Still, sometimes devils need to be regulated. Here it looked like a leftist rally complaining about an essential thing (gauge theory in this case) and it's always right to shoot into such rallies. ;-) reader Luboš Motl said... Dear jy, (e+fi) (a+bi + cj+dk) = ((ea-bf)+i(eb+fa)) + (j(ec-fd)+k(ed+fc)) Note that a,b only appear in the first part of the final result while c,d only appear in the second half. They never mix up and in fact, the rules of multiplication of a quaternion by a complex number (e+fi) obeys the fact that the quaternion is composed of two complex components, one for the 1,i units and the other for the j,k units. It doesn't matter that i,j and i,k don't commute. 
In this picture, j and k is always on the right side from the "i", so we never see what the product in the other order actually is. That's really the point: the operators are always acting on the (quaternionic) ket vectors from the left side. reader Luboš Motl said... This "fact" is completely false. What is true is that one may write down a classical theory with hidden variables in which the probability distribution for a single particular quantity, the position X, will be the same as predicted by quantum mechanics if it was the same at the beginning. But this isomorphism breaks down if we switch from this toy model of spinless nonrelativistic particle to any other quantum mechanical system - particles with spins; quantum fields, strings; anything else - or if we measure different observables than X and we surely measure different ones most of the time, usually various terms in the energy operator. The need of the Bohmian theory to separate the observables to those that really exist and those that must be faked is really a proof of the inconsistency of the theory, see More importantly, we can't ever make a functioning theory out of "Bohmian Mechanics" because 1) the pilot wave in this picture is a real classical wave and needs to be "swept under the rug" after the measurement. 2) But there is no "broom" to do so that could be compatible with basic principles of physics such as relativity - any "broom" would mean a superluminal action at a distance. de Broglie-Bohm pilot wave theory may have been an admirable attempt to rewrite a toy model of quantum mechanics in a classical framework but it's incompatible with everything that was discovered after the mid 1920s and it's really incompatible with principles that have been known since 1905, too. reader jy said... Thanks for your answer. You are absolutely right. That was a stupid question. reader Mephisto said... This blog explains why current quantum mechanics has to be formulated on complex unitary Hilbert spaces, given the current axioms of quantum mechanics and it is right. But all the arguments given here do not rule out the possibility, that some other axioms and a different theory could be found that would equally describe the world. QM as currently formulated might not be the ONLY POSSIBLE framework to describe the world. There might be others. Maybe we find some new axioms and derive a different theory with different mathematical structure from them. The only constraint is that it has to be consistent with all experimental results. The wave function itself is just an axiom. There might be some other theory that doesnt have the wave function but some other object. You cannot prove that QM is the only possible framework, because you would need to prove that the axioms of QM are the onnly possible axioms. And axioms are by definition unprovable, but rather confirmed by experience and observation reader Jeff Krantz said... I'm not sure how your understanding of QM is (and if someone were to ask me about "guage invariance", btw, it'd be a short conversation), but as a student/layman who has had a 'popular' interest in such things for many years, I would suggest Albert's "Quantum Mechanics and Experience." 
It's no text book, and it doesn't give you the REAL maths involved, but it explains QM from the ground up, and presents simplified versions of all the key tools/formalism/notation IN MATHEMATICAL FORM, which to me makes it significantly different from your standard '1st-timers' popularization, where everything is explained through ANALOGY, instead. So in other words there are equations, and he takes you through how QM systems are analyzed and manipulated through the methods and conventions of the field. I'm going to assume you know a lot of that stuff if you've listened through stuff like the Susskind lectures, but Albert's book gave me the tools to 'play around with' the ideas mathematically as he (and others since) discuss the more philosophically interesting implications of QM's revelations about the universe around us. [Actually, on a quite ironic note, I'm now remembering that one of the concessions Albert makes for the elementary reader is abandoning use of complex numbers all together when first presenting the state space. Perhaps Lubos or someone else familiar with the book could critique this approach to keeping things accessible and uncluttered.] But on the original subject of bringing oneself up to speed in the sciences, I keep discovering that biting the bullet and engaging some real academic material (textbooks) has again and again ended with me asking myself, "Why the hell did I wait so long?!" and telling myself, "Next time I'll surely be smarter.." ...And then I end up reading blogs/research papers/message boards/etc. for 6 months intensely on complexity theory, or neuroscience on brain metabolism before finally hazarding a crash course on the subject ;-), and suddenly noticing the incredible amount of deserving words/terms/concepts that I was somehow "reading through"--often without even realizing that I was missing anything! [And just a reminder: google is your friend when it comes to textbooks...] reader Mephisto said... reader Luboš Motl said... Right, I introduced the purpose of this particular blog entry in the same way... It "might not be" except that it is. reader Shannon said... My no-math brain seems to draw from analogies systematically. QM vs classical physics looks like accountancy. The whole account must strictly balance with the existing. With QM the Universe Company is safe from any bankruptcy. reader Bonusje said... Imaginary is ONLY possible in math and fantasy! Qt does NOT exist OR ELSE Light of the hubble deepspacefields WOULD HAVE proven any existence of quantummechanics and heisenberg. 13 BILLION YEARS of flight through millions of gravityfields and time WOULD have had AND MUST had an accumulating einsteinian and accumulating quantumdistorting influence on the light from those Galaxies. NONE is seen! Perfect Imagery! So also deepspace Proves einstein is History and hoax! Besides quantummechamics calculates for example orbitdistorsions at satellites BUT WITH Newton ANY of them CAN ALSO BE Explained AND Calculated. STOP whoreshipping old fossiles AND USE YOUR SELF For a Change! Btw NONE of ANY qt claim IS found in Nature AND ALL qt including higgsbosons ARE fantasised AND BASED ON that qt. And higgsbosons are NO particles. Since I had No opposition at Does This Mean You Agree with My Conclusions? reader Mephisto said... The history of physics (or mathematics) shows that axioms are not some absolute truths, although they might be held as such over many centuries. 
It took 2000 years to realize that the Euclid axioms are not some self-evident truths but that one might modify them to arrive to non-euclidean geometries that can serve as a basis for physical theories. The axioms of Newtonian physics are only 3 (Newton laws) and are pretty straight-forward (intuitive). They were held as sacred truths for over 300 centuries. The axioms of QM are maybe 80 years old and they are not as intuive and self-evident. I mean there is nothing intuitive about complex-valued coefficients that have to be squared to get the probabilities of measurements. There is nothing inuitivie in the Schrödinger equation (Feynman himself wrote in his Lectures, that SE was "born in the head of Schrödinger" and that there is no justification for it) I still consider QM to be the highest achievement of science. It is just brilliant that physicist were able to find these axioms and formulate a theory that describes all the experiments so wonderfully. reader anna v said... Mephisto, in your exposition you are ignoring a very big fact: Euclid's axioms still hold for euclidiean spaces, Newton's axioms still hold for classical mechanics. It is the field of definition that changes, and new fields need new axioms. New theories for different realms should blend with the old ones at the boundaries, and this is true with quantum mechanics and classical mechanics. What is the new realm you envisage for quantum mechanics where there will be data that will require new axioms? At the moment the deeper we go into the particle world as far as sizes go and the higher as energies quantum mechanics reigns. Even with the air showers of cosmic rays which go where the energies reach 3*10^20eV nothing unusual is seen. If in the future, some ingenious setup can deliver us in the lab such energies, and if discrepancies are found with QM, a big if, still any new theory and its axioms should merge smoothly with QM and its axioms in the realm where it has been validated by an enormous amount of data.. reader Trimok said... I think that the linearity in Quantum Mechanics and linearity in Special Relativity (Poincaré transformations) have the same origin, that is the invariance of the non-correlation of systems. More precisely, take two independent systems S1 and S2. I can consider the whole system S = (S1, S2) Thus we consider an additive quantity A (like information, energy/impulsion, angular momentum, etc...) So we have A(S) = A(S1) + A(S2) So we can make time evolution in Quantum Mechanics or Poincaré transformations in Special Relativity, but all these transformations do not change the fact that S1 and S2 are independent, it is a physical fact, independent of the point of view of a particular observer or repository, (including translations in time). So, For instance, after a Poincaré Transformation, you must have A'(S) = A'(S1) + A'(S2) So the only possiblity is that A'(X). is a linear function of A(X), for all sytems X reader Dilaton said... This is a nice reminder about why QM exactly has to be what it is :-) And it contains some cool new to me issues that I have not yet seen before, and that give me something to think about, for example the thing bout real and quaternionic representations being interpretable as "ordinary" complex representations with additional sturcture maps etc ... reader Christoph Gärtner said... 
I'd like to make the case that while necessary, complex numbers are not as fundamental to quantum mechanics as one might think and I argue that looking beyond quantum mechanics does not necessarily make you a crackpot (though that possibility of course remains): First, the fact that the commutator of Hermitian operators is anti-Hermitian is somewhat misleading: The algebra of observables is a _real_ Lie-algebra, and obviously not a complex one (if H is Hermitian, iH cannot be). It's just that unitary representations highlight the wrong Lie-bracket, ie the commutator instead of Dirac's quantum Poisson brackets. Second, quantum mechanics without complex numbers makes a lot of sense geometrically. Remember, the quantum-mechanical phase space is not the complex Hilbert space, but rather its projective version, which is a (real) Kähler manifold. It comes with three compatible structures - a Riemannian metric, a symplectic product and an almost-complex structure. However, any two of these are enough to define the third one. The symplectic product gives the dynamics via Hamilton's formalism, the metric gives probabilities. The almost-complex structure on the other hand has (as far as I know) no fundamental role, even if it is necessarily present if we require symplectic product and metric to be compatible. Third, using an axiomatic approach like Kapustin's paper is worthwhile to figure out why exactly the quantum and classical world are incompatible, but ultimately the fact remains that classical physics is a perfectly fine theory even if it violates a set of axioms tailored for the quantum world; personally I do not see what makes quantum mechanics the more natural choice except for the fact that reality works this way at a more fundamental level. Fourth, the Pawlowian LOLWHAT whenever someone questions quantum mechanics is a bit premature: Once quantum mechanics has been around as long as classical mechanics has, we can talk. I do not see why quantum mechanics can't have underpinnings that look decidedly non-quantum. Yes, there are some no-go theorems, but some of them might not turn out as severe as one might think (after all, the second law of thermodynamics and the associated arrow of time didn't stop us from coming up with time-symmectric foundations), and there's still the possibility that reality is only approximately quantum, same as the real world only obeys the laws of thermodynamics in the thermodynamic limit. There's a lot you can do if the sub-quantum theory operates at or even below Planck-scale levels. Sadly, I don't expect that we'll find an underlying theory during my lifetime and perhaps even not ever, which of course doesn't imply that such a theory does not exist. reader Luboš Motl said... Dear Christoph, the algebra of observables may be interpreted as an algebra over reals or over complex numbers. The latter is *necessary* if the operation is the commutator because the commutator of two anti-Hermitian operators *is* demonstrably anti-Hermitian, and the blog entry contains the elementary proof. Your questioning of this elementary fact is exactly as dumb as if you questioned 1+1=2. The algebra of observables must also be considered a complex algebra - and not a real algebra - if we mean the algebra with the operation "product" and not a "commutator" because general products of observables, even Hermitian ones (e.g. XP), are neither Hermitian nor anti-Hermitian in general. 
It's also nonsensical to call the Hilbert space or the "projective space" constructive out of it as the "phase space" of quantum mechanics. Quantum mechanics isn't a classical theory so it doesn't have a phase space - and the quantum counterpart of the phase space is actually neither the Hilbert space nor its quotient but a basis of the density matrices (the density matrices themselves generalize the probability distributions on the phase space in classical physics). There isn't any natural universal Kahler metric on the Hilbert space (or its projection version) except one that boils down to K = . This will never happen. Quantum mechanics was developed 2+ centuries after classical physics, so it will always be historically 2+ centuries younger. Does that mean that we will never be allowed to point out that stupid comments denying basic insights of modern physics such as yours are stupid? It was possible to point out this fact already in the mid 1920s, shortly after QM was discovered. reader Luke Lea said... I find the opening sections of this paper on gauge theories to be within reach -- now that I've viewed Shankar's Yale lecture series on electromagnetism and QM: Hope I'm not being misled by either of these sources. Thanks for your trouble. reader Christoph Gärtner said... It's tiresome, let me just say that pretty much every sentence in your comment is either demonstrably wrong or morally wrong Let's stick to the demonstrably wrong things. I claim the following: * Dirac's quantum Poisson bracket makes the space of observables into a real Lie-algebra * the space of observables cannot be made into a complex vector space without introducing non-observables * the actual phase space of QM (in the Schrödinger picture) is the projective Hilbert space * the projective Hilbert space is a principal bundle * both Hilbert and projective Hilbert space are Kähler manifolds and Schrödinger dynamics on both of them can be realized via Hamiltonian vector fields reader Claes Johnson said... Lubos: You seem to be dominated by magical thinking viewing complex numbers as carrying deep physics, while a complex number is just a pair of real numbers, allowing quantum mechanics to be expressed equally well using real numbers, and the linearity of the Schrödinger equation as reflecting a fundamental aspect of reality, while there is no reason to expect physics to be linear. From where did you get your conviction that the basics of quantum mechanics is given once and for all? reader Luboš Motl said... Dear Claes, I noticed that this concept is extremely difficult for eternal laymen but it's the other way around. Complex numbers are fundamental - they enter the fundamental theorem of algebra; fundamental theorems of everything; representations of groups, as explained in my newest blog entry just posted. Real numbers are just complex numbers constrained by an extra reality constraint which makes the structure "more adjusted" and less universally applicable. From indisputable logical arguments applied to unquestionable empirical evidence. reader Claes Johnson said... The big trouble with QM is the high space dimensionality of the wave function, which defies physical interpretation because physics is 3d. The way out of this paradox, which is the paradox that made Schrödinger abandon QM, is to give the wave function a statistical interpretation. But statistics is not physics, because statistics is what accountants do at an insurance company as business, and business is not physics. reader Luboš Motl said... 
This is a sequence of prejudices and animal instincts. First of all, physics isn't 3D but 10D or 11D, you forgot to count not only the nice tiny 6 or 7 curled-up dimensions but even time. But this is still just the geometry that decides "where" events take place. One must also describe which events and this inevitably brings many more "dimensions" of new types. The laws of physics makes probabilistic predictions only. It wasn't obvious before the mid 1920s but it's been clear since the mid 1920s. Whether Nature's inner workings resemble insurance companies or bakers or anything else or - most realistically, nothing humans know well - is up to her. reader Alex Indignat Monras said... reader Gordon Wilson said... Anton Kapustin's father is an absolutely great composer who straddled the jazz-classical boundaries, Lubos....listen to some of his stuff-- Anton's paper looks interesting. BTW yet another great didactic blog post that I will have to read more carefully. "They" should get you to teach a Coursera course. reader Gordon Wilson said... P.S. I am constantly amazed at the rate of production and quality of your blog posts. reader Luboš Motl said... Wow, this Kapustin jazz is amazing. I am no true fan of complicated abstract jazz - i.e. jazz except for several well-known melodic themes - and this sounds somewhat close to what I listened to at Klaus' Jazz-on-the-Prague-Castle concerts ;-), but with the same complex music played by a single pianist, it looks even more impressive and controllable, kind of. Quite generally, I think it's a pity that music composers aren't really famous these days. We rare know who is the actual composer of a song, for example - the interpreter is far more well-known. And it's true even for folks like me who would be interested: I don't know current composers, either. Some people who are not just showmen and showbabes, like Lady Gaga or Habera, enjoy my respect but for a vast majority of the music, and some of it is very good, I don't know who composed it. reader Claes Johnson said... I am just applying principles of rational thinking or "instincts" if you prefer that terminology, and 11 space dimensions (with 6-7 curled up) is beyond rationality. It does not help to scream your message into my ear; it only sounds even more distorted. To believe that Nature works like an insurance company computing mean values, is an illusion without other purpose than confusion and rip off of honorable people. If you believe in Einstein, then you cannot adore statistics, and if you don't believe in Einstein, then you have a problem as a physicist. reader Luboš Motl said... No, Claes, there is absolutely nothing rational about your thinking. It's purely about random guesses, rationally unsubstantiated prejudices, and childish demonization and humiliation of all the properties of Nature that happen to disagree with your medieval image of the world - which is pretty much all Nature's properties. reader Claes Johnson said... Very interesting! reader Bonusje said... Hearhear WHO IS banning from Logical Dispute??? reader Knut Holt said... What is needed mathematically depends upon the way mathematics itself has been formulated. Complex numbers are abstract concepts that must be given concrete interpetations when they are applied. reader kizi3 said... good article for learning for this ... thanks reader CharlesJQuarra said... "If the evolution operator were nonlinear, Wigner would get various terms that depend both on cA and cB, e.g. 
that would be proportional to cmAcnB with some positive powers." What kind of terms are you referring to? You still have two eigenstates. The female friend is the only one that interacts directly with the quantum system, which is a two-state system. When Wigner interacts with his friend, he is interacting with a two-state system as well. Am I missing something? reader Luboš Motl said... My text is absolutely explicit about "what terms I am referring to". I am referring to terms in psi(t1), i.e. U(t1,t0)psi(t0). If the evolution operators U are nonlinear, then the coefficients defining psi(t1) won't be linear in the coefficients defining psi(t0). reader CharlesJQuarra said... Ok the term eigenstate might not have been entirely appropiate, but even if evolution is nonlinear you still can pick a basis. In this case what I meant is that the basis is still given by two possible states A and B. I'm just trying to write the expression where you get those c^m_A * c^n_B terms, but I'm not sure how to do that reader Luboš Motl said... You may pick a basis in a linear space but if the "operator" is nonlinear, the basis is completely worthless and cannot be used for anything - it really shouldn't be called a basis. The c*c terms are there simply because one may Taylor-expand a general function. If it is linear, there are only linear terms in "c". If it is not linear, there will be c*c and all other terms, too. Judging by your bizarre, question, you apparently don't understand what the word "linear" means (it *means* that there are also terms that are not just "c"), a fact that makes me wondered why you are unsuccessfully trying to participate in this discussion at all. reader CharlesJQuarra said... Well, you usually say things that are true, but you just said something completely false, and I need to call you on this. A basis is always possible and is completely unrelated to linearity, just in the same way that a solution to a nonlinear equation can be expanded in a fourier basis just in the same way that a solution to a linear problem. The equation over the expansion might not reduce to a linear matrix just like in a linear equation, but it is far from being 'worthless' as you said. Now, to your other comments; "the c*c terms are there simply because you might Taylor expand a general function of coefficients c" A Taylor expansion requires some variables over where the sucessive derivatives are being performed. But you didn't precise what variables is this Taylor expansion happening. I would dare a guess that you are referring to time t1 as independent variable, and evaluating the Taylor series at t0. Now, if that is what you meant (you didn't specify it) I don't see why would the nonlinear evolution be special in the sense of having nonzero terms of high order, since any linear Hamiltonian (even a free Hamiltonian) when exponentiated, will make an infinite Taylor series on derivatives of the quantum state function (which is *always* expressed as coefficients on a chosen basis, regardless if the evolution is nonlinear or linear) reader Luboš Motl said... Return to the undergrad courses of linear algebra, aggressive hack, "Basis is completely unrelated to linearity"? Holy crap. A basis is a set of vector in a *linear* space such that every element of the *linear* space may be written as a *linear* combination of the basis vectors. The action of an operator on the basis vectors only determines the behavior of the operator on the whole space if it is a *linear* operator. This was your last comment here. 
reader John Archer said... Banned, eh? I guess that would be the eigenstate resulting from application of the Motl annihilator operator? But Loboš, is it positive-definite? I think we need to know. OK, I can see it's definitely positive for you but is there any risk of quantum-tunnelling back out and stealing your cornflakes? :) These technical questions are important! By the way, are they Kellogg's, or the supermarket's own brand? I'm thinking of changing to Sugar Puffs as precaution against cornflake burglary just in case. reader charles quarra said... For some weird reason you enjoy distorting other people's words as a way to disqualify them, but that doesn't make you any less foolish. One thing is the differential equation which describes the evolution of solutions, which might be linear or nonlinear, and another thing entirely different is the *space* of functions where the solutions exist. The space of functions that solve a nonlinear differential equation is the same space of functions that solve a linear differential equation, which is the Hillbert space. When I'm talking about basis I'm referring to the space of functions where there nonlinear solution is expressed. Saying that a 'basis' do not exist in a space of functions where a solution of a nonlinear diff equation might be expressed makes you sound like you don't believe in Fourier expansion. But of course that is not what you meant, your only intent seems to be keep distorting what I'm saying, and now you moved the subject to a discussion about functional analysis instead of answering the original question about what Taylor-expansion are you talking about. So can we skip this silly debate and go back to your explanation about the Taylor expansion? you were about to explain how the "c*c" terms come to be reader Charles said... Lubos , apart from any experimental input , Is quantum mechanics inevitable ? Classical mechanics can be derived from quantum mechanics but quantum mechanics can't be derived from anything else . reader Luboš Motl said... Charles, according to all the current evidence, quantum mechanics is indeed the deepest framework which is not an approximation of an even deeper one, so the hierarchy from classical physics to QM ends there. I think that the words "apart from any experimental input" is pretty much a contradiction. *Every* proof of correctness or inability of a theory in physics has to use some experimental as well as some theoretical arguments. If your word "apart" means that one may ignore everything about the experiments, including the existence of observables, then it's probably impossible to support QM. But the basic framework of QM really depends on a very small number of facts about the observable world. reader interlinside said... this is perfect. thank reader Dam Ma said... Thank you for for sharing so great thing to us. I definitely enjoying every little bit of it I have you bookmarked to check out new stuff you post nice post, thanks for sharing. reader lovethu said... Many thanks for sharing this, I will share with you their references. Many thanks. reader IdPnSD said... Heisenberg has given the mathematical proof of his uncertainty principle in his book - “Heisenberg, W., The physical principles of the quantum theory, Translated in English, Eckart,C. & Hoyt, F.C., Dover publications, University of Chicago, (1930).” The book shows that the proof is based on Fourier Transform. 
He makes two assumptions: (1) position and momentum are related by Fourier Transforms, and (2) he ignores the fact that the Fourier Transform uses infinity. There is no reason to believe that the position and momentum of a particle in nature will obey the Fourier Transform. There is no experimental evidence that suggests this relationship. For more details of the proof you may want to look at the book.
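For readers who want to see what the Fourier-transform relation invoked in this comment actually gives, here is a minimal numerical sketch. The wave packet, grid and units (\(\hbar=1\)) are all invented; the code just computes the standard deviation of position from \(|\psi(x)|^2\) and the standard deviation of momentum from the squared magnitude of the discrete Fourier transform, and prints their product.

```python
import numpy as np

hbar = 1.0
N, L = 4096, 80.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]

sigma = 1.7                                        # invented packet width
psi = np.exp(-x ** 2 / (4 * sigma ** 2))
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)      # normalize in x

def spread(values, density, weight):
    mean = np.sum(values * density) * weight
    return np.sqrt(np.sum((values - mean) ** 2 * density) * weight)

sigma_x = spread(x, np.abs(psi) ** 2, dx)

# Momentum-space wave function via FFT (p = hbar * k); phases drop out of |phi|^2.
phi = np.fft.fftshift(np.fft.fft(psi)) * dx / np.sqrt(2 * np.pi)
p = hbar * 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(N, d=dx))
dp = p[1] - p[0]
phi /= np.sqrt(np.sum(np.abs(phi) ** 2) * dp)      # normalize in p as well

sigma_p = spread(p, np.abs(phi) ** 2, dp)
print(sigma_x * sigma_p / hbar)                    # ~0.5 for a Gaussian packet
```

For the Gaussian chosen here the product comes out at about one half of \(\hbar\); broader or narrower packets simply trade one spread against the other.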
Quantum potential

The quantum potential is a central concept of the de Broglie–Bohm formulation of quantum mechanics, introduced by David Bohm in 1952. Initially presented under the name quantum-mechanical potential, subsequently quantum potential, it was later elaborated upon by Bohm and Basil Hiley in its interpretation as an information potential which acts on a quantum particle. It is also referred to as quantum potential energy, Bohm potential, quantum Bohm potential or Bohm quantum potential. In the framework of the de Broglie–Bohm theory, the quantum potential is a term within the Schrödinger equation which acts to guide the movement of quantum particles. The quantum potential approach introduced by Bohm provides a formally more complete exposition of the idea presented by Louis de Broglie: de Broglie had postulated in 1926 that the wave function represents a pilot wave which guides a quantum particle, but had subsequently abandoned his approach due to objections raised by Wolfgang Pauli. The seminal articles of Bohm in 1952 introduced the quantum potential and included answers to the objections which had been raised against the pilot wave theory. The Bohm quantum potential is closely linked with the results of other approaches, in particular relating to work by Erwin Madelung of 1927 and to work by Carl Friedrich von Weizsäcker of 1935. Building on the interpretation of the quantum theory introduced by Bohm in 1952, David Bohm and Basil Hiley in 1975 presented how the concept of a quantum potential leads to the notion of an “unbroken wholeness of the entire universe”, proposing that the fundamental new quality introduced by quantum physics is nonlocality.
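Since the article stays entirely qualitative, a small numerical sketch may help. It uses the standard textbook expression for the Bohm quantum potential, Q = −(ħ²/2m)·(∇²R)/R for a wave function written as ψ = R·exp(iS/ħ); that formula is standard but is not quoted from the article above, and the one-dimensional Gaussian amplitude and the units below are invented.

```python
import numpy as np

hbar, m = 1.0, 1.0                       # natural units, chosen only for illustration
x = np.linspace(-8.0, 8.0, 1601)
dx = x[1] - x[0]

# Amplitude R = |psi| of an (invented) Gaussian wave packet.
sigma = 1.0
R = np.exp(-x ** 2 / (4.0 * sigma ** 2))

# Q = -(hbar^2 / 2m) * R'' / R, evaluated here with simple finite differences.
R_xx = np.gradient(np.gradient(R, dx), dx)
Q = -(hbar ** 2 / (2.0 * m)) * R_xx / R

# For a Gaussian amplitude the analytic result is an inverted parabola:
# Q(x) = (hbar^2 / 2m) * (1/(2 sigma^2) - x^2/(4 sigma^4)).
Q_exact = (hbar ** 2 / (2.0 * m)) * (1.0 / (2.0 * sigma ** 2) - x ** 2 / (4.0 * sigma ** 4))
print(np.max(np.abs(Q - Q_exact)[200:-200]))   # small away from the grid edges
```

For a spreading Gaussian packet this inverted parabola is exactly the term that pushes the Bohmian trajectories apart, which is one way to picture the "guiding" role the article describes.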
Computational chemistry

Computational chemistry uses the results of theoretical chemistry, implemented in efficient computer programs, to calculate the structures and properties of molecules and solids. Examples of such properties are structure (i.e. the expected positions of the constituent atoms), absolute and relative (interaction) energies, electronic charge distributions, dipoles and higher multipole moments, vibrational frequencies, reactivity or other spectroscopic quantities, and cross sections for collision with other particles. The methods employed cover both static and dynamic situations. In all cases the computer time increases rapidly with the size of the system being studied. That system can be a single molecule, a group of molecules or a solid. The methods are thus based on theories which range from highly accurate (but suitable only for small systems) to very approximate (but suitable for very large systems). The accurate methods used are called ab initio methods, as they are based entirely on theory from first principles. The less accurate methods are called empirical or semi-empirical because some experimental results, often from atoms or related molecules, are used along with the theory. Building on the founding discoveries and theories in the history of quantum mechanics, the first theoretical calculations in chemistry were those of Walter Heitler and Fritz London in 1927. The books that were influential in the early development of computational quantum chemistry include: Linus Pauling and E. Bright Wilson’s 1935 Introduction to Quantum Mechanics – with Applications to Chemistry, Eyring, Walter and Kimball's 1944 Quantum Chemistry, Heitler’s 1945 Elementary Wave Mechanics – with Applications to Quantum Chemistry, and later Coulson's 1952 textbook Valence, each of which served as primary references for chemists in the decades to follow. With the development of efficient computer technology in the 1940s, the solutions of elaborate wave equations for complex atomic systems began to be a realizable objective. In the early 1950s, the first semi-empirical atomic orbital calculations were carried out. Theoretical chemists became extensive users of the early digital computers. A very detailed account of such use in the United Kingdom is given by Smith and Sutcliffe.[1] The first ab initio Hartree-Fock calculations on diatomic molecules were carried out in 1956 at MIT using a basis set of Slater orbitals. For diatomic molecules a systematic study using a minimum basis set and the first calculation with a larger basis set were published by Ransil and Nesbet respectively in 1960.[2] The first polyatomic calculations using Gaussian orbitals were carried out in the late 1950s.
The first configuration interaction calculations were carried out in Cambridge on the EDSAC computer in the 1950s using Gaussian orbitals by Boys and coworkers.[3] By 1971, when a bibliography of ab initio calculations was published,[4] the largest molecules included were naphthalene and azulene.[5] [6] Abstracts of many earlier developments in ab initio theory have been published by Schaefer.[7] In 1964, Hückel method calculations, which are a simple LCAO method for the determination of electron energies of molecular orbitals of π electrons in conjugated hydrocarbon systems, ranging from simple systems such as butadiene and benzene to ovalene with 10 fused six-membered rings , were generated on computers at Berkeley and Oxford.[8] These empirical methods were replaced in the 1960s by semi-empirical methods such as CNDO.[9] In the early 1970s, efficient ab initio computer programs such as ATMOL, GAUSSIAN, IBMOL, and POLYAYTOM, began to be used to speed up ab initio calculations of molecular orbitals. Of these four programs only GAUSSIAN, massively expanded, is still in use, but many other programs are now in use. At the same time, the methods of molecular mechanics, such as MM2, were developed, primarily by Norman Allinger.[10] One of the first mentions of the term “computational chemistry” can be found in the 1970 book Computers and Their Role in the Physical Sciences by Sidney Fernbach and Abraham Haskell Taub, where they state “It seems, therefore, that 'computational chemistry' can finally be more and more of a reality.”[11] During the 1970s, widely different methods began to be seen as part of a new emerging discipline of computational chemistry.[12] The Journal of Computational Chemistry was first published in 1980. The term theoretical chemistry may be defined as a mathematical description of chemistry, whereas computational chemistry is usually used when a mathematical method is sufficiently well developed that it can be automated for implementation on a computer. Note that the words exact and perfect do not appear here, as very few aspects of chemistry can be computed exactly. Almost every aspect of chemistry, however, can be described in a qualitative or approximate quantitative computational scheme. Molecules consist of nuclei and electrons, so the methods of quantum mechanics apply. Computational chemists often attempt to solve the non-relativistic Schrödinger equation, with relativistic corrections added, although some progress has been made in solving the fully relativistic Schrödinger equation. It is, in principle, possible to solve the Schrödinger equation, in either its time-dependent form or time-independent form as appropriate for the problem in hand, but this in practice is not possible except for very small systems. Therefore, a great number of approximate methods strive to achieve the best trade-off between accuracy and computational cost. Accuracy can always be improved with greater computational cost. Present computational chemistry can routinely accurately calculate the properties of molecules that contain up to about 40 electrons. Errors for energies can be less than 1 kcal/mol. For geometries, bond lengths can be predicted within a few picometres and bond angles within 0.5o. The treatment of larger molecules that contain a few dozen electrons is computationally tractable by approximate methods such as density functional theory (DFT). 
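The Hückel method mentioned above is simple enough to reproduce directly. In the usual parameterization each carbon 2p_z orbital contributes a Coulomb integral α on the diagonal of a small matrix and each bonded pair of carbons a resonance integral β off the diagonal; diagonalizing that matrix gives the π orbital energies. The sketch below uses the 1,3-butadiene chain; setting α = 0 and |β| = 1 is a convention chosen here, not a value taken from this article.

```python
import numpy as np

# Simple Hueckel treatment of 1,3-butadiene: four p_z orbitals in a chain.
# H[i][i] = alpha for every carbon, H[i][j] = beta for bonded neighbours.
alpha, beta = 0.0, 1.0            # energies reported as E = alpha + x*beta
n = 4
H = np.zeros((n, n))
np.fill_diagonal(H, alpha)
for i in range(n - 1):            # chain connectivity C1-C2-C3-C4
    H[i, i + 1] = H[i + 1, i] = beta

x, coeffs = np.linalg.eigh(H)     # x values and the corresponding MO coefficients
print(np.round(x, 3))             # [-1.618 -0.618  0.618  1.618]
# With the physical beta < 0, the occupied pi MOs are those with x = +1.618
# and +0.618, i.e. E = alpha + 1.618*beta and E = alpha + 0.618*beta.
```

Anything beyond such π-only toy models is where the ab initio, semi-empirical and density functional machinery described in the surrounding text takes over.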
There is some dispute within the field whether the latter methods are sufficient to describe complex chemical reactions, such as those in biochemistry. Large molecules can be studied by semi-empirical approximate methods. Even larger molecules are treated by classical mechanics methods that are called molecular mechanics. In theoretical chemistry, chemists, physicists and mathematicians develop algorithms and computer programs to predict atomic and molecular properties and reaction paths for chemical reactions. Computational chemists, in contrast, may simply apply existing computer programs and methodologies to specific chemical questions. There are two different aspects to computational chemistry: • Computational studies can be carried out in order to find a starting point for a laboratory synthesis, or to assist in understanding experimental data, such as the position and source of spectroscopic peaks. • Computational studies can be used to predict the possibility of so far entirely unknown molecules or to explore reaction mechanisms that are not readily studied by experimental means. Thus computational chemistry can assist the experimental chemist or it can challenge the experimental chemist to find entirely new chemical objects. Several major areas may be distinguished within computational chemistry: • The prediction of the molecular structure of molecules by the use of the simulation of forces, or more accurate quantum chemical methods, to find stationary points on the energy hypersurface as the position of the nuclei is varied. • Storing and searching for data on chemical entities (see chemical databases). • Identifying correlations between chemical structures and properties (see QSPR and QSAR). • Computational approaches to help in the efficient synthesis of compounds. • Computational approaches to design molecules that interact in specific ways with other molecules (e.g. drug design). A given molecular formula can represent a number of molecular isomers. Each isomer is a local minimum on the energy surface (called the potential energy surface) created from the total energy (electronic energy plus repulsion energy between the nuclei) as a function of the coordinates of all the nuclei. A stationary point is a geometry such that the derivative of the energy with respect to all displacements of the nuclei is zero. A local (energy) minimum is a stationary point where all such displacements lead to an increase in energy. The local minimum that is lowest is called the global minimum and corresponds to the most stable isomer. If there is one particular coordinate change that leads to a decrease in the total energy in both directions, the stationary point is a transition structure and the coordinate is the reaction coordinate. This process of determining stationary points is called geometry optimization. The determination of molecular structure by geometry optimization became routine only when efficient methods for calculating the first derivatives of the energy with respect to all atomic coordinates became available. Evaluation of the related second derivatives allows the prediction of vibrational frequencies if harmonic motion is assumed. In some ways more importantly it allows the characterisation of stationary points. The frequencies are related to the eigenvalues of the matrix of second derivatives (the Hessian matrix). If the eigenvalues are all positive, then the frequencies are all real and the stationary point is a local minimum. 
If one eigenvalue is negative (an imaginary frequency), the stationary point is a transition structure. If more than one eigenvalue is negative the stationary point is a more complex one, and usually of little interest. When found, it is necessary to move the search away from it, if we are looking for local minima and transition structures. The total energy is determined by approximate solutions of the time-dependent Schrödinger equation, usually with no relativistic terms included, and making use of the Born-Oppenheimer approximation which, based on the much higher velocity of the electrons in comparison with the nuclei, allows the separation of electronic and nuclear motions, and simplifies the Schrödinger equation. This leads to evaluating the total energy as a sum of the electronic energy at fixed nuclei positions plus the repulsion energy of the nuclei. A notable exception are certain approaches called direct quantum chemistry, which treat electrons and nuclei on a common footing. Density functional methods and semi-empirical methods are variants on the major theme. For very large systems the relative total energies can be compared using molecular mechanics. The ways of determining the total energy to predict molecular structures are: Ab initio methods The programs used in computational chemistry are based on many different quantum-chemical methods that solve the molecular Schrödinger equation associated with the molecular Hamiltonian. Methods that do not include any empirical or semi-empirical parameters in their equations - being derived directly from theoretical principles, with no inclusion of experimental data - are called ab initio methods. This does not imply that the solution is an exact one; they are all approximate quantum mechanical calculations. It means that a particular approximation is rigorously defined on first principles (quantum theory) and then solved within an error margin that is qualitatively known beforehand. If numerical iterative methods have to be employed, the aim is to iterate until full machine accuracy is obtained (the best that is possible with a finite word length on the computer, and within the mathematical and/or physical approximations made). The simplest type of ab initio electronic structure calculation is the Hartree-Fock (HF) scheme, an extension of molecular orbital theory, in which the correlated electron-electron repulsion is not specifically taken into account; only its average effect is included in the calculation. As the basis set size is increased the energy and wave function tend to a limit called the Hartree-Fock limit. Many types of calculations, known as post-Hartree-Fock methods, begin with a Hartree-Fock calculation and subsequently correct for electron-electron repulsion, referred to also as electronic correlation. As these methods are pushed to the limit, they approach the exact solution of the non-relativistic Schrödinger equation. In order to obtain exact agreement with experiment, it is necessary to include relativistic and spin orbit terms, both of which are only really important for heavy atoms. In all of these approaches, in addition to the choice of method, it is necessary to choose a basis set. This is a set of functions, usually centered on the different atoms in the molecule, which are used to expand the molecular orbitals with the LCAO ansatz. Ab initio methods need to define a level of theory (the method) and a basis set. The Hartree-Fock wave function is a single configuration or determinant. 
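To make the basis-set idea concrete: in practice a Slater-type orbital is usually replaced by a short contraction of Gaussian functions, because integrals over Gaussians are cheap. The sketch below does a crude, unweighted least-squares fit of three s-type Gaussians to a 1s Slater function with exponent 1; it only illustrates the idea, it is not the official STO-3G fitting procedure, and the grid and starting guesses are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

r = np.linspace(1e-3, 10.0, 2000)
slater_1s = np.sqrt(1.0 / np.pi) * np.exp(-r)        # normalized 1s Slater orbital (zeta = 1)

def three_gaussians(r, c1, c2, c3, a1, a2, a3):
    """Contraction of three normalized s-type Gaussian primitives."""
    total = np.zeros_like(r)
    for c, a in ((c1, a1), (c2, a2), (c3, a3)):
        total += c * (2.0 * a / np.pi) ** 0.75 * np.exp(-a * r ** 2)
    return total

p0 = [0.4, 0.5, 0.2, 0.1, 0.4, 2.2]                  # rough, invented starting guess
popt, _ = curve_fit(three_gaussians, r, slater_1s, p0=p0, maxfev=20000)
print("contraction coefficients:", np.round(popt[:3], 3))
print("exponents:               ", np.round(popt[3:], 3))
print("largest pointwise error: ", np.max(np.abs(three_gaussians(r, *popt) - slater_1s)))
```

However the orbitals are expanded, the Hartree-Fock step still delivers a single determinant built from them.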
In some cases, particularly for bond-breaking processes, this is quite inadequate and several configurations need to be used. Here the coefficients of the configurations and the coefficients of the basis functions are optimized together.

The total molecular energy can be evaluated as a function of the molecular geometry; in other words, the potential energy surface. Such a surface can be used for reaction dynamics. The stationary points of the surface lead to predictions of different isomers and the transition structures for conversion between isomers, but these can be determined without a full knowledge of the complete surface.

A particularly important objective, called computational thermochemistry, is to calculate thermochemical quantities such as the enthalpy of formation to chemical accuracy. Chemical accuracy is the accuracy required to make realistic chemical predictions and is generally considered to be 1 kcal/mol or 4 kJ/mol. To reach that accuracy in an economic way it is necessary to use a series of post-Hartree-Fock methods and combine the results. These methods are called quantum chemistry composite methods.

Density functional methods

Density functional theory (DFT) methods are often considered to be ab initio methods for determining the molecular electronic structure, even though many of the most common functionals use parameters derived from empirical data, or from more complex calculations. This means that they could also be called semi-empirical methods. It is best to treat them as a class on their own. In DFT, the total energy is expressed in terms of the total electron density rather than the wave function. In this type of calculation, there is an approximate Hamiltonian and an approximate expression for the total electron density. DFT methods can be very accurate for little computational cost. The drawback is that, unlike ab initio methods, there is no systematic way to improve the methods by improving the form of the functional. Some methods combine the density functional exchange functional with the Hartree-Fock exchange term and are known as hybrid functional methods.

Semi-empirical and empirical methods

Main article: Semi-empirical quantum chemistry methods

Semi-empirical quantum chemistry methods are based on the Hartree-Fock formalism, but make many approximations and obtain some parameters from empirical data. They are very important in computational chemistry for treating large molecules where the full Hartree-Fock method without the approximations is too expensive. The use of empirical parameters appears to allow some inclusion of correlation effects into the methods.

Semi-empirical methods follow what are often called empirical methods, where the two-electron part of the Hamiltonian is not explicitly included. For π-electron systems, this was the Hückel method proposed by Erich Hückel, and for all valence electron systems, the Extended Hückel method proposed by Roald Hoffmann.

Molecular mechanics

Main article: Molecular mechanics

In many cases, large molecular systems can be modeled successfully while avoiding quantum mechanical calculations entirely. Molecular mechanics simulations, for example, use a single classical expression for the energy of a compound, for instance the harmonic oscillator. All constants appearing in the equations must be obtained beforehand from experimental data or ab initio calculations.
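To make the idea of a single classical energy expression concrete, here is a minimal sketch of one harmonic bond-stretch term of the kind summed up in a force field. The force constant and equilibrium bond length are placeholder values, not parameters taken from any real force field.

```python
import numpy as np

def bond_stretch_energy(r, k=450.0, r0=1.09):
    """Harmonic bond-stretch term E = 0.5 * k * (r - r0)**2.

    k  : force constant (kcal/mol/Angstrom^2), placeholder value
    r0 : equilibrium bond length (Angstrom), placeholder value
    """
    return 0.5 * k * (r - r0) ** 2

# Energy as the bond is stretched from 1.0 to 1.2 Angstrom.
for r in np.linspace(1.0, 1.2, 5):
    print(f"r = {r:.2f} A   E = {bond_stretch_energy(r):6.3f} kcal/mol")
```

A complete force field sums many terms of this general kind (bond stretches, angle bends, torsions and non-bonded interactions), with every constant taken from experiment or from ab initio calculations, as described above.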
The database of compounds used for parameterization (the resulting set of parameters and functions is called the force field) is crucial to the success of molecular mechanics calculations. A force field parameterized against a specific class of molecules, for instance proteins, would be expected to be relevant only when describing other molecules of the same class. These methods can be applied to proteins and other large biological molecules, and allow studies of the approach and interaction (docking) of potential drug molecules (e.g. [1] and [2]).

Methods for solids

Main article: Computational chemical methods in solid state physics

Computational chemical methods can be applied to solid state physics problems. The electronic structure of a crystal is in general described by a band structure, which defines the energies of electron orbitals for each point in the Brillouin zone. Ab initio and semi-empirical calculations yield orbital energies; therefore they can be applied to band structure calculations. Since it is time-consuming to calculate the energy for a molecule, it is even more time-consuming to calculate it for the entire list of points in the Brillouin zone.

Chemical dynamics

Once the electronic and nuclear variables are separated (within the Born-Oppenheimer representation), in the time-dependent approach, the wave packet corresponding to the nuclear degrees of freedom is propagated via the time evolution operator associated with the time-dependent Schrödinger equation (for the full molecular Hamiltonian). In the complementary energy-dependent approach, the time-independent Schrödinger equation is solved using the scattering theory formalism. The potential representing the interatomic interaction is given by the potential energy surfaces. In general, the potential energy surfaces are coupled via the vibronic coupling terms. Several methods are in use for propagating the wave packet associated with the molecular geometry.

Molecular dynamics (MD) examines (using Newton's laws of motion) the time-dependent behavior of systems, including vibrations or Brownian motion, using a classical mechanical description. MD combined with density functional theory leads to the Car-Parrinello method.

Interpreting molecular wave functions

The Atoms in Molecules model developed by Richard Bader effectively links the quantum mechanical picture of a molecule, as an electronic wavefunction, to chemically useful older models such as the theory of Lewis pairs and the valence bond model. Bader has demonstrated that these empirically useful models are connected with the topology of the quantum charge density. This method improves on the use of Mulliken population analysis.

Software packages

There are many self-sufficient software packages used by computational chemists. Some include many methods covering a wide range, while others concentrate on a very specific range or even a single method. Details of most of them can be found in the references below.

Cited References

1. Smith, S. J.; Sutcliffe, B. T. (1997). "The development of Computational Chemistry in the United Kingdom". Reviews in Computational Chemistry 70: 271-316.
2. Schaefer, Henry F. III (1972). The Electronic Structure of Atoms and Molecules. Reading, Massachusetts: Addison-Wesley Publishing Co., p. 146.
3. Boys, S. F.; Cook, G. B.; Reeves, C. M.; Shavitt, I. (1956). "Automatic fundamental calculations of molecular structure". Nature 178 (2): 1207.
4. Richards, W. G.; Walker, T. E. H.; Hinkley, R. K.
(1971). A Bibliography of Ab Initio Molecular Wave Functions. Oxford: Clarendon Press.
5. Preuss, H. (1968). International Journal of Quantum Chemistry 2: 651.
6. Buenker, R. J.; Peyerimhoff, S. D. (1969). Chemical Physics Letters 3: 37.
7. Schaefer, Henry F. III (1984). Quantum Chemistry. Oxford: Clarendon Press.
8. Streitwieser, A.; Brauman, J. I.; Coulson, C. A. (1965). Supplementary Tables of Molecular Orbital Calculations. Oxford: Pergamon Press.
9. Pople, John A.; Beveridge, David L. (1970). Approximate Molecular Orbital Theory. New York: McGraw Hill.
10. Allinger, Norman (1977). "Conformational analysis. 130. MM2. A hydrocarbon force field utilizing V1 and V2 torsional terms". Journal of the American Chemical Society 99: 8127-8134.
11. Fernbach, Sidney; Taub, Abraham Haskell (1970). Computers and Their Role in the Physical Sciences. Routledge. ISBN 0677140304.
12. Reviews in Computational Chemistry, vol. 1, preface.

Other references

• Christopher J. Cramer, Essentials of Computational Chemistry, John Wiley & Sons (2002)
• T. Clark, A Handbook of Computational Chemistry, Wiley, New York (1985)
• R. Dronskowski, Computational Chemistry of Solid State Materials, Wiley-VCH (2005)
• F. Jensen, Introduction to Computational Chemistry, John Wiley & Sons (1999)
• D. Rogers, Computational Chemistry Using the PC, 3rd Edition, John Wiley & Sons (2003)
• A. Szabo, N. S. Ostlund, Modern Quantum Chemistry, McGraw-Hill (1982)
• D. Young, Computational Chemistry: A Practical Guide for Applying Techniques to Real World Problems, John Wiley & Sons (2001)
• David Young's Introduction to Computational Chemistry
Friday, January 31, 2014 Surrender etiquette I surrender .. "Please don't kill me - I have a wife and children back home!" Good answer:  "I accept your surrender. Private Smith, you don't look too busy - guard this captive till relieved." Bad answers: 1. "She'll get over it - find someone new. They always do ..."  (opens fire). 2. "You should have thought about that before now ..."   (opens fire) 3. "Yep, if I had the resources to guard you, and if I trusted you ..."  (opens fire) Avoiding atrocities is a logistical nightmare. Thursday, January 30, 2014 Solar panels are go ..-ish Yes, the bounty of the sky! 30W! It was all over by 2 pm. Twelve solar panels securely fixed to our roof ("if the wind takes 'em, they'll be taking yer roof off too!") and an electricity generator's worth of new dials, meters and weird Wi-Fi stuff in our pantry. It's late afternoon before I've managed to get into the portal, dug out the requisite Wi-Fi OWL monitor MAC number, created an account and finally turned up the display above. At the moment in question we're consuming on the left 0.885 kW and producing from the sky - well .. 30W. So that's half a lightbulb then. I trust from an overcast January afternoon we can only do better. Wednesday, January 29, 2014 The Executioner In response to our little mosquito problem, Clare has retrieved her Christmas present of The Executioner. The Executioner Powered by two AA batteries, this delivers of the order of 3,000 volts to any hapless flying insect and can also be used when they perch in casual fashion on the walls. A brief spark and they fry. I wondered what would happen if you touched the grill yourself (it does say not to do this). Mindful of electroshock torture and tasers, I debated wimping out for a while before finally pressing the button and proceeding to zap the side of my hand (this guy was braver). There was a bright blue spark and a sharp zapping noise - and I have to say it hurt a bit. More like a burn than an electric shock: but I wouldn't fancy prolonged contact. Here's a guy ('Backyard Armory') showing you how to convert The Executioner into a stun gun. This is illegal in the UK. Having watched the video I immediately noted some design improvements: the ends of the steel probes should be filed to points both to improve penetration and to increase the electric field (and therefore the applied voltage); also, his assembly is not very robust and would probably just break if used for real - the high-voltage circuitry should be embedded within a much more rigid container. Well, we talk big but so far not a single flying thing has been harmed by The Executioner! Mosquitoes in Somerset Love that dopplered hum as you lie awake at 2 am. That intimate, penetrating whine as - invisible - it skims past your ear. Why doesn't it attack? There's an exposed wrist, an overheated ankle carelessly protruding from beneath the duvet. Surely it's hungry? Silence now .. but you'll be woken again soon and it'll be lumps in the morning. An unwanted house guest A consequence of the mild weather and the flooded Somerset levels. The mosquito population has begun its exponential expansion and already here in the City of Wells our house has been invaded. Where's the spray? I managed to leave my mobile phone at my mother's house in Bristol yesterday. So no calls or texts for a week: email, skype or the fixed line work. UPDATE: 9.30 am this morning. Splat! the flyswatter clobbers a big, juicy mosquito high up on our bedroom wall, leaving a blood-red smear. 
Clare's, not mine - and definitely not a room 101 situation ('do it to Julia, not me!!') ... Sunday, January 26, 2014 Weight Loss Ward (ITV) ITV's Weight Loss Ward (about extremely fat people) has proved to be somewhat compulsive viewing. Take Doreen Thomas: age 56, height 4ft 11, and weight 31 stone. Doreen can barely move and has been living downstairs for ten months. Her glasses have been upstairs since March, apparently. "She's tried losing weight in the past, but it's never really succeeded. If we don't do something now she probably will be dead," says her nurse. Doreen Thomas orders treats (from Stoke Sentinel) On-screen, Doreen comes across as crafty and manipulative. She lives on benefits, has a full-time paid-for carer and snacks incessantly on treats ordered online. My puritan reaction is of course outrage: why are we paying this person to waste her life in this way? She even refuses an operation to insert a gastric balloon! What could possibly be the purpose of her parasitic life? On reflection, how would it help if Doreen were transformed into a bland, everyday mediocrity? As a welfare monster she gives millions the pleasure of lip-curling moral superiority. In her own (thankfully) inimitable way she has entertained and instructed more people than many a C-list pop star. Yes, "Weight Loss Ward" has entirely justified Doreen Thomas's life - short as it's likely to be. Frack or Freeze! Dominic Lawson in the Sunday Times today. "... we are now being told by experts on solar physics that we are heading into a period of exceptional inactivity on the surface of our local star — and therefore one of exceptionally cold temperatures. “I’ve never seen anything quite like this,” Richard Harrison, the head of space physics at the Rutherford Appleton laboratory in Oxfordshire told the BBC. And Yuri Navogitsyn of the Pulkovo Observatory is quoted along similar lines by Voice of Russia: “We could be in for a cooling period that lasts 200-250 years.” In other words, we need extra greenhouse effect if we are not to suffer countless more fatalities from hypothermia and permafrosted farms. It’s frack or freeze." We're on the case for our little animal friends ... Our house ... Here's an excerpt from the BBC piece (emphasis added). "During the latter half of the 17th Century, the Sun went through an extremely quiet phase - a period called the Maunder Minimum. Historical records reveal that sunspots virtually disappeared during this time. Dr Green says: "There is a very strong hint that the Sun is acting in the same way now as it did in the run-up to the Maunder Minimum." Mike Lockwood, professor of space environment physics, from the University of Reading, thinks there is a significant chance that the Sun could become increasingly quiet. An analysis of ice-cores, which hold a long-term record of solar activity, suggests the decline in activity is the fastest that has been seen in 10,000 years. "It's an unusually rapid decline," explains Prof Lockwood. "We estimate that within about 40 years or so there is a 10% to 20% - nearer 20% - probability that we'll be back in Maunder Minimum conditions." The era of solar inactivity in the 17th Century coincided with a period of bitterly cold winters in Europe. Londoners enjoyed frost fairs on the Thames after it froze over, snow cover across the continent increased, the Baltic Sea iced over - the conditions were so harsh, some describe it as a mini-Ice Age. 
And Prof Lockwood believes that this regional effect could have been in part driven by the dearth of activity on the Sun, and may happen again if our star continues to wane. "It's a very active research topic at the present time, but we do think there is a mechanism in Europe where we should expect more cold winters when solar activity is low," he says. He believes this local effect happens because the amount of ultraviolet light radiating from the Sun dips when solar activity is low. This means that less UV radiation hits the stratosphere - the layer of air that sits high above the Earth. And this in turn feeds into the jet stream - the fast-flowing air current in the upper atmosphere that can drive the weather. The results of this are dominantly felt above Europe, says Prof Lockwood. Wednesday, January 22, 2014 Tuesday, January 21, 2014 "The Light of Other Days" - Arthur C. Clarke and Stephen Baxter (2000) Just finished "The Light of Other Days" by Arthur C. Clarke and Stephen Baxter, first published in April 2000. Here's the review at The SF Site by Steven H. Silver. "The Light of Other Days" was the title of a classic short story by Bob Shaw, one of the lesser known stars of science fiction. One of science fiction's biggest stars, Arthur C. Clarke, and one of its rising talents, Stephen Baxter, have combined forces to pay tribute to Shaw with their collaborative novel of the same title. One of the features of the Shaw story was the idea of "slow glass," which would transmit light so slowly that it could be used to view the past. The comparative device in the Clarke & Baxter novel is wormhole technology. "Hiram Patterson, a latter-day Ted Turner/Bill Gates, has found a use for wormholes to broadcast news as it happens from remote locations without the time and expense of transporting a live reporter and camera crew. "He can create a temporary wormhole, point a camera through it, and capture the images from a home office, no matter where it is located. Patterson's development team, headed by his son, David, continues to push the boundaries of this new technology while Clarke and Baxter begin to examine its social aspects. "The spread of wormhole technology seems to be based on the internet. Like the internet, it spreads rapidly and reasonably inexpensively. There can be no interaction between the viewer and the subject of their spying. Most importantly, it completely alters the fabric of society and brings the world even closer together. "The changes to society are continuous, especially since what can be done with wormhole technology and its cost keeps changing. Used to spy on individuals, particularly once the ability to look into the past is discovered, wormhole technology supplants the internet as the primary time-waster. "People can now not only discover what their neighbours are doing at the moment, they can also see what their neighbours were doing at any time. Privacy has ceased to exist as anyone can spy on anyone else, at any time, without any chance of detection. "Although there are some noble endeavours, such as the project to completely document the life of the historical Jesus, most people use the technology for more voyeuristic concerns. Given the attitude Clarke exhibited towards organized religion in such recent novels as 3001: The Final Odyssey, the irreverence paid to religion in The Light of Other Days is very understated. "The Light of Other Days is definitely a novel of ideas. 
In addition to the primary concept of the wormhole, the story opens with the announcement of the discovery of an enormous asteroid, called the Wormwood, which will impact the earth in 2534, causing the destruction of all life on the planet. The knowledge of the Wormwood inflicts much of humanity with a sense of malaise, adding to the public's need for a diversion like wormhole technology. The authors have also inflicted water shortages on the world which have resulted in several water wars. Many countries have become balkanized and, perhaps the least likely situation, England has become the 52nd state of the US. "With many interesting ideas, few of which are fully explored, and a dearth of exploration of the characters and their relationships, The Light of Other Days feels more like a work in progress than a finished novel. If the authors wanted to pay homage to Bob Shaw, producing a more complete work may have been the way to do it." This review is spot on as regards the style of the novel: it reads to me like a very late pulp novel of the nineteen fifties and sixties - the golden age of Heinlein, Asimov and, of course, Clarke. Though by content I suspect Baxter did most of the writing - Clarke's main contribution seems to have been the overarching plot, a reprise of "Childhood's End". Looking back fourteen years, the novel is remarkably prescient in technological terms. In the novel the dominant global corporation is called OurWorld (a version of Google) with a megalomaniac CEO called Hiram Patterson (father of main characters Bobby Patterson and David Curzon). The characters travel around in automated driverless cars and are surrounded by flying servitor-drones (the novel is set some decades in our present future). There are some stunning set-pieces, such as where David Curzon programs the "wormcam" to follow his female ancestors back through time (via a 'mitochondrial DNA tracker') - the reader is taken on a dizzying ride across 4.5 billion years. As with all pulp novels, dazzling sense of awe and wonder has to be set against pedestrian and unconvincing characterisation. One really has to struggle to believe the interpersonal dynamics - particularly the love interest between journalist Kate Manzoni and Bobby Patterson (Hiram Patterson's cloned son and heir apparent). Quantum wormholes have recently become all the rage too, now linked by some researchers to quantum entanglement in theories of quantum gravity. I was not so impressed with the authors' model of our own physical reality as exhibiting a unique present: a boundary between a frozen block-universe past and a quantum-uncertain and mutable future. We already know that can't be right as our universe does not have a unique observer-independent present . Dr Baxter, of course, knows this consequence of special relativity. So if you like SF from the golden age you'll enjoy this, but you are likely to share David Curzon's qualms as regards humanity's final fate. Wikipedia entry for "The Light of Other Days" Saturday, January 18, 2014 Three Roads to Artificial Intelligence AI pioneer Marvin Minsky once famously stated that “Artificial Intelligence is the science of making machines do things that would require intelligence if done by men.” Such an elegantly recursive cop-out! I’m going to propose three, more operational models of intelligence: competence, search and creativity respectively. Which seems more plausible to you? 1. Competence You meet an expert and watch them work at solving problems. 
A query comes in and instantly the correct answer rings out. You recall the old saying: "An expert is someone who doesn't have to think, because they know." This kind of intelligence is algorithmic. In principle you can write a program simulating the expert which delivers answers in a computationally well-behaved fashion (code for polynomial behaviour). In AI we call these programs Expert Systems. This is intelligence-as-instinct: compiled, hard-wired expertise. I've met people like this and some I wouldn't call smart at all – the ones who are flummoxed by an unfamiliar problem.

2. Search

Some people have equated intelligence to controlled search. The paradigmatic example is playing games such as chess where there is no known well-behaved algorithm which can take an arbitrary game position and return the optimal next move. The best games programs create a lookahead tree using legal moves and assess the best of the future game-states. They then choose their next move as best-placed to get to that state despite the best attempts of their opponent.

A chess lookahead tree

Search can look quite intelligent because it's flexible and adaptive. The AI program doesn't know what you're going to do next, but whatever you do it will adapt and continue "intelligently" towards its final goal of winning. I have the same feeling about my sat nav, which implacably directs me to my destination no matter what wrong turns and detours I make. Search is powerful and adaptable (trading competence for bounds on space and time resources) but suffers from a fatal rigidity: it explores just the possibilities defined by the state-space and operations given to it. The chess program just plays chess.

Important as the distinction is in artificial intelligence, it's not clear to me that in humans, search and competence are that much different. Humans are very bad at search, finding it almost impossible to hold a large number of possible future states simultaneously in mind. Trying to solve problems in such a way is pretty much the definition – in humans – of incompetence. Experts differ from novices not by doing more search, but by having a more extensive, refined and sophisticated competency set (or 'knowledge base').

3. Creativity

The people I find truly, scarily intelligent are those who keep you off-balance by continually moving the goalposts in a way both surprising and opaque. You feel your every possible gambit has already been anticipated and that the activity you think you're conducting is actually embedded in a much more complex scenario being deftly manipulated by your opponent.

In "Tactics of Mistake", Gordon R. Dickson describes an enemy force advancing down a river valley along its narrow flood plain; the friendly force is much smaller. A merely competent commander would presumably choose the best combat tactics commensurate with his poor hand – and would expect to lose. In fact Dickson's hero places his troops in well-screened locations in the hills to the side of the valley, and then dams the river. As the water level rises, the flood plain floods and the enemy troops are forced into a killing zone. They are thus defeated.

How is this solution-approach different from competence and search? The critical factor is that new elements have been brought in to create a larger 'game' – in particular the river, its flooding behaviour, the topography of the ground and the possibilities of damming.
Creativity thus requires an additional context, extra resources which can be brought to bear on the original goals of the game. In the real world, everything we do is embedded in layers of enveloping reality. For example, you might defeat the chess champion by doping his coffee so that he plays particularly poorly. You might thus win in the extended game where the player is also an active constituent, but a chess program has no access to this larger reality. More legitimate 'psychological tricks’ are regularly employed by human players. This points to an important feature of creative intelligence. It requires a deep familiarity with the potential of embedding contexts of the proximate ‘game’ or problem – and the ability to select and refine additional operators which can be played back into a new kind of solution. Most games are defined to explicitly abstract away all inessential contexts: you are allowed to do just what the rules say and no more (so no stealing your opponent’s king and declaring victory!). But this immediately rules out the kind of creativity we're discussing here, thereby impoverishing the model of intelligence which can be studied or deployed. I’m not sure AI research has sufficiently taken this into account. In the real world there are no absolutely impermeable boundaries, so there are potentially no limits to the kinds of esoteric knowledge which can be brought to bear on a problem. And that’s what truly intelligent people do. What is measured by IQ tests? Test of crystallized intelligence (such as general knowledge) seem to be measuring competence which they hope correlates with ‘g’ as a proxy. However, the core of intelligence seems to be fluid intelligence, measured by test items such as Raven’s Progressive Matrices. These require the inference of new, compelling rules and patterns from presented data - which sounds a lot more like creative intelligence. Let me make one more remark - about scientists and mathematicians. Some people are very good at rapidly and easily seeing the consequences of assumptions; they sign-up to the "shut up and calculate" school. Others seem more comfortable exploring different paradigms for situating a problem, other ways of thinking about it. Theoretical physicist Lee Smolin called these two types "craftspeople and seers", while Freeman Dyson preferred "Frogs and Birds". Perhaps there is a connection here with the intelligence-as-search and intelligence-as-creativity distinction? Psychologist Daniel Nettle observed that intelligence is a kind of whole-brain efficiency measure implicated across all areas of neural functioning.  High-scorers on the personality trait of Openness are artistic, creative people capable of making associations between different – and perhaps surprising – kinds of things (Smolin's "seers") while those with a more "craftsperson" style are perhaps exhibiting the effects of high IQ per se. Friday, January 17, 2014 Disappearing accountants The Economist this week has a feature on the changing nature of work. Just as machines displaced agricultural workers in the fields and craft-artisans in their homes, so a new generation of smart computer systems are displacing middle-class intellectual workers. How likely is your job to be taken over by automation? The Economist article includes this chart. Lose your job to a machine? 
In previous rounds of automation, displaced workers were able to get an education and populate middle-class jobs (as clerks, and later as the famous 'computer programmers') which were pleasanter and better-paid than their previous jobs which automation had killed. But as the computers get smarter, perhaps a lot of people can't compete anymore - they're just not that smart or conscientious. And personal trainers, in general, are not fantastically remunerated. The process of intellectual displacement is interesting. The machine systems are not, again in general, particularly good social actors (which explains the chart above). They can, however, automate large parts of the informational, computational and process-rich components of a middle-class job. A few highly skilled practitioners - expert accountants, if you like - can use these highly-capable tools to solve problems which used to be tackled by small armies of lesser-skilled accountants. Productivity has risen but those displaced 'average accountants' appear to have nowhere else to go. A life of benefits and endless video games beckons. Oh, did I say automation hasn't produced good social actors? We progress one step at a time. The elite of the Roman Empire didn't do a lot of work, manual or otherwise. They managed the great affairs of state, and slaves or lesser mortals did everything else. It raises a question of whether we should embrace or fear ubiquitous automation. The pessimists amongst us will recall 'bread and circuses'. Thursday, January 16, 2014 Quantum Field Theory - the big picture Taken a first course in Quantum Theory and now interested in moving on to Quantum Field Theory? Every QFT book you were ever recommended hits you like a suddenly-faced vertical cliff face after a no-more-than-vigorous scramble through those non-relativistic woods. You're used to solving the Schrödinger equation to: model the hydrogen atom; understand molecular binding forces; and chart the motion of (slow) particles in potential fields. Suddenly all that's swept away: you have weird new replacements for Schrödinger - the Klein-Gordon, Dirac and Proca equations .. and then jarringly you're into the alien landscape of Feynman diagrams, propagators and highly convoluted integrals. Nowhere is the big picture ever explained. How does it all fit together? At last a book which feels your pain:  "Student Friendly Quantum Field Theory - Basic Principles and Quantum Electrodynamics". "With regard to phenomena, I recall wondering, as a student, why some of the fundamental things I studied in NRQM (non-relativistic quantum mechanics) seemed to disappear in QFT. One of these was bound state phenomena, such as the hydrogen atom. None of the introductory QFT texts I looked at even mentioned, let alone treated, it. It turns out that QFT can, indeed, handle bound states, but elementary courses typically don’t go there. Neither will we, as time is precious, and other areas of study will turn out to be more fruitful. Those other areas comprise scattering (including inelastic scattering where particles transmute types), deducing particular experimental results, and vacuum energy. "I also once wondered why spherical solutions to the wave equations are not studied, as they play a big role in NRQM, in both scattering and bound state calculations. It turns out that scattering calculations in QFT can be obtained to high accuracy with the simpler plane wave solutions. So, for most applications in QFT, they suffice. 
"Wave packets, as well, can seem nowhere to be found in QFT. Like the other things mentioned, they too can be incorporated into the theory, but simple sinusoids (of complex numbers) serve us well in almost all applications. So, wave packets, too, are generally ignored in introductory (and most advanced) courses. Wave function collapse, a topic of focus in NRQM, is generally not a topic of focus in QFT texts. It does, however, play a key, commonly hidden role, which is discussed herein in Sects. 7.4.3 and 7.4.4, pgs. 196-197. " And here is how the book begins - from chapter 1 (PDF): "Before starting on any journey, thoughtful people study a map of where they will be going. This allows them to maintain their bearings as they progress, and not get lost en route. This chapter is like such a map, a schematic overview of the terrain of quantum field theory (QFT) without the complication of details. You, the student, can get a feeling for the theory, and be somewhat at home with it, even before delving into the “nitty-gritty” mathematics. Hopefully, this will allow you to keep sight of the “big picture”, and minimize confusion, as you make your way, step-by-step, through this book." Best of all, the first three chapters (and a few more) are free and online (available here). Thanks, Robert D. Klauber! Sunday, January 12, 2014 Dave declares Hi Everyone, Welcome to my new blog,!  After a lifetime of systems engineering, it’s ciao to C, tara to Java and goodness knows what to Linux and Perl. As I now have some time on my hands, I will be writing my thoughts on a variety of topics I find of interest. Politics and current affairs, science and technology of course, some personal details concerning our grandchildren and our occasional caravan holidays – just for family interest. Please add your comments below. I will be moderating them for language of course. Cheers and thanks for your interest, Dave Hudson. I’m playing with a steganography package. You can embed your thoughts secretly in a .jpg image and no-one will know. So now I can write a parallel secret blog, just for myself. I expect some supercomputer years in the future will be able to detect and break the code, so my thoughts will not be lost for ever. I can confide here another reason for my new blog. If I die my dear wife Dorothy will be able to go back each day to the very same date while I was still alive and read my thoughts for that day. I think she'll find it very comforting. My first secret entry will be embedded in an uploaded picture of my eldest grandchild, the one we call Muffin. When I look back at how much material I have produced here I am amazed. And a lot of it is good stuff too. I don’t seem to get many comments though. I put it down to blogger’s useless interface. Having my old stuff hidden away in archive folders seems to turn people off browsing there. What a waste. I have decided to make my blog much more interactive. It’s a project which will take a few months at least, but I'm’m going to link Google’s speech-interaction system to my blog entries with a semantic net written in JavaScript. The result will be an agent interface, who I might as well call “Dave”. When you type in a question or comment, or talk via the Skype interface, “Dave” will answer, just like I would. I think it’s going to be a winner. I think Dorothy will be happier with “Dave” if I were to pass away first. It’s just so much more real to have an interactive presence than reading what to her must seem like just old diaries. 
I haven't told her the real reason, but it will be interesting to see what she thinks once “Dave” is completed. I am very disappointed in Dorothy. She’s really quite dismissive of all my hard work – told me that one Dave is quite enough, without having an echo on her computer. So this is where my thoughts are going. It’s not enough just to have a soft version of me running on a computer. No, it must be fully embodied. Anyway, I've put an order in over the web, and it’s scheduled to arrive while D. is visiting her sister. I'll keep it locked away in my workroom and in my Will it will say to open the trunk and switch it on. Practical reincarnation, at least from her point of view! Apparently it’s damn heavy, so I hope I can move it without getting a very poorly-timed heart attack!  “Embodied Dave” is now working - quite a handsome man! His mouth is oddly shaped, but seems to articulate speech quite well. It’s like I imagine a twin must be! I see Dorothy and “Dave” sat in front of the TV after I've gone, just as homely as we are today. She'll tell him how her day’s gone and he'll respond in that calm, orderly  and authoritative way that I always have. ‘Down to the auction today and picked up a slice of good luck for a change. A couple called the Hudsons I think, checked out in a weird caravan accident or something and their goods up for sale. When the doll came up I was blown away – who'd have thought? No-one else showed an interest, got it for a song. Looks like it’s never been used.’ ‘Can't be too cautious, that’s my watchword. Never know what evil stuff might be in something like that. Got the tech boys to take a look. They tell me ‘Dave’ is compliant, pretty much agrees with whatever you say and is up for anything – reasonable or not. So it's ‘Hello sailors!’ – it’ll earn its keep in no time!’ (c) Nigel Seel, 2014 Friday, January 10, 2014 Kevin Pietersen: the Sherlock of English cricket Simon Barnes has an excellent article in The Times today, one with resonance for anyone who has ever worked with unreasonable but highly-effective people. "It’s a shame that Sherlock, the television show, changed from a brilliant and thrilling adventure based on the utterly exceptional qualities of its main character into a self-indulgent and self-referential soap-opera-cum-comedy based on one rather crude characterisation." Yes, I noticed that too. "It is an equal shame that precisely the same thing has happened with the England cricket team. The Kevin Pietersen story once again dominates the plot-lines of English cricket. Like Sherlock, it is a drama centred on a uniquely talented individual with questionable social skills. "In the first half of the most recent episode, someone describes Sherlock to his face as a psychopath. He contradicts, not without smugness: “No. High-functioning sociopath.” "Well, let’s not stick labels on people. Leave that to those qualified. An “anti-social personality disorder” often includes such traits as small regard for the feelings and welfare of others, inability to learn from experience, no sense of responsibility, lack of moral sense, no change after punishment, lack of guilt, pathological egocentricity and inability to love. "Pietersen’s perpetually sticky relationships with his cricketing colleagues unquestionably go personality-deep. With my level of expertise I think we can confidently describe him as a high-functioning awkward bugger. And it has been widely reported that his relationship with Andy Flower, the England team director, has broken down disastrously. 
"It has even been suggested that Flower will not carry on if Pietersen remains on board. Another version states that Pietersen can stay on board so long as he devotes himself to scoring runs in county cricket at the beginning of the new season, instead of playing in the IPL. Kev can stay, but it’ll cost him getting on for a million quid. "Flower was a great coach for England until the trip to Australia this winter spoilt his record. His greatest achievements? Defeat of Australia in Australia in 2010-11 and defeat of India in India from one-down in 2012. How did these things come about? "In Adelaide in 2010, Pietersen changed the series with an innings of murderous certainty in which he scored 227. In Mumbai two years later, England were batting on a turning pitch tailored for India’s needs; Pietersen scored 186, another classic momentum-shifter. "Flower, like all coaches, is essentially Watson. Coaches, even if they preen like José Mourinho, are at base facilitators, enablers and sounding boards. They don’t solve the case: they are just helping out as best they can. "Cases are actually solved by the Sherlocks: the high-functioning ones. It was Pietersen, not Flower, who solved The Case of the Prematurely Celebrating Australians and The Case of the Indians Hoist With Their Own Petard. "We would all sooner deal with Watson, we’d sooner have a drink or a cup of tea with Watson and we save most of our sympathies for Watson, who is always in a perfectly intolerable situation. But if we want to solve the case, we need Sherlock. The brutally-effective people I knew in business were more Steve Jobs than Sherlock. People who would call you at any hour of the day or night and calmly task you with impossible deadlines; give you jobs and then let slip that they had also asked some other people who had already delivered - so your efforts had been simply wasted (not that they cared or anything, or had bothered to mention this); people who, as peers, simply ignored your existence, wouldn't return calls or attend meetings. People who routinely shouted at people. It was tough working for or with these people: the only protection was to be an acolyte, tolerated in some subservient role at their court - not being a posse person, I was never very keen nor good at that. The safest place to be was one or two management levels above them, where their destructive effectiveness could be leveraged in fulfillment of one's own higher-level goals. Every successful executive needs an enforcer or troubleshooter. The choice is not solely between Sherlock or Watson. There are other kinds of talents which work in business and in the world - skills less abrasive which still add value. But we all have our Watson moments as we wonder how much punishment our pleasant, collegial organisation can or should take before we spit this person out. It's actually a genuine dilemma. Further Reading Bertrand Russell. "Natural Killers" in the US Army by Major David S. Pierson. "Ancestral Journeys" by Jean Manco "Amusing review you wrote on your blog about Ancestors." "That great tome you've been reading for days now. I read your review. It made me laugh." "I haven't written a review yet."  (Baffled). "Well, it was on the screen."  "It was Greg Cochran's review you read, on his blog 'West Hunter'. I was reviewing it before writing my own thoughts." 
I am of course immensely flattered that my wife might believe that a review by such an eminent population geneticist, polymath and all-round smart person could possibly have been written by myself. In her defence, she's not fantastically up on genetics or the deep history of Europe. She's right about the quality of the writing though: here's what Gregory Cochran had to say. "Jean Manco has a new book out on the peopling of Europe, Ancestral Journeys.  The general picture is that Europeans arise from three main groups: the Mesolithic hunters (Hyperboreans),  Levantine farmers, and Indo-Europeans off the steppe.   It’s a decent synthesis of archaeological, linguistic, and genetic evidence. I suspect that her general thesis is in the right ball park:  surely not correct in every detail, but right about the double population replacement. "It is a refreshing  antidote to previous accounts based on the pots-not-people fad that originated back in the 1960s, like so many other bad things.  Once upon a time, when the world was young, archaeologists would find a significant transition in artifact types, see a simultaneous change in skeletons,  and deduce that new tenants had arrived, for example with advent of the Bell Beaker culture.  This became unfashionable: archaeologists were taught to think that invasions  and Völkerwanderungs were never the explanation, even though history records many events of this kind. "I suppose the work Franz Boas  published back in 1912, falsely claiming that environment controlled skull shape rather than genetics, had something to do with it.  And surely some archaeologists  went overboard with migration, suggesting that New Coke cans were a sign of barbarian takeover.  The usual explanation though, is that archaeologists began to find the idea of prehistoric population replacement [of course you know that means war - war means fighting, and fighting means killing] distasteful and concluded that therefore it must not have happened.  Which meant that they were total loons, but that seems to happen a lot. "But the book could be better. Jean Manco relies fairly heavily on mtDNA and Y-chromosome studies, and they are not the most reliable evidence.  Not because the molecular geneticists are screwing up the sequencing, although there must sometimes be undetected contamination, but because mtDNA and Y-chromosomes are each single loci with an effective population size four times smaller than autosomal genes.  They are more affected by drift, and drift can deceive you.  Moreover, in some cases selection might affect the historical trajectory of mtDNA and Y-chromosomes,  which would add to the confusion. Now to be fair, we have more ancient mtDNA results than autosomal DNA,  and there is more published data on mtDNA and Y-chromosome than autosomal DNA in existing populations.  This situation is rapidly improving. "Autosomal DNA has zillions of loci and a larger effective population size.  Most of it is neutral.  That’s what you want, for investigating past mixing and movement  – and autosomal DNA yields interesting hints using publicly available data and software.  For example, using the program ADMIXTURE, you find a West Asian-like component in almost all Europeans (from Spain to Russia, and at about the same level) – but not in Sardinians or Basques. Which must be telling us something. "In addition, she’s not bloody-minded enough. 
She thinks that a fair fraction of the big population turnovers involved migrants moving into areas that had been abandoned by the previous owners. I can imagine that happening in a few cases. Maybe the Greenlanders, living in an extremely marginal country for their kind of dairy farming, mostly left and/or died out before the Eskimos showed up. That is, in my dreams, because we know that the two groups fought. The Greenlanders may have been in trouble, but they didn't just fall - Eskimos pushed. The European colonization of the New World is closer, since there was a dramatic population collapse from the newly introduced Eurasian and African diseases, but even then there was a fair amount of fighting. I'm sure that there were serious epidemics in European prehistory, but it seems unlikely they compared with the impact of the simultaneous arrival of bubonic plague, diphtheria, leprosy, malaria, measles, typhoid, and whooping cough on the Amerindians (with yellow fever and cholera for dessert).

"I mean, when the first farmers were settling Britain, about 4000 BC, they built ditched and palisaded enclosures. Some of these camps are littered with human bones – so, naturally, Brian Fagan, in a popular prehistory textbook, suggests that "perhaps these camps were places where the dead were exposed for months before their bones were deposited in nearby communal burials"! We also find thousands of flint arrowheads in extensive investigations of some of these enclosures, concentrated along the palisade and especially at the gates. Sounds a lot like Fort Apache, to me.

"There are some new and fascinating results about European prehistory that beg to be incorporated in a revised version of this book, for example the stuff about how the Hyperboreans contributed to the ancestry of modern Europeans and Amerindians, but that info is so new that she could not possibly have incorporated it. Not her fault at all.

"Read it."

I'd classify "Ancestral Journeys" as a semi-popular book. Jean Manco's mastery of the details of populations and their movements across Europe, North Africa and the Near East across a time period stretching from 46,000 years ago to the early mediaeval is truly staggering - she seems to have absorbed and synthesised hundreds of sources. Somehow the clarity of her thinking and the organisation of her material saves the reader from being totally lost in the details.

The study of deep history is a fusion of archaeology, linguistics and genetics, with the genetics becoming more important as more DNA samples - ancient and modern - are sequenced. What does this genetic information actually mean? The reader will need to internalise the historical evolution and mutation of mitochondrial and Y-chromosome DNA to understand the resulting haplogroup trees which drive historical analysis.

Evolutionary Tree of Y-DNA haplogroups (from Wikipedia)

If you've not had a course in population genetics then you will be hitting Wikipedia hard (you could do worse than start here and follow the links to grasp the genetic concepts underpinning this book).

Trivia point: Jean Manco is based just up the road from where we live, in Bristol.

Thursday, January 09, 2014

"No Ordinary Life" by Peter Stokes

From the Amazon blurb. "The true story of a father who, on his death bed, handed his son a dusty journal containing details of his secret past as a Second World War hero and founding member of the 2nd SAS.
Just weeks prior to his death Horace Stokes asked his son Peter to return home as he had 'something important' to leave him, and presented him with a battered diary. "Peter, himself a decorated military officer, said: "He wanted me to come home so that he could talk to me about his life growing up in the shadow of war and also about his part in some of the most famous raids during the Second World War; throughout his life he'd never revealed these secrets”. His secret journal, published now as a book, recalls daring missions behind enemy lines in France, the Mediterranean and Italy. It also documents his capture, escape and recapture in Italy and Germany. "Stokey, as he was known to his war-time comrades, served with 12 Commando, the Small Scale Raiding Force and the SAS. This book tells the story of a modest man who epitomised a generation now nearly all gone, someone who lived no ordinary life." Sometimes a personality-type leaps off the page. I have read several books by special forces people and they uniformly come across as practical, no-nonsense, self-starting, mission-oriented and lacking empathy. Touchy-feely folk they are not. Ho-hum: no surprise. You would expect low openness, high conscientiousness, high extraversion, low agreeableness and very low neuroticism. And that's what you get. This short book itself is a real page turner. Young Horace grew up in the 1920s, dirt-poor, in a flea-ridden Birmingham tenement. Bright enough to go to grammar school, he's soon working as a greengrocer's assistant to help make ends meet. The war rescues him and makes him a commando. Epic deeds follow as our hero raids France, invades North Africa and parachutes into Italy, causing mayhem wherever he goes. At one point in northern Italy, having sustained a rupture after a parachute landing gone wrong, he ends up with "really bad boils in my groin which were going septic and I had also developed scabies." He continues, "It was clear to all of us that I was struggling." He is so bad that his comrades have to leave him, miles inside German-occupied Italy. So he steals a bike and cycles by himself 230 km to the Vatican City in Rome (4-5 days) - which of course he succeeds in reaching - where just in time he gets life-saving medical treatment. After some months fomenting local subversion he is captured and tortured by the Gestapo. Post-war and demobbed, Stokes drifts from one job to another: a confirmed socialist, he appears to have had a problem with mediocre authority. Eventually he becomes a publican and that seems to work for him - he later became chair of the Birmingham licencees union.  Stokes senior died in 1986. Thanks to Adrian for this gift. Tuesday, January 07, 2014 Haplogroup Tree Mutation Map (from 23andMe) See here for background to this entry which analyses my Y-DNA haplotype within the R1b haplogroup tree. The call column indicates what was found in my sequencing (not all bases may have been tested for or derived, resulting in a blank); anc means the ancestral form; der means the derived form via the specific SNP (mutation) at this Y-chromosome locus. 
My Paternal line (Y chromosome): R1b1b2a1a2f*

R1b1b2a1a2f* defining mutations:
R1b1b2a1a2f defining mutations:
  rs11799226 (L21)   G  C  G
R1b1b2a1a2 defining mutations:
  rs34276300 (P312)  C  A
R1b1b2a1a defining mutations:
  rs13304168 (L52)   T  C  T
  rs9785659 (P311)   G  A  G
  rs9786076 (L11)    C  T  C
  rs9786283 (P310)   C  A  C
R1b1b2a1 defining mutations:
  rs9786140 (L51)    A  G  A
R1b1b2a defining mutations:
  rs9785971 (L23)    G  A
  rs9786142 (L49)    A  T  A
R1b1b2 defining mutations:
  rs877756 (S3)      C  T  C
  rs9786153 (M269)   C  T  C
R1b1b defining mutations:
  rs9785702 (P297)   C  G  C
R1b1 defining mutations:
  rs150173 (P25)     C  A
R1b defining mutations:
  rs9786184 (M343)   C  A
R1 defining mutations:
  rs1118473 (P286)   T  C  T
  rs17307070 (P225)  T  G  T
  rs2032624 (M173)   C  A  C
  rs7067478 (P242)   G  A
  rs9785717 (P238)   A  G  A
  rs9785959 (P236)   G  C  G
  rs9786197 (P234)   C  T  C
  rs9786232 (P233)   G  T  G
History of quantum mechanics

10 influential figures in the history of quantum mechanics.

The history of quantum mechanics is a fundamental part of the history of modern physics. Quantum mechanics' history, as it interlaces with the history of quantum chemistry, began essentially with a number of different scientific discoveries: the 1838 discovery of cathode rays by Michael Faraday; the 1859–60 winter statement of the black-body radiation problem by Gustav Kirchhoff; the 1877 suggestion by Ludwig Boltzmann that the energy states of a physical system could be discrete; the discovery of the photoelectric effect by Heinrich Hertz in 1887; and the 1900 quantum hypothesis by Max Planck that any energy-radiating atomic system can theoretically be divided into a number of discrete "energy elements" ε (epsilon) such that each of these energy elements is proportional to the frequency ν with which each of them individually radiates energy, as defined by the following formula:

\varepsilon = h\nu

where h is a numerical value called Planck's constant.

Then, Albert Einstein in 1905, in order to explain the photoelectric effect previously reported by Heinrich Hertz in 1887, postulated consistently with Max Planck's quantum hypothesis that light itself is made of individual quantum particles, which in 1926 came to be called photons by Gilbert N. Lewis. The photoelectric effect was observed upon shining light of particular wavelengths on certain materials, such as metals, which caused electrons to be ejected from those materials only if the light quantum energy was greater than the work function of the metal's surface.

The phrase "quantum mechanics" was coined (in German, Quantenmechanik) by the group of physicists including Max Born, Werner Heisenberg, and Wolfgang Pauli, at the University of Göttingen in the early 1920s, and was first used in Born's 1924 paper "Zur Quantenmechanik".[1] In the years to follow, this theoretical basis slowly began to be applied to chemical structure, reactivity, and bonding.

Ludwig Boltzmann's diagram of the I2 molecule proposed in 1898 showing the atomic "sensitive region" (α, β) of overlap.

Ludwig Boltzmann suggested in 1877 that the energy levels of a physical system, such as a molecule, could be discrete. He was a founder of the Austrian Mathematical Society, together with the mathematicians Gustav von Escherich and Emil Müller. Boltzmann's rationale for the presence of discrete energy levels in molecules such as those of iodine gas had its origins in his statistical thermodynamics and statistical mechanics theories and was backed up by mathematical arguments, as would also be the case twenty years later with the first quantum theory put forward by Max Planck.

In 1900, the German physicist Max Planck reluctantly introduced the idea that energy is quantized in order to derive a formula for the observed frequency dependence of the energy emitted by a black body, called Planck's law, that included a Boltzmann distribution (applicable in the classical limit). Planck's law[2] can be stated as follows:

B_\nu(\nu, T) = \frac{2 h \nu^3}{c^2} \cdot \frac{1}{e^{h\nu/kT} - 1}

where: h is the Planck constant; c is the speed of light in a vacuum; k is the Boltzmann constant; ν is the frequency of the electromagnetic radiation; and T is the temperature of the body in kelvins. The earlier Wien approximation may be derived from Planck's law by assuming h\nu \gg kT.
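As a quick numerical illustration of this law, the following sketch evaluates the spectral radiance on a frequency grid and locates the peak at two example temperatures; the constants are rounded SI values and the temperatures are arbitrary choices.

```python
import numpy as np

h = 6.626e-34   # Planck constant, J s
c = 2.998e8     # speed of light, m/s
k = 1.381e-23   # Boltzmann constant, J/K

def planck(nu, T):
    """Spectral radiance B(nu, T) of a black body."""
    return (2 * h * nu**3 / c**2) / np.expm1(h * nu / (k * T))

nu = np.linspace(1e12, 3e15, 200_000)  # frequency grid, Hz
for T in (3000.0, 6000.0):
    peak = nu[np.argmax(planck(nu, T))]
    print(f"T = {T:.0f} K: peak near {peak:.3e} Hz")
```

The lower temperature yields a peak at a lower frequency (longer wavelength), which is the shift described in the black-body radiation discussion that follows.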
Moreover, the application of Planck's quantum theory to the electron allowed Ștefan Procopiu in 1911–1913, and subsequently Niels Bohr in 1913, to calculate the magnetic moment of the electron, which was later called the "magneton"; similar quantum computations, but with numerically quite different values, were subsequently made possible for both the magnetic moments of the proton and the neutron that are three orders of magnitude smaller than that of the electron. Photoelectric effect The emission of electrons from a metal plate caused by light quanta (photons) with energy greater than the work function of the metal. The photoelectric effect reported by Heinrich Hertz in 1887, and explained by Albert Einstein in 1905. Low-energy phenomena: Photoelectric effect Mid-energy phenomena: Compton scattering High-energy phenomena: Pair production In 1905, Einstein explained the photoelectric effect by postulating that light, or more generally all electromagnetic radiation, can be divided into a finite number of "energy quanta" that are localized points in space. From the introduction section of his March 1905 quantum paper, "On a heuristic viewpoint concerning the emission and transformation of light", Einstein states: "According to the assumption to be contemplated here, when a light ray is spreading from a point, the energy is not distributed continuously over ever-increasing spaces, but consists of a finite number of 'energy quanta' that are localized in points in space, move without dividing, and can be absorbed or generated only as a whole." This statement has been called the most revolutionary sentence written by a physicist of the twentieth century.[3] These energy quanta later came to be called "photons", a term introduced by Gilbert N. Lewis in 1926. The idea that each photon had to consist of energy in terms of quanta was a remarkable achievement; it effectively solved the problem of black-body radiation attaining infinite energy, which occurred in theory if light were to be explained only in terms of waves. In 1913, Bohr explained the spectral lines of the hydrogen atom, again by using quantization, in his paper of July 1913 On the Constitution of Atoms and Molecules. These theories, though successful, were strictly phenomenological: during this time, there was no rigorous justification for quantization, aside, perhaps, from Henri Poincaré's discussion of Planck's theory in his 1912 paper Sur la théorie des quanta.[4][5] They are collectively known as the old quantum theory. The phrase "quantum physics" was first used in Johnston's Planck's Universe in Light of Modern Physics (1931). With decreasing temperature, the peak of the blackbody radiation curve shifts to longer wavelengths and also has lower intensities. The blackbody radiation curves (1862) at left are also compared with the early, classical limit model of Rayleigh and Jeans (1900) shown at right. The short wavelength side of the curves was already approximated in 1896 by the Wien distribution law. Niels Bohr's 1913 quantum model of the atom, which incorporated an explanation of Johannes Rydberg's 1888 formula, Max Planck's 1900 quantum hypothesis, i.e. that atomic energy radiators have discrete energy values (ε = hν), J. J. Thomson's 1904 plum pudding model, Albert Einstein's 1905 light quanta postulate, and Ernest Rutherford's 1907 discovery of the atomic nucleus. Note that the electron does not travel along the black line when emitting a photon. 
It jumps, disappearing from the outer orbit and appearing in the inner one and cannot exist in the space between orbits 2 and 3. In 1923, the French physicist Louis de Broglie put forward his theory of matter waves by stating that particles can exhibit wave characteristics and vice versa. This theory was for a single particle and derived from special relativity theory. Building on de Broglie's approach, modern quantum mechanics was born in 1925, when the German physicists Werner Heisenberg, Max Born, and Pascual Jordan[6][7] developed matrix mechanics and the Austrian physicist Erwin Schrödinger invented wave mechanics and the non-relativistic Schrödinger equation as an approximation to the generalised case of de Broglie's theory.[8] Schrödinger subsequently showed that the two approaches were equivalent. Heisenberg formulated his uncertainty principle in 1927, and the Copenhagen interpretation started to take shape at about the same time. Starting around 1927, Paul Dirac began the process of unifying quantum mechanics with special relativity by proposing the Dirac equation for the electron. The Dirac equation achieves the relativistic description of the wavefunction of an electron that Schrödinger failed to obtain. It predicts electron spin and led Dirac to predict the existence of the positron. He also pioneered the use of operator theory, including the influential bra–ket notation, as described in his famous 1930 textbook. During the same period, Hungarian polymath John von Neumann formulated the rigorous mathematical basis for quantum mechanics as the theory of linear operators on Hilbert spaces, as described in his likewise famous 1932 textbook. These, like many other works from the founding period, still stand, and remain widely used. The field of quantum chemistry was pioneered by physicists Walter Heitler and Fritz London, who published a study of the covalent bond of the hydrogen molecule in 1927. Quantum chemistry was subsequently developed by a large number of workers, including the American theoretical chemist Linus Pauling at Caltech, and John C. Slater into various theories such as Molecular Orbital Theory or Valence Theory. Beginning in 1927, researchers made attempts at applying quantum mechanics to fields instead of single particles, resulting in quantum field theories. Early workers in this area include P.A.M. Dirac, W. Pauli, V. Weisskopf, and P. Jordan. This area of research culminated in the formulation of quantum electrodynamics by R.P. Feynman, F. Dyson, J. Schwinger, and S. Tomonaga during the 1940s. Quantum electrodynamics describes a quantum theory of electrons, positrons, and the electromagnetic field, and served as a model for subsequent quantum field theories.[6][7][9] Feynman diagram of gluon radiation in quantum chromodynamics The theory of quantum chromodynamics was formulated beginning in the early 1960s. The theory as we know it today was formulated by Politzer, Gross and Wilczek in 1975. Building on pioneering work by Schwinger, Higgs and Goldstone, the physicists Glashow, Weinberg and Salam independently showed how the weak nuclear force and quantum electrodynamics could be merged into a single electroweak force, for which they received the 1979 Nobel Prize in Physics. Founding experiments[edit] See also[edit] 1. ^ Max Born, My Life: Recollections of a Nobel Laureate, Taylor & Francis, London, 1978. 
("We became more and more convinced that a radical change of the foundations of physics was necessary, i.e., a new kind of mechanics for which we used the term quantum mechanics. This word appears for the first time in physical literature in a paper of mine...") 2. ^ M. Planck (1914). The theory of heat radiation, second edition, translated by M. Masius, Blakiston's Son & Co, Philadelphia, pp. 22, 26, 42–43. 3. ^ Folsing, Albrecht (1997), Albert Einstein: A Biography, trans. Ewald Osers, Viking  5. ^ Irons, F. E. (August 2001), "Poincaré's 1911–12 proof of quantum discontinuity interpreted as applying to atoms", American Journal of Physics, 69 (8): 879–84, Bibcode:2001AmJPh..69..879I, doi:10.1119/1.1356056  6. ^ a b David Edwards,The Mathematical Foundations of Quantum Mechanics, Synthese, Volume 42, Number 1/September, 1979, pp. 1–70. 8. ^ Hanle, P.A. (December 1977), "Erwin Schrodinger's Reaction to Louis de Broglie's Thesis on the Quantum Theory.", Isis, 68 (4): 606–09, doi:10.1086/351880  9. ^ S. Auyang, How is Quantum Field Theory Possible?, Oxford University Press, 1995. 10. ^ The Davisson-Germer experiment, which demonstrates the wave nature of the electron Further reading[edit] • Bacciagaluppi, Guido; Valentini, Antony (2009), Quantum theory at the crossroads: reconsidering the 1927 Solvay conference, Cambridge, UK: Cambridge University Press, p. 9184, Bibcode:2006quant.ph..9184B, ISBN 978-0-521-81421-8, OCLC 227191829, arXiv:quant-ph/0609184Freely accessible  • Bernstein, Jeremy (2009), Quantum Leaps, Cambridge, Massachusetts: Belknap Press of Harvard University Press, ISBN 978-0-674-03541-6  • Cramer, JG (2015). The Quantum Handshake: Entanglement, Nonlocality and Transactions. Springer Verlag. ISBN 978-3-319-24642-0.  • Greenberger, Daniel, Hentschel, Klaus, Weinert, Friedel (Eds.) Compendium of Quantum Physics. Concepts, Experiments, History and Philosophy, New York: Springer, 2009. ISBN 978-3-540-70626-7. • Jammer, Max (1966), The conceptual development of quantum mechanics, New York: McGraw-Hill, OCLC 534562  • Jammer, Max (1974), The philosophy of quantum mechanics: The interpretations of quantum mechanics in historical perspective, New York: Wiley, ISBN 0-471-43958-4, OCLC 969760  • F. Bayen, M. Flato, C. Fronsdal, A. Lichnerowicz and D. Sternheimer, Deformation theory and quantization I,and II, Ann. Phys. (N.Y.), 111 (1978) pp. 61-151. • D. Cohen, An Introduction to Hilbert Space and Quantum Logic, Springer-Verlag, 1989. This is a thorough and well-illustrated introduction. • Finkelstein, D. (1969), "Matter, Space and Logic", Boston Studies in the Philosophy of Science, Boston Studies in the Philosophy of Science, V: 1969, ISBN 978-94-010-3383-1, doi:10.1007/978-94-010-3381-7_4.  • A. Gleason. Measures on the Closed Subspaces of a Hilbert Space, Journal of Mathematics and Mechanics, 1957. • R. Kadison. Isometries of Operator Algebras, Annals of Mathematics, Vol. 54, pp. 325–38, 1951 • G. Ludwig. Foundations of Quantum Mechanics, Springer-Verlag, 1983. • G. Mackey. Mathematical Foundations of Quantum Mechanics, W. A. Benjamin, 1963 (paperback reprint by Dover 2004). • R. Omnès. Understanding Quantum Mechanics, Princeton University Press, 1999. (Discusses logical and philosophical issues of quantum mechanics, with careful attention to the history of the subject). • N. Papanikolaou. Reasoning Formally About Quantum Systems: An Overview, ACM SIGACT News, 36(3), pp. 51–66, 2005. • C. Piron. Foundations of Quantum Physics, W. A. Benjamin, 1976. • Hermann Weyl. 
The Theory of Groups and Quantum Mechanics, Dover Publications, 1950. • A. Whitaker. The New Quantum Age: From Bell's Theorem to Quantum Computation and Teleportation, Oxford University Press, 2011, ISBN 978-0-19-958913-5 • Stephen Hawking. The Dreams That Stuff Is Made Of, Running Press, 2011, ISBN 978-0-76-243434-3 • A. Douglas Stone. Einstein and the Quantum, the Quest of the Valiant Swabian, Princeton University Press, 2006. Print. • Richard P. Feynman. QED: The Strange Theory of Light and Matter. Princeton, NJ: Princeton University Press, 2006. Print.
Friday, September 26, 2008 1. Black hole entropy and dark black holes 2. Hierarchy of Planck lengths 3. Dark flow 4. How to avoid heat death? Tuesday, September 23, 2008 Flyby anomaly as a relativistic transverse Doppler effect? Half a year ago I discussed a model for the flyby anomaly based on the hypothesis that a dark matter ring around the orbit of Earth causes the effect. The model reproduced the formula deduced for the change of the velocity of the spacecraft at a qualitative level, and contained a single free parameter: essentially the linear density of the dark matter at the flux tube. From Lubos I learned about a new twist in the story of the flyby anomaly. On September 12, 2007, Jean-Paul Mbelek proposed an explanation of the flyby anomaly as a relativistic transverse Doppler effect. The model also predicts the functional dependence of the magnitude of the effect on the kinematic parameters, and the prediction is consistent with the empirical findings in the example considered. Therefore the story of the flyby anomaly might be finished, and dark matter at the orbit of Earth could bring in only an additional effect. It is probably too much to hope for this kind of effect to be large enough if present. For background see the chapter TGD and Astrophysics. Monday, September 22, 2008 Tritium beta decay anomaly and variations in the rates of radioactive processes The determination of the neutrino mass from the beta decay of tritium leads to a tachyonic mass squared [2,3,4,5]. I have considered several alternative explanations for this long-standing anomaly. The first class of models relies on the presence of a dark neutrino or antineutrino belt around the orbit of Earth. The second class of models relies on the prediction of the nuclear string model that the neutral color bonds connecting nucleons to the nuclear string can also be charged. This predicts a large number of fake nuclei having only apparently the proton and neutron numbers deduced from the mass number. 1. The ³He nucleus resulting in the decay could be fake (a tritium nucleus with one positively charged color bond making it look like ³He). The idea was that the slightly smaller mass of the fake ³He might explain the anomaly; it turned out, however, that the model cannot explain the variation of the anomaly from experiment to experiment. 2. Later (yesterday evening!) I realized that also the initial ³H nucleus could be fake (a ³He nucleus with one negatively charged color bond). It turned out that the fake tritium option has the potential to explain all aspects of the anomaly and also other anomalies related to radioactive and alpha decays of nuclei. 3. Just one day ago I still believed in the alternative based on the assumption of a dark neutrino or antineutrino belt surrounding Earth's orbit. This model has the potential to explain satisfactorily several aspects of the anomaly but fails in its simplest form to explain the dependence of the anomaly on experiment. Since the fake tritium scenario is based only on the basic assumptions of the nuclear string model and brings in only new values of kinematical parameters, it is definitely favored. In the following I shall describe only the models based on the decay of tritium to fake helium and the decay of fake tritium to helium. 1. Fake ³He option Consider first the fake ³He option. Tritium (pnn) would decay with some rate to a fake ³He, call it ³Hef, which is actually a tritium nucleus containing one positively charged color bond and possessing a mass slightly different from that of ³He (ppn). 1.
In this kind of situation the expression for the function K(E,k) differs from K(stand), since the upper bound E0 for the maximal electron energy is modified: E0 → E1 = M(³H) − M(³Hef) − mν = M(³H) − M(³He) + ΔM − mν, with ΔM = M(³He) − M(³Hef). Depending on whether ³Hef is heavier or lighter than ³He, E0 decreases or increases. From Vb ∈ [5, 100] eV and from the TGD based prediction of order m(ν̄) ~ .27 eV one can conclude that ΔM should be in the range 5-100 eV. 2. In the lowest approximation K(E) can be written as K(E) = K0(E,E1) θ(E1−E) ≈ (E1−E) θ(E1−E). Here θ(x) denotes the step function and K0(E,E1) corresponds to the massless antineutrino. 3. If a fraction p of the final state nuclei corresponds to a fake ³He, the function K(E) deduced from data is a linear combination of the functions K(E,³He) and K(E,³Hef) and is given by K(E) = (1−p) K(E,³He) + p K(E,³Hef) ≈ (1−p)(E0−E)θ(E0−E) + p(E1−E)θ(E1−E) in the approximation mν = 0. For m(³Hef) < m(³He) one has E1 > E0, giving K(E) = (E0−E)θ(E0−E) + p(E1−E0)θ(E1−E)θ(E−E0). K(E,E0) is shifted upwards by a constant term (1−p)ΔM in the region E0 > E. At E = E0 the derivative of K(E) is infinite, which corresponds to the divergence of the derivative of the square root function in the simpler parametrization using a tachyonic mass. The prediction of the model is the presence of a tail corresponding to the region E0 < E < E1. 4. The model does not as such explain the bump near the end point of the spectrum. The decay ³H → ³Hef can be interpreted in terms of an exotic weak decay d → u + W− of the exotic d quark at the end of the color bond connecting nucleons inside ³H. The rate for these interactions cannot differ too much from that for ordinary weak interactions, and the W boson must transform to its ordinary variant before the decay W → e + ν̄. Either the weak decay at quark level or the phase transition could take place with a considerable rate only for low enough virtual W boson energies, say for energies for which the Compton length of the massless W boson corresponds to the size scale of color flux tubes, predicted to be much longer than the nuclear size. If so, the anomaly would be absent for higher energies and a bump would result. 5. The value of K(E) at E = E0 is Vb ≡ p(E1−E0). The variation of the fraction p could explain the observed dependence of Vb on experiment as well as its time variation. It is however difficult to understand how p could vary.
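The mixture described in points 2-5 above is easy to sketch numerically. The following Python fragment is my own illustration, not part of the original post: the endpoint E0 ≈ 18.6 keV is the standard tritium value, while ΔM and p are freely chosen. It implements K(E) ≈ (1−p)(E0−E)θ(E0−E) + p(E1−E)θ(E1−E) and checks the endpoint value Vb = p(E1−E0):

```python
import numpy as np

def K(E, E0, dM, p):
    """Kurie-type function for a mixture of ordinary and 'fake' final states,
    in the massless-antineutrino approximation:
    K(E) = (1-p)(E0-E)theta(E0-E) + p(E1-E)theta(E1-E), with E1 = E0 + dM."""
    E1 = E0 + dM
    return (1 - p) * np.clip(E0 - E, 0.0, None) + p * np.clip(E1 - E, 0.0, None)

E0 = 18_575.0        # tritium endpoint energy in eV (approximate standard value)
dM, p = 50.0, 0.1    # illustrative mass shift (eV) and fake fraction

E = np.array([E0 - 100.0, E0 - 10.0, E0, E0 + 25.0, E0 + 50.0])
print(K(E, E0, dM, p))                                     # note the tail above E0
print("K(E0) =", K(np.array([E0]), E0, dM, p)[0],
      "= p*(E1-E0) =", p * dM)                             # endpoint value Vb
```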
2. Fake ³H option Assume that a fraction p of the tritium nuclei are fake and correspond to ³He nuclei with one negatively charged color bond. 1. By repeating the previous calculation one obtains exactly the same expression for K(E) in the approximation mν = 0, but with the replacement ΔM = M(³He) − M(³Hef) → M(³Hf) − M(³H). 2. In this case it is possible to understand the variations in the shape of K(E) if the fraction of ³Hf varies in time and from experiment to experiment. A possible mechanism inducing this variation is a transition inducing the transformation ³Hf → ³H by an exotic weak decay d + p → u + n, where u and d correspond to the quarks at the ends of the color flux tubes. This kind of transition could be induced by the absorption of X-rays, say artificial X-rays or X-rays from the Sun. The inverse of this process in the Sun could generate X-rays which induce this process in a resonant manner at the surface of Earth. 3. The well-known, poorly understood X-ray bursts from the Sun during solar flares in the wavelength range 1-8 Å correspond to energies in the range 1.6-12.4 keV, 3 octaves in good approximation (a short numerical check of this conversion is given after this list). This radiation could be partly due to transitions between ordinary and exotic states of nuclei rather than to bremsstrahlung resulting from the acceleration of charged particles to relativistic energies. The energy range suggests the presence of three p-adic length scales: the nuclear string model indeed predicts several p-adic length scales for color bonds, corresponding to different mass scales for the quarks at the ends of the bonds. This energy range is considerably above the energy range 5-100 eV and suggests the range [4×10^−4, 6×10^−2] for the values of p. The existence of these excitations would mean a new branch of low energy nuclear physics, which might be dubbed X-ray nuclear physics. 4. The approximately 1/2 year period of the temporal variation would naturally correspond to the 1/R^2 dependence of the intensity of X-ray radiation from the Sun. There is evidence that the period is a few hours longer than 1/2 year, which supports the view that the origin of the periodicity is not purely geometric but relates to the dynamics of X-ray radiation from the Sun. Note that for 2 hours one would have ΔT/T ≈ 2^−11, which defines a fundamental constant in the TGD Universe and is also near to the electron-proton mass ratio. 5. All nuclei could appear as similar anomalous variants. Since both weak and strong decay rates are sensitive to the binding energy, it is possible to test this prediction by finding out whether nuclear decay rates show anomalous time variation. 6. The model could also explain other anomalies of radioactive reaction rates, including the findings of Shnoll [1] and the unexplained fluctuations in the decay rates of ³²Si and ²²⁶Ra reported quite recently and correlating with 1/R^2, R the distance between Earth and Sun. ²²⁶Ra decays by alpha emission, but the sensitive dependence of the alpha decay rate on binding energy means that the temporal variation of the fraction of fake ²²⁶Ra isotopes could explain the variation of the decay rates. The intensity of the X-ray radiation from the Sun is proportional to 1/R^2 so that the correlation of the fluctuation with distance would emerge naturally. 7. Also a dip in the decay rates of ⁵⁴Mn coincident with a peak in proton and X-ray fluxes during a solar flare has been observed: the proposal is that the neutrino flux from the Sun is also enhanced during the solar flare and induces the effect. A peak in X-ray flux is a more natural explanation in the TGD framework. 8. The model predicts an interaction between atomic physics and nuclear physics, which might be of relevance in biology. For instance, the transitions between exotic and ordinary variants of nuclei could yield X-rays inducing atomic transitions or ionization. The wavelength range 1-8 Å for anomalous X-rays corresponds to the range Z ∈ [11, 30] for ionization energies. The biologically important ions Na+, Mg++, P-, Cl-, K+, Ca++ have Z = (11, 12, 15, 17, 19, 20). I have proposed that Na+, Cl-, K+ (fermions) are actually bosonic exotic ions forming Bose-Einstein condensates at magnetic flux tubes (see this). The exchange of W bosons between neutral Ne and A(rgon) atoms (bosons) could yield exotic bosonic variants of Na+ (perhaps even Mg++, which is a boson also as an ordinary ion) and Cl- ions. A similar exchange between A atoms could yield exotic bosonic variants of Cl- and K+ (and even Ca++, which is also a boson as an ordinary variant). This transformation might relate to the paradoxical finding that noble gases can act as narcotics. This hypothesis is testable by measuring the nuclear weights of these ions. X-rays from the Sun are not present during nighttime, and this could relate to the night-day cycle of living organisms. Note that the magnetic bodies are of the size scale of Earth and even larger, so that the exotic ions inside them could be subject to intense X-ray radiation. X-rays could also be dark X-rays with a large Planck constant and thus with a much lower frequency than ordinary X-rays, so that control could be possible.
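As a quick check of the wavelength-to-energy conversion used in point 3 of the list above (my own arithmetic, not part of the post), E[keV] ≈ 12.398/λ[Å]:

```python
def angstrom_to_kev(wavelength_angstrom):
    """Photon energy in keV from wavelength in angstroms: E = h*c/lambda ~ 12.398 keV*A / lambda."""
    return 12.398 / wavelength_angstrom

for lam in (1.0, 2.0, 4.0, 8.0):
    print(f"{lam:4.1f} A  ->  {angstrom_to_kev(lam):6.2f} keV")
# 1 A -> 12.40 keV and 8 A -> 1.55 keV: the 1-8 A flare band indeed spans
# roughly 1.6-12.4 keV, i.e. three octaves in energy.
```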
[1] S. E. Shnoll et al (1998), Realization of discrete states during fluctuations in macroscopic processes, Uspekhi Fisicheskikh Nauk, Vol. 41, No. 10, pp. 1025-1035. [2] V. M. Lobashev et al (1996), in Neutrino 96 (Ed. K. Enqvist, K. Huitu, J. Maalampi), World Scientific, Singapore. [3] Ch. Weinheimer et al (1993), Phys. Lett. B 300, 210. [4] J. I. Collar (1996), Endpoint Structure in Beta Decay from Coherent Weak-Interaction of the Neutrino, hep-ph/9611420. [5] G. J. Stephenson Jr. (1993), Perspectives in Neutrinos, Atomic Physics and Gravitation, ed. J. T. Thanh Van, T. Damour, E. Hinds and J. Wilkerson (Editions Frontieres, Gif-sur-Yvette), p. 31. For more details see the chapters TGD and Nuclear Physics and Nuclear String Hypothesis of "p-Adic Length Scale Hypothesis and Dark Matter Hierarchy". Monday, September 15, 2008 Zero energy ontology, self hierarchy, and the notion of time In the previous posting I discussed the most recent view about zero energy ontology and the p-adicization program. One manner to test the internal consistency of this framework is to formulate the basic notions and problems of the TGD inspired quantum theory of consciousness and quantum biology in terms of zero energy ontology. I have discussed these topics already earlier, but the more detailed understanding of the role of causal diamonds (CDs) brings many new aspects to the discussion. In consciousness theory the basic challenges are to understand the asymmetry between positive and negative energies and between the two directions of geometric time at the level of conscious experience, the correspondence between experienced and geometric time, and the emergence of the arrow of time. One should also explain why human sensory experience is about a rather narrow time interval of about .1 seconds and why memories are about the interior of a much larger CD with a time scale of the order of a lifetime. One should also have a vision about how the evolution of consciousness takes place: how quantum leaps leading to an expansion of consciousness occur. In the following my intention is to demonstrate that the TGD inspired theory of consciousness and quantum TGD proper indeed seem to be in tune, and that this process of comparison helps considerably in the attempt to develop the TGD based ontology at the level of details. 1. Causal diamonds as correlates for selves Quantum jump as a moment of consciousness, self as a sequence of quantum jumps integrating to self, and the self hierarchy with sub-selves experienced as mental images are the basic notions of the TGD inspired quantum theory of consciousness. In the most ambitious program the self hierarchy reduces to a fractal hierarchy of quantum jumps within quantum jumps. It is natural to interpret CDs as correlates of selves. CDs can be interpreted in two manners: as subsets of the generalized imbedding space or as sectors of the world of classical worlds (WCW). Accordingly, selves correspond to CDs of the generalized imbedding space or sectors of WCW, literally separate interacting quantum Universes. The spiritually oriented reader might speak of Gods.
Sub-selves correspond to sub-CDs geometrically. The contents of consciousness of a self are about the interior of the corresponding CD at the level of the imbedding space. For sub-selves the wave function for the position of the tip of the CD brings in the delocalization of the sub-WCW. The fractal hierarchy of CDs within CDs defines the counterpart for the hierarchy of selves: the quantization of the time scale of planned action and memory as T(k) = 2^k T0 suggests an interpretation for the fact that we experience octaves as equivalent in music experience. 2. Why is sensory experience about so short a time interval? The CD picture automatically implies the 4-D character of conscious experience, and memories form part of conscious experience even at the elementary particle level: in fact, the secondary p-adic time scale of the electron is T = .1 seconds, defining a fundamental time scale in living matter. The problem is to understand why the sensory experience is about a short interval of geometric time rather than about the entire personal CD with a temporal size of the order of a lifetime. The obvious explanation would be that sensory input corresponds to sub-selves (mental images) which correspond to CDs with T(127) ≈ .1 s (electrons or their Cooper pairs) at the upper light-like boundary of the CD assignable to the self. This requires a strong asymmetry between the upper and lower light-like boundaries of CDs. 1. The only reasonable manner to explain the situation seems to be that the addition of CDs within CDs in the state construction must always glue them to the upper light-like boundary of the CD along a light-like radial ray from the tip of the past directed light-cone. This conforms with the classical picture according to which classical sensory data arrives from the geometric past with a velocity which is at most the light velocity. 2. One must also explain the rare but real occurrence of phase conjugate signals, understandable as negative energy signals propagating towards the geometric past. The conditions making negative energy signals possible are achieved when the sub-CD is glued to both the past and future directed light-cones at the space-like edge of the CD along light-like rays emerging from the edge. This exceptional case gives negative energy signals traveling to the geometric past. The above mentioned basic control mechanism of biology would represent a particular instance of this situation. Negative energy signals as a basic mechanism of intentional action would explain why living matter seems to be so special. 3. Geometric memories would correspond to the lower boundaries of CDs and would not in general be sharp, because only the sub-CDs glued to both the upper and lower light-cone boundaries would be present. A temporal sequence of mental images, say the sequence of digits of a phone number, could correspond to a sequence of sub-CDs glued to the upper light-cone boundary. 4. Sharing of mental images corresponds to a fusion of sub-selves/mental images into a single sub-self by quantum entanglement: the space-time correlate for this could be flux tubes connecting the space-time sheets associated with sub-selves, themselves represented by space-time sheets inside their CDs. It could be that these "episodal" memories correspond to CDs at the upper light-cone boundary of the CD. On the basis of these arguments it seems that the basic conceptual framework of the TGD inspired theory of consciousness can be realized in zero energy ontology. Interesting questions relate to how dynamical selves are. 1. Is a self doomed to live inside the same sub-WCW eternally as a lonely god?
This question has already been answered: there are interactions between the sub-CDs of a given CD, and one can think of selves as quantum superpositions of states in CDs with a wave function having as its argument the tips of the CD, or rather only the second one, since T is assumed to be quantized. 2. Is there a largest CD in the personal CD hierarchy of a self in an absolute sense? Or is the largest CD present only in the sense that the contribution to the contents of consciousness coming from very large CDs is negligible? Long time scales T correspond to low frequencies, and thermal noise might indeed mask these contributions very effectively. Here however the hierarchy of Planck constants and the generalization of the imbedding space would come to the rescue by allowing dark EEG photons to have energies above the thermal energy. 3. Can selves evolve in the sense that the size of the CD increases in quantum leaps so that the corresponding time scale T = 2^k T0 of memory and planned action increases? Geometrically this kind of leap would mean that the CD becomes a sub-CD of a larger CD, either at the level of conscious experience or in an absolute sense. This leap can occur in two senses: as an increase of the largest p-adic time scale in the personal hierarchy of space-time sheets, or as an increase of the largest value of Planck constant in the personal dark matter hierarchy. At the level of the individual this would mean the emergence of increasingly lower frequencies in the generalization of EEG and of levels of the dark matter hierarchy with a large value of Planck constant. 4. In the 2-D illustration, the leap leading to a higher level of the self hierarchy would simply mean the continuation of the CD to the right or left in the 2-D visualization of the CD. Since the preferred M2 is contained in the tangent space of space-time surfaces, and since the preferred M2 plays a key role in the dark matter hierarchy too, one must ask whether the 2-D illustration might have some deeper truth in it. 3. New view about the arrow of time Perhaps the most fundamental problem related to the notion of time concerns the relationship between experienced time and geometric time. The two notions are definitely different: think only of the irreversibility of experienced time versus the reversibility of geometric time, and of the absence of a future in experienced time. Also the deterministic character of the dynamics in geometric time is in conflict with the notion of free will supported by direct experience. In the standard materialistic ontology experienced time and geometric time are identified. In the naivest picture the flow of time is interpreted in terms of the motion of a 3-D time=constant surface of space-time towards the geometric future, without any explanation for why this kind of motion would occur. This identification is plagued by several difficulties. In special relativity the difficulties relate to the impossibility of defining the notion of simultaneity in a unique manner, and the only possible manner to save this notion seems to be the replacement of the time=constant 3-surface with the past directed light-cone assignable to the world-line of the observer. In general relativity additional difficulties are caused by general coordinate invariance unless one generalizes the picture of special relativity: problems are however caused by the fact that past light-cones make sense only locally.
In quantum physics quantum measurement theory leads to a paradoxical situation, since the observed localization of the state function reduction to a finite space-time volume is in conflict with the determinism of the Schrödinger equation. 1. Selves correspond to CDs and their own sub-WCWs. These sub-WCWs and their projections to the imbedding space do not move anywhere. Therefore the standard explanation for the arrow of geometric time cannot work. Neither can the experience of the flow of time correspond to quantum leaps increasing the size of the largest CD contributing to the conscious experience of the self. 2. The only plausible interpretation is based on quantum classical correspondence and the fact that space-times are 4-surfaces of the imbedding space. If a quantum jump corresponds, in the first approximation, to a shift of the quantum superposition of space-time sheets towards the geometric past (as quantum classical correspondence suggests), one can indeed understand the arrow of time. Space-time surfaces simply shift backwards with respect to the geometric time of the imbedding space and therefore with respect to the 8-D perceptive field defined by the CD. This creates in the materialistic mind a kind of temporal variant of the train illusion. Space-time as a 4-surface and macroscopic and macro-temporal quantum coherence are absolutely essential for this interpretation to make sense. Why should this shifting always take place in the direction of the geometric past of the imbedding space? What seems clear is that the asymmetric construction of zero energy states should correlate with the preferred direction. If the question is about probabilities, the basic question would be why the probabilities for shifts in the direction of the geometric past are higher. Here some alternative attempts to answer this question are discussed. 1. Cognition and time relate to each other very closely, and the required fusion of real physics with the various p-adic physics of cognition and intentionality could also have something to do with the asymmetry. Indeed, in the p-adic sectors the transcendental values of the p-adic light-cone proper time coordinate correspond to literally infinite values of the real valued light-cone proper time, and one can say that most points of the p-adic space-time sheets serving as correlates of thoughts and intentions always reside in the infinite geometric future in the real sense. Therefore cognition and intentionality would break the symmetry between positive and negative energies and between geometric past and future, and the breaking of the arrow of geometric time could be seen as being induced by intentional action and also as due to the basic aspects of cognitive experience. 2. Zero energy ontology also suggests a possible reason for the asymmetry. Standard quantum mechanics encourages the identification of the space of negative energy states as the dual of the space of positive energy states. There are two kinds of duals. The Hilbert space dual is identified as the space of continuous linear functionals from the Hilbert space to the coefficient field and is isometrically anti-isomorphic with the Hilbert space. This justifies the bra-ket notation. In the case of a vector space the relevant notion is the algebraic dual. The algebraic dual can be identified as an infinite direct product of the coefficient field regarded as a 1-dimensional vector space. The direct product is defined as the set of functions from an infinite index set I to the disjoint union of an infinite number of copies of the coefficient field indexed by I.
An infinite-dimensional vector space corresponds to an infinite direct sum consisting of functions which are non-vanishing for a finite number of indices only. Hence the vector space dual in the infinite-dimensional case contains many more states than the vector space and does not have an enumerable basis. If negative energy states correspond to a subspace of the vector space dual containing the Hilbert space dual, the number of negative energy states is larger than the number of positive energy states. This asymmetry could correspond to a better measurement resolution at the upper light-cone boundary, so that the state space at the lower light-cone boundary would be included, via an inclusion of HFFs, in that associated with the upper light-cone boundary. Geometrically this would mean the possibility to glue to the upper light-cone boundary CDs which can be smaller than those associated with the lower one. 3. The most convincing candidate for an answer comes from consciousness theory. One must also understand why the contents of sensory experience are concentrated around a narrow time interval whereas the time scales of memories and anticipation are much longer. The proposed mechanism is that the resolution of conscious experience is higher at the upper boundary of the CD. Since zero energy states correspond to light-like 3-surfaces, this could be a result of self-organization rather than a fundamental physical law. 1. The key assumption is that CDs have CDs inside CDs and that the vertices of generalized Feynman diagrams are contained within sub-CDs. It is not assumed that CDs are glued to the upper boundary of the CD, since the arrow of time results from self-organization when the distribution of sub-CDs concentrates around the upper boundary of the CD. A category theoretical formulation of generalized Feynman diagrammatics based on this picture has been developed. 2. CDs define the perceptive field for the self. Selves are curious about the space-time sheets outside their perceptive field in the geometric future (a relative notion) of the imbedding space and perform quantum jumps tending to shift the superposition of the space-time sheets in the direction of the geometric past (the past defined as the direction of the shift!). This creates the illusion that there is a time=constant snapshot front of consciousness moving towards the geometric future in a fixed background space-time, as an analog of the train illusion. 3. The fact that news comes from the upper boundary of the CD implies that the self concentrates its attention on this region and improves the resolution of sensory experience and quantum measurement there. The sub-CDs generated in this manner correspond to mental images with contents about this region. As a consequence, the contents of conscious experience, in particular sensory experience, tend to be about the region near the upper boundary. 4. This mechanism in principle allows the arrow of geometric time to vary and to depend on the p-adic length scale and the level of the dark matter hierarchy. The occurrence of phase transitions forcing the arrow of geometric time to be the same everywhere is however plausible, for the reason that the lower and upper boundaries of a given CD must possess the same arrow of geometric time. For details see the chapter TGD as a Generalized Number Theory I: p-Adicization Program. Sunday, September 14, 2008 The most recent vision about zero energy ontology and p-adicization The generalization of the number concept obtained by fusing reals and p-adics along rationals and common algebraics is the basic philosophy behind p-adicization.
This however requires that it is possible to speak about rational points of the imbedding space, and the basic objection against the notion of rational points of the imbedding space common to the real and the various p-adic variants of the imbedding space is the necessity to fix some special coordinates, in turn implying the loss of manifest general coordinate invariance. The isometries of the imbedding space could save the situation provided one can identify some special coordinate system in which the isometry group reduces to its discrete subgroup. The loss of the full isometry group could be compensated by assuming that WCW is a union over sub-WCWs obtained by applying isometries to a basic sub-WCW with a discrete subgroup of isometries. The combination of zero energy ontology, realized in terms of a hierarchy of causal diamonds, and the hierarchy of Planck constants, providing a description of dark matter and leading to a generalization of the notion of the imbedding space, suggests that it is possible to realize this dream. The article TGD: What Might be the First Principles? provides a brief summary of the recent state of quantum TGD, helping to understand the big picture behind the following considerations. 1. Zero energy ontology briefly 1. The basic construct in the zero energy ontology is the space CD×CP2, where the causal diamond CD is defined as an intersection of future and past directed light-cones with a time-like separation between their tips, regarded as points of the underlying universal Minkowski space M4. In zero energy ontology physical states correspond to pairs of positive and negative energy states located at the boundaries of the future and past directed light-cones of a particular CD. CDs form a fractal hierarchy, and one can glue smaller CDs within a larger CD along the upper light-cone boundary along a radial light-like ray: this construction recipe makes it possible to understand the asymmetry between positive and negative energies, why the arrow of experienced time corresponds to the arrow of geometric time, and also why the contents of sensory experience are located in so narrow an interval of geometric time. One can imagine evolution to occur as quantum leaps in which the size of the largest CD in the hierarchy of personal CDs increases in such a manner that it becomes a sub-CD of a larger CD. The p-adic length scale hypothesis follows if the values of the temporal distance T between the tips of the CD come in powers 2^n. All conserved quantum numbers for zero energy states have vanishing net values. The interpretation of zero energy states in the framework of positive energy ontology is as physical events, say scattering events, with the positive and negative energy parts of the state interpreted as the initial and final states of the event. 2. In the realization of the hierarchy of Planck constants CD×CP2 is replaced with a Cartesian product of book-like structures formed by almost copies of CD and CP2, defined by singular coverings and factor spaces of CD and CP2, with singularities corresponding to the intersection M2 ∩ CD and the homologically trivial geodesic sphere S2 of CP2 for which the induced Kähler form vanishes. The coverings and factor spaces of CDs are glued together along the common M2 ∩ CD. The coverings and factor spaces of CP2 are glued together along the common homologically non-trivial geodesic sphere S2. The choice of a preferred M2 as a subspace of the tangent space of X4 at all its points, having an interpretation as the space of non-physical polarizations, brings M2 into the theory also in a different manner.
S2 in turn defines a subspace of the much larger space of vacuum extremals as surfaces inside M4×S2. 3. Configuration space (the world of classical worlds, WCW) decomposes into a union of sub-WCWs corresponding to different choices of M2 and S2 and also to different choices of the quantization axes of spin and energy and of color isospin and hyper-charge for each choice of this kind. This means a breaking down of the isometries to a subgroup. This can be compensated by the fact that the union can be taken over the different choices of this subgroup. 4. p-Adicization requires a further breakdown to discrete subgroups of the resulting subgroups of the isometry groups, but again a union over sub-WCWs corresponding to different choices of the discrete subgroup can be assumed. Discretization also relates naturally to the notion of number theoretic braid. Consider now the critical questions. 1. Very naively one could think that center of mass wave functions in the union of sectors could give rise to representations of the Poincaré group. This does not conform with zero energy ontology, where energy-momentum should be assignable to, say, the positive energy part of the state and where these degrees of freedom are expected to be pure gauge degrees of freedom. If zero energy ontology makes sense, then the states in the union over the various copies corresponding to different choices of M2 and S2 would give rise to wave functions having no dynamical meaning. This would bring in nothing new, so that one could fix the gauge by choosing a preferred M2 and S2 without losing anything. This picture is favored by the interpretation of M2 as the space of longitudinal polarizations. 2. The crucial question is whether it is really possible to speak about zero energy states for a given sector defined by the generalized imbedding space with fixed M2 and S2. Classically this is possible and the conserved quantities are well defined. In the quantal situation the presence of the light-cone boundaries breaks full Poincaré invariance, although the infinitesimal version of this invariance is preserved. Note that the basic dynamical objects are the 3-D light-like "legs" of the generalized Feynman diagrams. 2. Definition of energy in zero energy ontology Can one then define the notion of energy for the positive and negative energy parts of the state? There are two alternative approaches depending on whether one allows or does not allow wave functions for the positions of the tips of the light-cones. Consider first the naive option, in which four-momenta are assigned to the wave functions assigned to the tips of the CDs. 1. The condition that the tips are at a time-like distance does not allow a separation into a product but only the following kind of wave function: Ψ = exp(ip·m) Θ(m^2) Θ(m^0) Φ(p), m = m+ − m−. Here m+ and m− denote the positions of the tips of the light-cones, Θ denotes the step function, and Φ denotes the configuration space spinor field in the internal degrees of freedom of the 3-surface. One can also introduce the decomposition into particles by introducing sub-CDs glued to the upper light-cone boundary of the CD. 2. The first criticism is that only a local eigenstate of the 4-momentum operators p± = (h/2π)∇/i is in question, everywhere except at the boundaries and at the tips of the CD, with exact translational invariance broken by the two step functions, which have a natural classical interpretation. The second criticism is that the quantization of the temporal distance between the tips to T = 2^k T0 is in conflict with translational invariance and reduces it to a discrete scaling invariance.
The less naive approach, which relies on the superconformal structures of quantum TGD, assumes a fixed value of T and therefore allows the crucial quantization condition T = 2^k T0. 1. Since the light-like 3-surfaces assignable to the incoming and outgoing legs of the generalized Feynman diagrams are the basic objects, one can hope to have enough translational invariance to define the notion of energy. If translations are restricted to time-like translations acting in the direction of the future (past), then one has local translation invariance of the dynamics for the classical field equations inside δM4± as a kind of semigroup. Also the M4 translations leading to the interior of X4 from the light-like 2-surfaces act as translations. Classically these restrictions correspond to non-tachyonic momenta defining the allowed directions of translations realizable as particle motions. These two kinds of translations have been assigned to super-canonical conformal symmetries at δM4±×CP2 and to super Kac-Moody type conformal symmetries at light-like 3-surfaces. The Equivalence Principle in the TGD framework states that these two conformal symmetries define a structure completely analogous to a coset representation of conformal algebras, so that the four-momenta associated with the two representations are identical. 2. The condition selecting the preferred extremals of Kähler action is induced by a global selection of M2 as a plane belonging to the tangent space of X4 at all its points. The M4 translations of X4 as a whole in general respect the form of this condition in the interior. Furthermore, if the M4 translations are restricted to M2, also the condition itself - rather than only its general form - is respected. This observation, the earlier experience with the p-adic mass calculations, and also the treatment of quarks and gluons in QCD encourage one to consider the possibility that translational invariance should be restricted to M2 translations, so that the mass squared, the longitudinal momentum and the transversal mass squared would be well defined quantum numbers. This would be enough to realize zero energy ontology. Encouragingly, M2 also appears in the generalization of the causal diamond to a book-like structure forced by the realization of the hierarchy of Planck constants at the level of the imbedding space. 3. That the cm degrees of freedom for the CD would be gauge-like degrees of freedom sounds strange. The paradoxical feeling disappears as one realizes that this is not the case for sub-CDs, which indeed can have non-trivial correlation functions, with either the upper or the lower tip of the CD playing a role analogous to that of an argument of an n-point function in the QFT description. One can also say that the largest CD in the hierarchy defines the infrared cutoff. 3. p-Adic variants of the imbedding space Consider now the construction of the p-adic variants of the imbedding space. 1. Rational values of p-adic coordinates are non-negative, so that the light-cone proper time a4,+ = √(t^2 − z^2 − x^2 − y^2) is the unique Lorentz invariant choice for the p-adic time coordinate near the lower tip of the CD. For the upper tip the identification of a4 would be a4,− = √((t−T)^2 − z^2 − x^2 − y^2). In the p-adic context the simultaneous existence of both square roots would pose additional conditions on T. For 2-adic numbers T = 2^n T0, n ≥ 0 (or more generally T = Σ_{k≥n0} b_k 2^k) would allow these conditions to be satisfied, and this would be one additional reason for T = 2^n T0 implying the p-adic length scale hypothesis.
The remaining coordinates of the CD are naturally the hyperbolic cosines and sines of the hyperbolic angle η±,4 and the cosines and sines of the spherical coordinates θ and φ. 2. The existence of the preferred plane M2 of unphysical polarizations would suggest that the 2-D light-cone proper times a2,+ = √(t^2 − z^2) and a2,− = √((t−T)^2 − z^2) can also be considered. The remaining coordinates would naturally be η±,2 and the cylindrical coordinates (ρ, φ). 3. The transcendental values of a4 and a2 are literally infinite as real numbers and could be visualized as points in the infinitely distant geometric future, so that the arrow of time might be said to emerge number theoretically. For the M2 option the p-adic transcendental values of ρ are infinite as real numbers, so that also spatial infinity could be said to emerge p-adically. 4. The selection of the preferred quantization axes of energy and angular momentum, unique apart from a Lorentz transformation of M2, would have a purely number theoretic meaning in both cases. One must allow a union over sub-WCWs labeled by points of SO(1,1). This suggests a deep connection between number theory, quantum theory, quantum measurement theory, and even a quantum theory of mathematical consciousness. 5. In the case of CP2 there are three real coordinate patches involved. The compactness of CP2 makes it possible to use the cosines and sines of the preferred angle variables for a given coordinate patch: ξ1 = tan(u) cos(Θ/2) exp(i(Ψ+Φ)/2), ξ2 = tan(u) sin(Θ/2) exp(i(Ψ−Φ)/2). The ranges of the variables u, Θ, Φ, Ψ are [0, π/2], [0, π], [0, 4π], [0, 2π] respectively. Note that u naturally takes only positive values in the allowed range. S2 corresponds to the values Φ = Ψ = 0 of the angle coordinates. 6. The rational values of the (hyperbolic) cosine and sine correspond to Pythagorean triangles having sides of integer length and thus satisfying m^2 = n^2 + r^2 (m^2 = n^2 − r^2). These conditions are equivalent and allow the well-known explicit solution (a short sketch of it is given after this list). One can construct a p-adic completion for the set of Pythagorean triangles by allowing p-adic integers which are infinite as real integers as solutions of the conditions m^2 = r^2 ± s^2. These angles correspond to genuinely p-adic directions having no real counterpart. Hence one obtains a p-adic continuum also in the angle degrees of freedom. Algebraic extensions of the p-adic numbers bringing in the cosines and sines of the angles π/n lead to a hierarchy of increasingly refined algebraic extensions of the generalized imbedding space. Since the different sectors of WCW directly correspond to correlates of selves, this means a direct correlation with the evolution of mathematical consciousness. Trigonometric identities make it possible to construct points which in the real context correspond to sums and differences of angles. 7. Negative rational values of the cosines and sines correspond as p-adic integers to infinite real numbers, and it seems that one must use several coordinate patches obtained as copies of the octant (x ≥ 0, y ≥ 0, z ≥ 0). An analogous picture applies in the CP2 degrees of freedom. 8. The expression of the metric tensor and spinor connection of the imbedding space in the proposed coordinates makes sense as p-adic numbers in the algebraic extension considered. The induction of the metric, spinor connection and curvature makes sense provided that the gradients of the coordinates with respect to the internal coordinates of the space-time surface belong to the extension. The most natural choice of the space-time coordinates is as a subset of the imbedding space coordinates in a given coordinate patch. If the remaining imbedding space coordinates can be chosen to be rational functions of these preferred coordinates, with coefficients in the algebraic extension of p-adic numbers considered, for the preferred extremals of Kähler action, then also the gradients satisfy this condition. This is a highly non-trivial condition on the extremals and, if it works, might fix completely the space of exact solutions of the field equations. Space-time surfaces are also conjectured to be hyper-quaternionic, and this condition might relate to the simultaneous hyper-quaternionicity and Kähler extremal property. Note also that this picture would provide a partial explanation for the decomposition of the imbedding space into sectors, dictated also by quantum measurement theory and the hierarchy of Planck constants.
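As an aside to point 6 above, the "well-known explicit solution" is Euclid's parametrization of Pythagorean triples. The following minimal Python sketch (my own illustration, not from the post) generates such triples and the corresponding rational points (cos, sin) = (n/m, r/m) on the unit circle:

```python
def euclid_triples(limit):
    """Pythagorean triples (n, r, m) with n^2 + r^2 = m^2, via Euclid's formula
    n = a^2 - b^2, r = 2ab, m = a^2 + b^2 for integers a > b > 0."""
    for a in range(2, limit):
        for b in range(1, a):
            yield a*a - b*b, 2*a*b, a*a + b*b

for n, r, m in euclid_triples(5):
    assert n*n + r*r == m*m          # sanity check of the defining condition
    print(f"({n}, {r}, {m})   cos = {n}/{m}, sin = {r}/{m}")
```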
4. p-Adic variants for the sectors of WCW One can also wonder about the most general definition of the p-adic variants of the sectors of the world of classical worlds. 1. The restriction of the surfaces in question to be expressible in terms of rational functions with coefficients which are rational numbers or belong to an algebraic extension of the rationals means that the world of classical worlds can be regarded as a discrete set, and there would be no difference between the real and p-adic worlds of classical worlds: a rather unexpected conclusion. 2. One can of course ask whether one should perform a completion also for the WCWs. In the real context this would mean a completion of the rational number valued coefficients of a rational function to arbitrary real coefficients, and perhaps also the allowance of Taylor and Laurent series as limits of rational functions. In the p-adic case the integers defining a rational could be allowed to become p-adic transcendentals, infinite as real numbers. Also in this case Laurent series could be considered. 3. In this picture there would be a close analogy between the structure of the generalized imbedding space and WCW. Different WCWs could be said to intersect in the space formed by rational functions with coefficients in an algebraic extension of the rationals, just as the real and p-adic variants of the imbedding space intersect along rational points. In the spirit of algebraic completion one might hope that the expressions for the various physical quantities, say the value of the Kähler action, the Kähler function, or at least the exponent of the Kähler function (at least for the maxima of the Kähler function), could be defined by analytic continuation of their values from these sub-WCWs to the various number fields. The matrix elements for p-adic-to-real phase transitions of zero energy states, interpreted as intentional actions, could be calculated in the intersection of the real and p-adic WCWs by interpreting everything as real. Wednesday, September 03, 2008 Dark nuclear strings as analogs of DNA-, RNA- and amino-acid sequences and baryonic realization of genetic code In an earlier posting I considered the possibility that the evolution of the genome might not be random but be controlled by the magnetic body, and that various DNA sequences might be tested in the virtual world made possible by the virtual counterparts of bio-molecules realized in terms of the homeopathic mechanism as it is understood in the TGD framework. The minimal option is that virtual DNA sequences have flux tube connections to the lipids of the cell membrane, so that their quality as hardware of topological quantum computation (tqc) can be tested, but that there is no virtual variant of the transcription and translation machinery.
One can however ask whether also virtual amino-acids could be present and whether this could provide deeper insights into the genetic code. 1. Water molecule clusters are not the only candidates for the representatives of linear molecules. An alternative candidate for the virtual variants of linear bio-molecules is provided by dark nuclei consisting of strings of scaled-up dark variants of neutral baryons bound together by color bonds having the size scale of an atom, which I have introduced in the model of cold fusion and plasma electrolysis, both taking place in a water environment. Colored flux tubes defining braidings would generalize this picture by allowing transversal color magnetic flux tube connections between these strings. 2. Baryons consist of 3 quarks just as DNA codons consist of three nucleotides. Hence an attractive idea is that codons correspond to baryons obtained as open strings with quarks connected by two color flux tubes. The minimal option is that the flux tubes are neutral. One can also argue that the minimization of Coulomb energy allows only neutral dark baryons. The question is whether the neutral dark baryons constructed as strings of 3 quarks using neutral color flux tubes could realize the 64 codons, and whether the 20 amino-acids could be identified as equivalence classes of some equivalence relation between the 64 fundamental codons in a natural manner. The following model indeed reproduces the genetic code directly from a model of dark neutral baryons as strings of 3 quarks connected by color flux tubes. 1. Dark nuclear baryons are considered as a fundamental realization of DNA codons and constructed as open strings of 3 dark quarks connected by two colored neutral flux tubes. DNA sequences would in turn correspond to sequences of dark baryons. It is assumed that the net charge of the dark baryons vanishes so that Coulomb repulsion is minimized. 2. One can classify the states of the open 3-quark string by the total charges and spins associated with the 3 quarks and with the two color bonds. The total em charge of the quarks varies in the range ZB ∈ {2, 1, 0, −1} and the total color bond charge in the range Zb ∈ {2, 1, 0, −1, −2}. Only neutral states are allowed. The total quark spin projection varies in the range JB = 3/2, 1/2, −1/2, −3/2 and the total flux tube spin projection in the range Jb = 2, 1, 0, −1, −2. If one takes, for a given total charge assumed to be vanishing, one representative from each class (JB, Jb), one obtains 4×5 = 20 states, which is the number of amino-acids. Thus the genetic code might be realized at the level of baryons by mapping the neutral states with a given spin projection to a single representative state with the same spin projection. 3. The states of the dark baryons in the quark degrees of freedom can be constructed as representations of the rotation group and the strong isospin group. The tensor product 2⊗2⊗2 is involved in both cases. Physically it is known that only the representations with isospin 3/2 and spin 3/2 (Δ resonance) and isospin 1/2 and spin 1/2 (proton and neutron) are realized. The spin-statistics problem forced the introduction of quark color (this means that one cannot construct the codons as sequences of 3 nucleons!). 4. The second nucleon spin doublet has the wrong parity. Using only 4⊕2 for the rotation group would give the degeneracies (1,2,2,1). One however requires the representations 4⊕2⊕2 rather than only 4⊕2 to get 8 states with a given charge. One should transform the wrong parity doublet to a positive parity doublet somehow.
Since the open string geometry breaks rotational symmetry to a subgroup of rotations acting along the direction of the string, the attractive possibility is to add a stringy excitation with angular momentum projection L = -1 to the wrong parity doublet so that parity comes out correctly. This would give degeneracies (1,2,3,2).

5. In flux tube degrees of freedom the situation is analogous to the construction of mesons from quarks and antiquarks, and one obtains the pion with spin 0 and the ρ meson with spin 1. States of zero charge correspond to the tensor product 2⊗2 = 3⊕1 for the rotation group. Drop the singlet and take only the analog of the neutral ρ meson. The tensor product 3⊗3 = 5⊕3⊕1 gives 8+1 states, and leaving only the spin 2 and spin 1 states gives 8 states. The degeneracies of states with a given spin projection for 5⊕3 are (1,2,2,2,1). The genetic code means projection of the states of 5⊕3 to those of 5 with the same spin projection.

6. The genetic code maps the states of (4⊕2⊕2)⊗(5⊕3) to the states of 4×5. The most natural map maps the states with a given spin to the state with the same spin so that the code is unique. This would give the degeneracies D(k) as products of numbers DB ∈ {1,2,3,2} and Db ∈ {1,2,2,2,1}. The numbers N(k) of amino acids coded by D(k) codons would be [N(1), N(2), N(3), N(4), N(6)] = [2, 7, 2, 6, 3]. The correct numbers for the vertebrate nuclear code are (N(1), N(2), N(3), N(4), N(6)) = (2, 9, 1, 5, 3). Some kind of symmetry breaking must take place and should relate to the emergence of stopping codons. If one codon in the second 3-plet becomes a stopping codon, the 3-plet becomes a doublet. If 2 codons in a 4-plet become stopping codons, it also becomes a doublet, and one obtains the correct result (2,9,1,5,3)!

The conclusion is that the genetic code can be understood as a map of stringy baryonic states induced by the projection of all states with the same spin projection to a representative state with the same spin projection. The genetic code would be realized at the level of dark nuclear physics and perhaps also at the level of ordinary nuclear physics, and the biochemical representation would be only one particular higher-level representation of the code.

For details see the chapters Homeopathy in Many-Sheeted Space-time of "Bio-Systems as Conscious Holograms" and The Notion of Wave-Genome and DNA as Topological Quantum Computer of "Genes and Memes"
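The degeneracy counting above is easy to check directly. The following small script (purely illustrative arithmetic, with the degeneracies taken from the text) just redoes the bookkeeping: quark spin projections with degeneracies (1,2,3,2), flux tube spin projections with degeneracies (1,2,2,2,1), codons counted as products, and amino-acid classes as the 4×5 pairs.

```python
from collections import Counter

# Redo the bookkeeping above: quark spin projections J_B with degeneracies
# (1,2,3,2), flux tube spin projections J_b with degeneracies (1,2,2,2,1),
# codons as products, amino-acid classes as the 4 x 5 distinct pairs.
DB = {"3/2": 1, "1/2": 2, "-1/2": 3, "-3/2": 2}
Db = {"2": 1, "1": 2, "0": 2, "-1": 2, "-2": 1}

codons_per_class = {(jB, jb): DB[jB] * Db[jb] for jB in DB for jb in Db}
print("codons:", sum(codons_per_class.values()))      # 8 * 8 = 64
print("amino-acid classes:", len(codons_per_class))   # 4 * 5 = 20

# N(k) = number of classes coded by exactly k codons
N = Counter(codons_per_class.values())
print(dict(sorted(N.items())))                         # {1: 2, 2: 7, 3: 2, 4: 6, 6: 3}

# symmetry breaking described above: one 3-plet and one 4-plet lose codons
# to stopping codons and become doublets
N[3] -= 1; N[4] -= 1; N[2] += 2
print(dict(sorted(N.items())))                         # {1: 2, 2: 9, 3: 1, 4: 5, 6: 3}
```

The second printout reproduces the vertebrate nuclear code degeneracies (2,9,1,5,3) quoted in the text.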
Sunday 29 May 2011

Mathematical Secret of Flight 1

Computed Lift and Drag of a 3d NACA0012 wing for different angles of attack by Unicorn (blue) compared with different experiments.

My talk on June 15 at Svenska Mekanikdagar 2011, describing joint work with Johan Hoffman and Johan Jansson, is now available for preview. Based on accurate solution of the incompressible Navier-Stokes equations we identify the true mechanism for the generation of large lift L at small drag D of a wing with lift to drag quotient L/D of size 10 - 50, which is not described in the literature.

We combine the Navier-Stokes equations with a slip boundary condition on the wing motivated by the experimental fact that the skin friction is small for a slightly viscous fluid such as air or water, and we exhibit the role of the slip condition in two crucial aspects:
• prevention of separation at the crest of the wing generating large lift
• 3d slip-separation at the trailing edge not destroying large lift and causing small drag.

Textbooks claim, following Prandtl, named the father of modern fluid mechanics, that both lift and drag result from a boundary layer arising from a no-slip condition. We obtain lift and drag in full accordance with experiments by solving the Navier-Stokes equations with a slip condition, which does not generate any boundary layer, and we thus present strong evidence that lift and drag do not originate from any boundary layer. In short, we show that solutions to the Navier-Stokes equations with slip are computable and correctly capture the physics of (subsonic) flight.

See also:
• To solve the Navier-Stokes equations for, say, the flow over an airplane requires a finely spaced computational grid to resolve the smallest eddies.
• Consider a transport airplane with a 50-meter-long fuselage and wings with a chord length (the distance from the leading to the trailing edge) of about five meters. If the craft is cruising at 250 meters per second at an altitude of 10,000 meters, about 10 quadrillion (10^16) grid points are required to simulate the turbulence near the surface with reasonable detail.

Kim and Moin express the necessity dictated by Prandtl to resolve thin boundary layers to correctly compute lift and drag of a wing or an entire airplane, which would require 50 years of Moore's law to increase the computing power by a factor 10^10 to reach the dictated 10^16 points. We show that this is possible already today using 10^6 points, by using slip without boundary layers to resolve.

Monstrosity of Quantum Mechanics 6: Collapse of Wave Function

Is quantum mechanics a physics beauty contest with all possibilities collapsing into one actuality upon observation? Who would you choose?

Since the multi-dimensional wave function of quantum mechanics is supposed to represent a probability distribution over all possibilities, the high dimensionality has to be drastically reduced to become an actuality of some interest. This is supposed to happen in an interaction with an observer, referred to as collapse of the wave function, where the observer somehow picks one of all the potentialities and makes it into an actuality, as when Miss America somehow is chosen among many candidates by some educated physics observers.

Is then quantum mechanics a beauty contest? Well, ask your favorite physicist about the nature of the collapse of the wave function. Is it real? What is collapsing? Physical reality, or our knowledge about reality? Or is it quantum mechanics itself which collapses upon critical observation?
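As a quick sanity check of the "50 years of Moore's law" figure quoted from Kim and Moin in the flight post above, here is a back-of-the-envelope sketch; the 18-24 month doubling times are the usual rule-of-thumb assumptions, nothing more.

```python
import math

# How long to gain a factor 10^10 in computing power if capacity doubles
# roughly every 18-24 months (the usual Moore's law rule of thumb)?
doublings = math.log2(1e10)                      # ~33 doublings
for months in (18, 24):
    print(f"{months} months per doubling -> ~{doublings * months / 12:.0f} years")
# ~50 years at 18 months per doubling, ~66 years at 24 months
```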
Saturday 28 May 2011

Monstrosity of Quantum Mechanics 5: Passive Observation Impossible

Is passive observation really impossible in the world of quantum mechanics?

David Albert, together with Barry Loewer, inventor of a version of the Many-Worlds Interpretation referred to as Many-Minds (different from the one I suggest), tells us that the physical process of making an observation in the quantum world necessarily interferes with what is being observed. In other words, the ideal of fully passive observation of classical mechanics cannot be upheld in quantum mechanics. The observer will always interfere more or less with what is being observed. Albert tells us that this is the big difference between classical and quantum mechanics.

But is this true? Is fully passive observation impossible in quantum mechanics? Maybe, or maybe not, depending on what is meant by an observation. A human being can make observations in different forms:
1. Inspection of an analog physical apparatus capable of measuring some phenomenon.
2. Inspection of a digital simulation of the phenomenon.

Here 2. represents a digital simulation based on solving the Schrödinger equation describing the phenomenon, e g the ground state of an atom, and observing its energy, while 1. would be to directly observe the emission spectrum. The nice thing about 2. is that it is a completely passive observation, in the sense that the computational process is independent of the observer making the final observation of the energy as a number coming out of the computation.

So maybe passive observation is possible in quantum mechanics. Maybe quantum mechanics is not so different from classical mechanics. Not so mysterious?

Friday 27 May 2011

Monstrosity of Quantum Mechanics 4: Quantum Computers

The belief of the modern physicist that the linear multi-dimensional Schrödinger equation describes the quantum world of atoms and molecules has led to the idea of the quantum computer:
• device for computation that makes direct use of quantum mechanical phenomena, such as superposition and entanglement, to perform operations on data.
• Experiments have been carried out in which quantum computational operations were executed on a very small number of qubits (quantum bits).

I have noticed in previous posts that the linear multi-dimensional Schrödinger equation is a monster, which cannot be solved, not even on any thinkable supercomputer with any thinkable known microprocessor technique. The dimensionality is simply overwhelming. We have noticed that the impossibility of solving the multi-dimensional Schrödinger equation results from the fact that the equation describes all possibilities rather than specific actualities, which is overwhelming for microprocessors limited to performing computations on specific data.

The Schrödinger equation is thus a monster computationally, and to handle such a beast a monster computer is needed, a computer which computes all possibilities rather than specific actualities, which computes on all data rather than on specific data: in other words a quantum computer is needed.

Are there any quantum computers? No, only with a few quantum bits. Is it possible to construct a quantum computer? Nobody knows. Few seem to believe one can. Does the multi-dimensional Schrödinger equation give a realistic description of the atomic world? Nobody knows, because solutions cannot be computed and compared to experimental observation.
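For a single electron the equation can of course be solved numerically, and this gives a concrete example of the "passive observation by digital simulation" of option 2 above: a minimal sketch (grid size and cutoff are illustrative choices) that solves the radial Schrödinger equation for the hydrogen ground state and reads off its energy.

```python
import numpy as np

# Radial Schrödinger equation for hydrogen (atomic units, l = 0), solved by
# simple finite differences for u(r) = r*R(r):  -1/2 u'' - u/r = E u.
# Grid parameters are illustrative choices, not tuned values.
n, r_max = 2000, 40.0
r = np.linspace(r_max / n, r_max, n)
h = r[1] - r[0]

main = 1.0 / h**2 - 1.0 / r                  # diagonal: kinetic + Coulomb
off = -0.5 / h**2 * np.ones(n - 1)           # off-diagonal: kinetic coupling
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E0 = np.linalg.eigvalsh(H)[0]
print(f"ground state energy ~ {E0:.4f} Hartree ~ {27.211 * E0:.2f} eV")
# ~ -0.5 Hartree = -13.6 eV, the measured hydrogen ground state energy
```

The number that comes out of the computation is then observed completely passively, which is the point of option 2.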
Can you solve a monster equation on a monster computer, that is a device which simulates a real analog monster by being a real digital monster? What if the multi-dimensional Schrödinger equation is just an invented fictional monster, which will disappear as soon as you stop talking about it?

Compare with the post today on The Reference Frame singing praise to the Copenhagen Interpretation of the multidimensional Schrödinger equation, as if it has a meaning. Read yourself and ask if you understand anything.

Tuesday 24 May 2011

Monstrosity of Quantum Mechanics 3: Many-Worlds

The monstrosity of quantum mechanics is expressed in full bloom in Everett's Many-Worlds interpretation, reflecting that solutions of the linear multi-dimensional Schrödinger equation can freely be superimposed. The Schrödinger cat in its closed box thus can be in a state of superposition of both alive and dead, and only upon opening the box for observation does the cat have to collapse into either alive or dead, as if there were two possible parallel universa prior to collapse into one actual universe.

The solution of the linear multi-dimensional Schrödinger equation thus is interpreted as a universal wave-function supposedly representing all possible universa, out of which a specific actual universe is singled out in one way or the other. How to react to this breath-taking ocean of possibilities? In this case there seem to be two possibilities:
1. Accept the linear multi-dimensional Schrödinger equation as given by God.
2. Replace the linear multi-dimensional Schrödinger equation as a basic model of quantum mechanics with something more reasonable.

I would vote for 2. and I explore one possibility in Many-Minds Quantum Mechanics. After all, it was Schrödinger and not God who wrote down the equation. It was Schrödinger who understood that his equation had serious flaws and should be replaced by a version describing actualities instead of possibilities.

What do you say? 1 or 2? One actuality or all possibilities? Would you prefer all possible lives before one actual life? Compare with the title of the biography: A Life of Erwin Schrödinger. Nobody would be able to write a biography with the title All Possible Lives of Erwin Schrödinger, and even if somebody could, nobody would be interested in reading it.

Monday 23 May 2011

Monstrosity of Quantum Mechanics 2

The simplicity (linearity) of the Schrödinger equation is seductive and has misled many minds.

Quantum mechanics as a description of the microscopic world of atoms and molecules is based on Schrödinger's wave equation, which as a mathematical object is (see above)
• scalar
• linear
• multidimensional in 3N coordinates for N electrons/nuclei
with solutions called wave-functions commonly denoted as Psi:
• Psi(x1, x2, ..., xN, t)
with xj representing the three position coordinates of particle j with j = 1,...,N, and t denoting time. The wave function Psi thus depends on 3N independent real variables plus time.

The simplicity of the Schrödinger wave equation (scalar and linear) as a description of a complex reality is thus balanced by an extreme richness of the wave function depending on 3N + 1 independent variables.
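To get a feeling for what that richness means computationally, here is a minimal estimate of the memory needed just to store Psi on a grid; the 10 points per coordinate used below are of course an absurdly coarse, purely illustrative assumption.

```python
# Memory needed to store Psi(x1,...,xN) on a grid with M points per coordinate:
# M^(3N) complex amplitudes at 16 bytes each.  M = 10 is already very coarse.
M = 10
for N in (1, 2, 3, 10, 100):
    amplitudes = M ** (3 * N)
    print(f"N = {N:3d}: 10^{3 * N} amplitudes, ~ {16 * amplitudes:.1e} bytes")
# N = 100 gives ~10^301 bytes, which is Kohn's point below: the wave function
# "cannot be computed, because of the many dimensions".
```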
The richness of the wave-function thus makes it impossible to give it a physical meaning as representing a configuration or distribution of electrons and nuclei, which threatened to kill quantum mechanics at birth, but it was rescued by Max Born declaring that
• | Psi(x1,...,xN,t) |^2 represents the probability of the configuration given by the coordinates (x1,...,xN,t),
and by Niels Bohr declaring that the wave function, as a probability distribution, upon observation could collapse into a definite physical state, as when opening the box containing the Schrödinger cat.

Born and Bohr thus developed the Copenhagen Interpretation (of quantum mechanics), which is today the officially accepted truth, although contested by alternatives such as hidden variables and many-worlds interpretations, without any winner. Schrödinger himself left quantum mechanics as soon as the Copenhagen Interpretation captured the minds of most physicists.

The richness of the wave function is in fact a monstrosity already for small systems with N = 100 say, not to speak of real systems of 10^23 particles in a mole of gas, as pointed out by Walter Kohn, Nobel Prize in Chemistry in 1998:
• The wave function does not exist for N larger than 100.
• Why? Because it cannot be computed, because of the many dimensions.

Kohn got the Nobel Prize for computing electron densities instead of probabilities, as solutions of a non-linear version of the Schrödinger equation in 3 space dimensions, referred to as density functional theory.

If now the wave-function as solution to the Schrödinger equation does not exist, there must be something fishy about the Schrödinger equation. What? We saw that the equation is scalar and linear and thus has a simple structure, which is not problematic in itself, but if it necessitates a monstrous richness in dimensions, it seems that one should question the very formulation of the Schrödinger equation as a scalar linear multidimensional equation.

From where did Schrödinger get his equation? Did he derive it from basic principles? Not really. It is more of an ad hoc invention expressing particle interaction by electrostatic Coulomb potentials combined with a new mysterious form of kinetic energy. How can we know that the equation is a good model of physics if it cannot be solved? How can we check that its solutions give correct predictions if they cannot be computed and thus determined? Nevertheless it is a mantra of modern physics that the Schrödinger equation is a good model, but it is a mantra without physical meaning about an equation which cannot be solved. It is like claiming that a certain truth is hidden in a riddle which cannot be solved.

Thus, new versions of the Schrödinger equation are needed. I explore one such line of thought in Many-Minds Quantum Mechanics in the spirit of the Hartree method, as a non-linear coupled system of one-electron/nucleus Schrödinger equations. The simplicity of linearity (and superposition) of the multi-dimensional Schrödinger equation is here replaced by a non-linear complexity, but the system solutions only depend on three space dimensions, which makes a direct physical interpretation possible, without probabilities and wave function collapse. This is a realist approach as compared to the non-realist Copenhagen Interpretation.

Compare with Lars-Göran Johansson: Interpreting Quantum Mechanics: A Realist View in Schrödinger's Vein, suggesting a form of realist wave-particle duality as continuous waves for propagation and discontinuous particles for exchange of energy.
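To make the contrast concrete, here is a toy sketch of the Hartree-type idea mentioned above: two electrons in one space dimension with soft-core Coulomb interactions, each described by its own 1d orbital that feels the mean field of the other, iterated to self-consistency. The potentials and parameters are illustrative assumptions, not a real atom, and the sketch is not the specific Many-Minds model, only the general Hartree structure.

```python
import numpy as np

# Toy 1d Hartree calculation: two electrons in the same orbital around a
# "nucleus" of charge +2, with soft-core Coulomb interactions.  Each electron
# moves in the mean field of the other, so the problem is non-linear but only
# one-dimensional per electron.
n, L = 400, 10.0
x = np.linspace(-L, L, n)
dx = x[1] - x[0]
soft = lambda a, b: 1.0 / np.sqrt((a - b) ** 2 + 1.0)

v_ext = -2.0 * soft(x, 0.0)                          # nucleus at the origin
T = (np.diag(np.full(n, 1.0)) - 0.5 * np.diag(np.ones(n - 1), 1)
     - 0.5 * np.diag(np.ones(n - 1), -1)) / dx ** 2  # -1/2 d^2/dx^2

density = np.zeros(n)                                # start with no mean field
for it in range(100):
    v_hartree = (soft(x[:, None], x[None, :]) @ density) * dx
    eps, phi = np.linalg.eigh(T + np.diag(v_ext + v_hartree))
    orbital = phi[:, 0] / np.sqrt(dx)                # normalized lowest orbital
    new_density = orbital ** 2                       # the field of one electron
    if np.max(np.abs(new_density - density)) < 1e-8:
        break
    density = 0.5 * (density + new_density)          # simple mixing

print(f"self-consistent after {it} iterations, orbital energy {eps[0]:.4f}")
```

The state of the pair is carried by two functions of one space variable each, instead of one function of all variables at once, which is what makes the non-linear formulation computable.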
Sunday 22 May 2011

Charles Mackay: Madness of Crowds and CO2 Alarmism

In Extraordinary Popular Delusions and the Madness of Crowds published in 1841, Charles Mackay debunks witch-hunts, alchemy and economic bubbles. Today Mackay would have been writing about the crowd madness of CO2 alarmism, with the witches being the polluters of CO2, the alchemists the CO2 alarmists and the bubble the green economy.

Mackay said many clever things obviously anticipating CO2 climate alarmism, while giving hope to skeptics of CO2 alarmism:
• Money, again, has often been a cause of the delusion of the multitudes. Sober nations have all at once become desperate gamblers, and risked almost their existence upon the turn of a piece of paper.
• Aid the dawning, tongue and pen: Aid it, hopes of honest men!
• He who has mingled in the fray of duty that the brave endure, must have made foes. If you have none, small is the work that you have done.
• Truth... and if mine eyes Can bear its blaze, and trace its symmetries, Measure its distance, and its advent wait, I am no prophet - I but calculate.

Friday 20 May 2011

Monstrosity of Quantum Mechanics

Schrödinger trying to slay the many-headed monster of the wave function (assisted by Einstein), however without success.

Basically, classical physics is Newtonian mechanics and modern physics is quantum mechanics. Quantum mechanics is supposed to be described by Schrödinger's equation, worshipped by modern physicists. The equation was formulated by Erwin Schrödinger in 1925, seeking an equation with wave-like solutions called wave functions describing the dynamics of atoms and molecules resulting from an interplay of positive nuclei and negative electrons under attractive and repulsive electric Coulomb forces. Nothing strange in principle, but what Schrödinger had created turned out to be nothing but a Monster.

Monster? Why? Well, the wave function for the simplest case of the Hydrogen atom with one electron depends on 3 space coordinates and time, but the wave function for an atom with N electrons depends on 3N space coordinates (and time), which makes it into a Many-Headed Monster beyond direct physical interpretation:
• Instead of describing an actuality in 3 space dimensions, the wave function describes all possibilities
• Instead of describing a specific actual sequence of 1000 coin flips, the wave function describes the 2^1000, much more than 10^100 = googol, possible sequences of coin flips.
• Instead of describing the life of one specific actual human being, it describes the lives of all possible human beings.

As soon as Schrödinger understood that he had created a scientific monster, he tried to kill it but failed, and then he withdrew from physics, while the Monster captured the minds of all the modern physicists (except Einstein's) who quickly formed a whole army under the leadership of Niels Bohr and his Copenhagen Interpretation of the wave function as a probability distribution of all possibilities.

To get from possibility to actuality, the idea of collapse of the wave-function was invented, a Monstrous Idea to handle a Monster. Before collapse the Schrödinger Cat in the box would be in a state of superposition of alive and dead with all possibilities still present, and only upon opening of the box and inspection would the Cat collapse into an actuality as alive or dead. This Monstrous Idea has led modern physics into an endless desert of Multiverses and Many-Worlds of all possibilities.
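The coin-flip comparison above is a one-line check:

```python
from math import log10

# 2^1000 possible sequences of 1000 coin flips, compared with a googol (10^100):
print(f"2^1000 ~ 10^{1000 * log10(2):.0f}")   # ~ 10^301, far more than 10^100
```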
A recent contribution to this monstrosity is The Multiverse Interpretation of Quantum Mechanics by Raphael Bousso and Leonard Susskind:
• We argue that the many-worlds of quantum mechanics and the many worlds of the multiverse are the same thing, and that the multiverse is necessary to give exact operational meaning to probabilistic predictions from quantum mechanics. Decoherence - the modern version of wave-function collapse - is subjective in that it depends on the choice of a set of unmonitored degrees of freedom, the "environment".

Read and try to understand where physics is today...

For a new approach without monsters, see Many-Minds Quantum Mechanics based on a different non-linear version of the Schrödinger equation as a coupled system of one-particle three-dimensional equations. The thesis of Hugh Everett III behind the many-worlds interpretation exhibits the difficulties or rather monstrosities of the usual scalar linear multidimensional version of Schrödinger's equation. We will return to Everett's thesis in search of a connection between many-minds and many-worlds physics. Since we all have different conceptions of the world, maybe we in fact live in a many-worlds universe, one for each mind. Of course, the following questions then come up: What is a mind and how many are there?

Another monstrosity perturbing the minds of many modern physicists is the Greenhouse Gas Effect, but there are some physicists fighting this monster, such as William Happer: The Truth about Greenhouse Gases, referring to Charles Mackay's Extraordinary Popular Delusions and the Madness of Crowds first published in 1841. The development of modern physics into monstrosity is described in Dr Faustus of Modern Physics.

Free Will and Finite Precision Computation 5

• The free will that humans enjoy is similar to that exercised by animals as simple as flies.
• Animals always have a range of options available to them...perceived as conscious decisions.
• The idea tackles one of history's great philosophical debates.
• What has been long established is that "deterministic behaviour" - the idea that an animal poked in just such a way will react with the same response every time - is not a complete description of behaviour.
• Even the simple animals are not the predictable automatons that they are often portrayed to be.
• However, the absence of determinism does not suggest completely random behaviour either.
• Experiments have shown that although animal behaviour can be unpredictable, responses do seem to come from a fixed list of options.
• Free will is not that lofty metaphysical thing that it was until the 1970s or so.
• It is a biological property, a trait; the brain possesses the freedom to generate behaviours and options on its own.
• The exact mechanism by which brains - from those of flies up to humans - do that generation remains a matter for experiments to more fully prove.
• There is no way the conscious mind, the refuge of the soul, could influence the brain without leaving tell-tale signs; physics does not permit such ghostly interactions.
• Tethered fruit flies proved their choices to be neither deterministic nor random.
• The strong, Cartesian version of free will - the belief that if you were placed in exactly the same circumstances again, you could have acted otherwise - is difficult to reconcile with natural laws.
In short, free will seems to be expressed through a combination of goal-oriented determinism (as concerns big things) and indeterminism (as concerns little things), with a clear connection to finite precision computation. It seems that the discussion has passed from sterile metaphysics into a more constructive analysis of finite precision computing minds...

Thursday 19 May 2011

Free Will and Finite Precision Computation 4

Daniel Dennett advocates a compatibilism of determinism and free will, expressed as a capacity of human beings developed by evolution to avoid (unpleasant) things by voluntary action: Seeing a brick being thrown at us by some unfriendly agent, we typically choose to duck. Dennett argues that we do that by free will, since it would also be possible to choose to not duck and take the hit to get a case to bring to court.

Dennett argues that either the future is fully determined by the past (Laplace demon) or the future is fully undetermined in the quantum sense that the next position of an electron is not determined but subject to throwing a dice. In either case we cannot really influence what is going to happen, and thus we cannot exercise any free will: whatever happens, happens. Yet Dennett claims that we have a free will in the sense that we can decide to avoid certain things, but not all: We will not have time to duck if the brick is replaced by a bullet.

I get the impression that Dennett's resolution of the apparent contradiction between a free will and full determinism/indeterminism is a scholastic resolution, in the sense that something essential is being missed and nothing really new is brought in to solve the eternal free will problem.

Would finite precision computation be helpful? The idea here is that little things may be left to be decided by the dice while major things are predetermined by a finite precision Laplacian demon. More precisely, we know that
• there are major things that we cannot do even if we would like to (limited free will): e g fly like a bird.
• there are major things we can do which we have decided to do (according to a predetermined master plan): e g go to college.
• there are little things which we can decide by free will, which we could leave to the dice if we cannot decide: e g meat or fish for dinner.

This opens to a finite precision resolution of the free will problem:
• big things determined by a finite precision Laplace demon/master plan of ours
• little things decided by a dice.

In extreme cases a small thing could become big and would then be described as a stroke of luck or accident: to win on the lottery or get hit by a falling brick.

A Dark Side of CO2 Alarmism and The Royal Swedish Academy

The Royal Swedish Academy of Sciences hiding behind a statue of Anders Retzius, father of racial biology with his cephalic index.

The CO2 climate alarmism behind the 3rd Nobel Laureate Symposium on Global Sustainability, organized by the Royal Swedish Academy of Sciences, goes back to the Swedish physicist/chemist Svante Arrhenius. The Stockholm Memorandum signed by an invited group of Nobel Laureates states that
• Humans are now propelling the planet into a new geological epoch, the Anthropocene
and makes the following Call:
• Fundamental transformation in all spheres and at all scales to stop and reverse global environmental change.
• Greatly increase access to reproductive health services...
• reduce birth rates.

What is "reproductive health services"? How to "reduce birth rates"? Is there a connection to Arrhenius, or is it just a fantasy?
Why was the Symposium held behind closed doors? I have already expressed my protest against the uncritical support of CO2 alarmism by the Royal Academy in a symbolic resignation.

Basic Science: Climate Sensitivity Less Than 0.3 C

For the convenience of the reader I here collect the links to a couple of basic arguments showing that the effect of doubled atmospheric CO2 at most could be a global warming of harmless 0.3 C, that is, that the climate sensitivity is smaller than 0.3 C. The idea is to combine observation with simple mathematical models, where observation is used to determine the coefficients of the model, thus allowing prediction. Stefan-Boltzmann's radiation law for an ideal blackbody is not used, since it does not describe the complex Earth-atmosphere system. I thus only use simple models with coefficients determined by observation, which is the basic scientific method leading to the basic mathematical models of physics, such as the heat equation, potential flow equation and radiative transfer equation.

I assume that doubled CO2 could correspond to a change of the radiative properties of the atmosphere of 1%, or a "radiative forcing" of 3 W/m2 = 1% of a total insolation of about 300 W/m2. I assume that the "atmospheric effect" is 33 C, corresponding to raising the temperature of an Earth without atmosphere (= observed mean temperature of the Moon) of -18 C to the observed temperature of the Earth with atmosphere of 15 C.

These are three different arguments using different data and different simple models, all giving the same result of a climate sensitivity smaller than 0.3 C, where 0.3 C is to be viewed as an upper bound, with the real value probably a factor 2 - 3 smaller. IPCC claims a "best estimate" which is 10 times bigger = 3 C, which is obtained by confusing definition with physical fact and free invention of positive feed-backs. IPCC has invented a factor 10 for which there is no scientific basis. In economics a factor 10 would be swindle, and it is the same in science, or even worse.

Wednesday 18 May 2011

What is a Princess Allowed to Say? About CO2 and Great Transformation?

Crown Princess Victoria of the Kingdom of Sweden stated in her presentation at the 3rd Nobel Laureate Symposium on Global Sustainability organized by the Royal Swedish Academy of Sciences:
• Burdens must be shared by everyone (including masses of poor people).
• Wind turbines, solar connectors, panels and geothermal energy, why is it that countries are using so little of renewable energy sources despite having the knowledge and technique?
• We can and must change our life styles and the manner in which we use energy.
• What are we waiting for? The work has to start here and now.
• The world succeeded to come together and decide upon removal of freons.
• To succeed we need to reconnect humanity with the biosphere.
• This is no small task. I see no better persons though than Nobel Laureates to carry this critical message to the world:
• The need for a Great Transformation.
• Our generation has the knowledge and ability to create a sustainable world for future generations.

The presentation poses the following questions:
• Does the Princess make political statements?
• Is the Princess allowed to make political statements?
• Is the Princess allowed to advocate specific techniques for generating energy?
• Is the Royal Swedish Academy influenced by Royals?

Any answers? See also my Newsmill article about a biased jury.
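Going back to the numbers in the "Basic Science" post above, the 0.3 C estimate is a simple proportionality argument and is easy to restate; the sketch below only reuses the post's own assumptions (1% radiative change, 33 C atmospheric effect) and does not validate them.

```python
# The post's back-of-the-envelope estimate: a 1% change of atmospheric
# radiative properties is taken to produce a 1% change of the 33 C
# "atmospheric effect".  All inputs are the post's assumptions.
insolation = 300.0         # W/m2, assumed gross insolation
forcing = 3.0              # W/m2, assumed forcing from doubled CO2 (~1%)
atmospheric_effect = 33.0  # C, 15 C observed minus -18 C without atmosphere

fraction = forcing / insolation
print(f"forcing fraction ~ {fraction:.1%}, sensitivity ~ {fraction * atmospheric_effect:.2f} C")
# -> about 0.3 C, the upper bound claimed in the post
```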
The Princess speaks the same words as Hans Joachim Schellnhuber, main organizer and ideologue of the symposium, according to the New York Times known for his "aggressive stance on climate policy":
1. Earth's population could be devastated by buildup of greenhouse gases.

Does the Princess understand what she is saying?

PS1 The verdict of the Jury of Nobel Laureates of the Symposium is expressed in the Stockholm Memorandum:
• Greatly increase access to reproductive health services... reduce birth rates.
• Introduce strict resource efficiency standards
• Launch a major research initiative on the earth system.
• Scale up our education efforts to increase scientific literacy.

This is nothing but a Brave New World... but is there place for a Princess in this Brave New World? A resource efficient renewable Princess?

Tuesday 17 May 2011

Free Will and Finite Precision Computation 3

This is a continuation of Free Will 2: So can we make it rain tomorrow by leaving the car window open? Can the flap of a butterfly in Brazil set up a tornado in Texas? How can we tell?

Well, we have already answered this question: Take away the butterfly and observe tornados anyway, close the window and observe rain anyway. Or let the butterfly flap and observe no tornados, open the window and observe no rain.

Evidently we are talking about a big effect (tornado, rain) from a small cause (butterfly, car window), which is only possible if the system under consideration is unstable. Why? Because the definition of an unstable system is that a small cause can have a big effect. If the effect of any small cause is small, then the system is stable. Most of the systems we can observe are (more or less) stable, because unstable systems tend to break down or explode into non-existence.

Is the weather unstable? Well, we say that the weather is unstable when changes are unpredictable, and we know that this is often the case. How unstable can the weather then be? Can it be so unstable that the flap of a butterfly can cause a tornado? Probably not. We expect that sufficiently small perturbations cannot change the major features of the weather and cause a tornado. This means that it is irrelevant whether a butterfly flaps or not, or if we leave the car windows open or not, as concerns tornados and rain.

If we accept that small causes do not change major features, that is, that we are dealing with a (more or less) stable system as a typical system which we may be confronted with, then we could say that we could leave certain little things to be determined by chance, by throwing a dice:
• It would not change anything essential.
• It would save us time for essentials by avoiding getting drowned in pedantry.
• In fact, it would be necessary to not get bogged down by details.
• In other words, we would have to act with finite precision in order to not get stuck on the spot at a specific point in time.
• Time is advancing and so we have to advance as well, and thus we have to take decisions with finite precision only, because we have no time to do everything with infinite precision.

We are now approaching the question of free will. Can we do anything we could think of doing? No, our abilities are limited, but within these limits we would say that we have some form of free will. We could decide what to study at the university, with whom to engage, how to dress, what to eat, what to say, but all these decisions could fit into some form of master plan for our life, which we probably should search for if we don't have any.
Our free will would not be entirely free but subordinate to a master plan, which we may have chosen by free will or inherited from our parents, spouse or friends or society. So maybe our free will as concerns big things is not that free, as if the main pattern of our life largely is predetermined.

We could still argue that we have a free will to decide little things, what movie to see, what to have for dinner etc, but we could also say that we will only spend limited time on these issues to find the "optimal solution". We could even use a dice to decide if we cannot easily make up our mind or come to some agreement with somebody. But you would not, like Luke Rhinehart, decide big things, such as getting divorced or not, by throwing the dice, because that would quickly ruin your life.

In short, you would act with finite precision and feel that you have some form of free will in particular for little things, possibly exercised using dice, while you may feel that the main path of your life (or at least other people's lives) is more or less predetermined. This corresponds to something between full determinism (no dice) and full indeterminism (all dice), as a form of finite precision determinism (dice only for little things). In other words, a free will which is not completely free, but not completely unfree either:
• a finite precision free will.

PS Suppose Tom wants to show Harry that he has a free will. Consider the following conversation:

Tom: Look, I can decide to lift my left arm or my right arm according to my own free will.
Harry: How do you decide whether to lift the left or the right? Do you have some predetermined preference?
Tom: Of course not, then it would not be free will.
Harry: OK, but if you are completely neutral, how are you going to decide?
Tom: Let me think...should I lift the right arm...or should I lift the left...what could be a good reason to lift the right arm...instead of the left...well, I cannot really decide...I need more time... but even so I don't know how to choose while staying fully neutral...
Harry: Can I offer some help? What about flipping a coin?
Tom: Flipping a coin? Yes, that must be the only possibility which is completely neutral, without any predetermined prejudice for right and left. That's what I will do to not get held up by this silly test...

What Does a Nobel Laureate Understand about CO2? 1

Murray Gell-Mann, Nobel Prize in Physics in 1969 for the Standard Model of elementary particles, is one of the Nobel Laureates to decide about the future of humanity at the Nobel Laureate Symposium on Global Sustainability at The Royal Swedish Academy of Sciences, May 16-19,
• Evidence that the Earth is warming by human emissions of greenhouse gases is unequivocal
• fossil fuel raising CO2 above the limits of the Holocene
• exit door from the Holocene had been opened
• Great Acceleration: human population tripled, consumption in the global economy grew many times faster
• Great Acceleration has not been an environmentally benign phenomenon
• eroding the Earth's resilience, ocean acidification.

The agenda for the meeting is presented by The German Advisory Council on Global Change, chaired by Prof. Dr.
Hans Joachim Schellnhuber, as a Summary for Policy Makers: World in Transition, A Social Contract for Sustainability:
• carbon-based model unsustainable
• low-carbon society is a Great Transformation
• global energy system decarbonised
• greenhouse gas emissions absolute minimum
• low-carbon societies
• quantum leap for civilisation
• universal consensus
• Global Enlightenment
• new social contract
• science subservient role
• sustainability is a question of imagination.

The purpose of the meeting is to get Nobel Laureates of Physics and Chemistry to confirm on scientific grounds that CO2 emission is the big threat to human civilization. The Nobel Laureates will form the jury of a Tribunal facing Humanity with charges of destroying the Earth (by CO2 emission). We ask the questions:
• Do Nobel Laureates understand the role of CO2 for global climate?
• Do Nobel Laureates say that society will have to be decarbonized by 2050?
and will report on answers...stay tuned...

PS Johan Rockström (organizer) and Andreas Carlgren (minister) write in DN Debate to prepare the Swedish opinion:
• To avoid catastrophic climate change, many scientists believe that CO2 emission from fossil fuels must stop by 2050.
• This requires resources to renewable energy of unprecedented size.

Note the term many scientists, not as before all scientists... This seems to be an acknowledgement that there are also many scientists who consider CO2 emission to not be harmful at all. What if the jury was changed to the latter group of scientists? What would then the charges be? Who would then sit on the accused bench?

Monday 16 May 2011

Free Will and Finite Precision Computation 2

Continuation of Free Will 1:

Self-publishing on Google Books?

I have published the following new books of mine on Google Books as full view with free PDF download: The idea is to compare direct publishing on Google Books with self-publishing on e.g. Amazon CreateSpace, or with conventional publishing through an established publisher as ebook or printed book. It appears that Amazon CreateSpace requires conversion of pdf to a different ebook format, which is not automatic and tricky if math formulas are involved. Maybe somebody has some good advice to give.

Sunday 15 May 2011

Free Will and Finite Precision Computation 1

In recent books I have shown that the concept of finite precision computation, in reality in analog form and in simulation of reality in digital form, can be used to give rational deterministic (mathematical) explanations of the following phenomena:
• direction of time
• 2nd law of thermodynamics
• blackbody radiation,
which have evaded explanations using both classical deterministic exact mathematics and classical statistical physics.

Finite precision computation opens classical exact determinism to some imprecision or indeterminism, without going all the way to the full indeterminism of statistical physics, and thus avoids the impossibility of both extreme determinism and extreme indeterminism. In finite precision computation, little things may be decided by throwing a dice, corresponding to chopping a decimal expansion into a finite number of digits, while big things still may be fully deterministic. The concept can be described as one of the following options for using dice throws to decide what to do:
• Full Determinism: Calculate everything exactly. Never throw a dice.
• Full Indeterminism: Calculate nothing. Always throw a dice.
• Finite Precision: Calculate the big. Throw a dice to decide the small.
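The last option is simple enough to write down as a rule. Here is a minimal sketch of such a finite precision decision rule; the utility numbers and the precision threshold are illustrative assumptions, not a claim about how minds actually work: compute the choice when the calculated difference exceeds the precision, throw the dice when it does not.

```python
import random

# Finite precision decision rule: "calculate the big, throw a dice to decide
# the small".  Options whose computed utilities differ by more than the
# precision are decided deterministically; smaller differences go to the dice.
def decide(options, utility, precision=0.1):
    ranked = sorted(options, key=utility, reverse=True)
    best, runner_up = ranked[0], ranked[1]
    if utility(best) - utility(runner_up) > precision:
        return best                               # a big difference: calculated
    return random.choice([best, runner_up])       # a little difference: dice

# meat or fish is a little thing, going to college is a big thing
print(decide(["meat", "fish"], {"meat": 0.52, "fish": 0.50}.get))
print(decide(["college", "no college"], {"college": 0.9, "no college": 0.2}.get))
```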
Full Indeterminism is represented by the cult novel The Dice Man by George Cockcroft about the psychiatrist Luke Rhinehart, who decides to let the dice decide everything, with catastrophic results from using it to decide big things like getting divorced or not. Full Determinism is represented by the fatalism of Richard Taylor exhibited by the cult author David Foster Wallace, who took his own life on Sept 12 2008, maybe after asking the dice to decide to pull the trigger or not. Wallace wrote a college thesis on Taylor's fatalism with the title Fate, Time, and Language: An Essay on Free Will, published in 2010 by Columbia University Press.

Can Finite Precision Computation be used to shed some light on the eternal philosophical problem of Free Will? I will address this question in a sequence of posts, while reading a bit of Wallace. I will start with the following question:
• Is it helpful to let a dice decide little things?

Wednesday 11 May 2011

Has the Professor Abolished Himself?

In the new Higher Education Act it is the head of department who "leads the activities", i.e. decides what is to be done and said, while the professor/teacher "takes care of" education and research, i.e. does the job, on orders from the management. I take this up in a post on Newsmill, starting from my experiences of censorship and gagging at KTH, documented under KTH-gate. Of course the article was rejected by SvD, and DN was naturally out of the question. Gag on!

Ultimately this is about academic freedom of thought, about who is to decide what counts as current scientific truth, the professor/scientist or the administrator/politician? My professor colleagues around the country are remarkably indifferent to the question, as if it did not concern them:
• Can it really be that the professor has abolished himself without anyone noticing anything, let alone saying anything?
• Without the professors' union SULF raising any objection?
• Perhaps in the new university there is no need for professors whose task is to think independently?
• Has the professor, without protesting, allowed himself to be gagged?

Perhaps the debate on Newsmill will give answers.

PS1 As for gags, censorship and the silencing of critical voices, this is of course effective as long as it works 100%, but it requires that all channels are closed and that the surveillance is total. That is hard to achieve in today's new information world: the climate debate has now been taken over by the free blogosphere, and politically correct thinking has lost its hegemony over scientific truth. Something for KTH, DN and SvD to consider, perhaps. There are also blinkers that can be put on.

PS2 Funnily enough, KTH is arranging a Symposium on Academic Leadership on May 13 in honour of Ingrid Melinder (who is not a professor), where of course the administrators who imposed the gag, with the kind assistance of Melinder, are speaking: Peter Gudmundson and Folke Snickars. Perhaps an occasion to bring up KTH-gate? Probably not: only administrators get to speak about academic leadership. Professors must keep quiet in the new university (except Mathias Uhlen with his 800 million/year), while the Scout Association gets to develop its leadership philosophy for the university.

Monday 9 May 2011

SULF on Censorship and KTH-gate

After having been subjected to censorship with a direct personal attack by KTH, backed by Rector Peter Gudmundson, which I have documented in a series of posts under KTH-gate, I turned to my union SULF to see whether I could get some support.
I presented my case at a meeting with the union lawyer Carl Falck, who then took part in a meeting with the Rector, supported by his adjutant Anders Lundgren, at which it became very clear that the Rector had not the slightest misgiving about seriously damaging my professional activity through his actions. Anders Lundgren was careful to point out that KTH never (never) comments on statements in the press attributed to the Rector, even if they are grossly incorrect and hit hard at the person subjected to the incorrect, disparaging statements. Never! KTH has principles, and KTH follows its principles, even when it is tough. Peter Gudmundson is an old hockey player and is used to hard pucks.

Carl Falck informs me shortly thereafter, after having met Anders Lundgren without me present:
• We have discussed the question on several occasions within the union, but have now come to the conclusion that there is at present no room for us to take this question further. The answer I can give you, and it is our joint answer, is that we will not act any further in this matter.

The union chair Anna Götlind confirms with:
• As SULF's union lawyer Carl Falck has previously informed you, SULF will not assist you further in your case.
• SULF is always positive to debate about academic freedom, but as a union we cannot debate individual cases.

Well, what is one to say about this? One can hardly say that SULF has given me much support. From my point of view SULF has rather made my situation more difficult by seemingly allying itself completely with KTH's management. Carl Falck says that there is at present no room, while Anna Götlind uses the argument that SULF does not take up individual cases, as if there would after all be room if only my case were not so terribly individual.

I have dutifully paid my fee to SULF for 40 years (at least 100,000 kr) without ever bothering them with a single little matter. When I finally turn to SULF in an exposed situation I get the cold shoulder. Of course I feel rather stupid for having fallen for such a dud. But I am probably not alone in the old faithful crowd who believed that the union was there for its members. Among the young it is probably not so popular to pay union fees anyway.

The SULF statutes say:
• SULF has the task of safeguarding and monitoring the members' professional, social and economic interests and of representing the members in such questions.

Am I to interpret this as meaning that my professional interests in my role as professor lie outside SULF's area of responsibility? Does SULF not take up individual cases, but only cases that concern all the members? Must all the members have been subjected to censorship for SULF to take up the question? We shall hear what Anna Götlind answers:
• I do not answer. I have already answered that SULF will not assist you in this matter.

Eventually I will probably have to summarize my experiences of SULF in a debate piece in the union's journal Universitetsläraren, unless that too is censored away... but Newsmill is always there... a new post is coming soon...

Definition as Physical Fact

In science and philosophy the distinction between synthetic and analytic statements is fundamental, according to Kant's Critique of Pure Reason. An analytic statement is about language and its truth can be evaluated by checking the meaning of the words forming the statement. A definition is analytic as a specification of the meaning of a new word in terms of previously defined words, e.g. bachelor as unmarried man.
A synthetic statement is about some reality and can in principle be checked by observing the reality. The statement "1 meter is equal to 100 centimeters" is analytic, while the statement "this stick is 1 meter long" is synthetic. To subject an analytic statement to experimental observation would be ridiculous: To check by experiment if there are 100 centimeters in 1 meter would not give a Nobel Prize, just laughs. So if an experiment is set up to test a statement, that is a sign that the statement is viewed as synthetic.

In modern physics the distinction between a definition (analytic statement) and a synthetic statement is sometimes blurred into statements which are viewed to be both analytic (true by definition) and synthetic about some reality, or rather sometimes analytic and sometimes synthetic, sometimes definition sometimes fact. Such a statement makes it possible to say something about reality which cannot be denied, and it is directly recognized as such. When you hear a physicist making a statement claiming that something cannot be denied, then the statement is such a double analytic-synthetic statement. Here are some key examples:
1. The speed of light in vacuum is constant.
2. Heavy mass is equal to inertial mass.

The constancy of the speed of light is a definition, since according to the 1983 standard the length unit of a meter is defined as a certain fraction of a light second = the distance traveled by light in one second. The speed of light is thus by definition equal to 1 light second per second, no more no less. On the other hand, a physicist is convinced that the speed of light is constant as a physical fact. A physicist would say that because the speed of light is constant in reality, it can be used to define the length standard. So we have a definition which is a physical fact at the same time: Double analytic-synthetic.

Einstein was a master of this form of double-play: The basic assumption of special relativity is that the speed of light is constant, and Einstein uses this statement sometimes as analytic and sometimes as synthetic. Very clever and very confusing. But according to Kant it is not reasonable. In general relativity Einstein uses the equality of heavy and inertial mass both as definition and physical fact. In this case experimental verification of equality could give a Nobel Prize.

In climate science the following statement is the very basis of climate alarmism:
• No-feedback climate sensitivity is equal to 1 C, with climate sensitivity the global warming from doubled atmospheric CO2.

This is presented as an undeniable fact and as such is an example of a double analytic-synthetic statement. The 1 C comes from a direct application of Stefan-Boltzmann's radiation law Q = sigma T^4, in its differentiated form dQ/Q = 4 dT/T, with Q ~ 240 W/m2, T ~ 288 K and dQ = 4 W/m2 as "radiative forcing" from doubled CO2. Thus dT ~ 1 C as climate sensitivity. This statement is analytic because the simple algebraic law Q = sigma T^4 cannot tell anything about the reaction of the complex Earth-atmosphere system upon a small perturbation. So climate sensitivity = 1 C is a definition, but it is used as a statement of factual global warming of 1 C. It is a double analytic-synthetic statement, and it is recognized as an undeniable fact about reality. It is so undeniable that even skeptics like Lindzen, Monckton and Spencer are convinced that it is a true fact and not just a definition.
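For what it is worth, the arithmetic in that differentiated form is easy to redo; the inputs below are simply the values quoted in the post, not independently checked data.

```python
# Differentiating Q = sigma*T^4 gives dQ = 4*sigma*T^3*dT, i.e. dT = T*dQ/(4*Q).
# Inputs are the values quoted above: Q ~ 240 W/m2, T ~ 288 K, dQ = 4 W/m2.
Q, T, dQ = 240.0, 288.0, 4.0
print(f"dT = {T * dQ / (4 * Q):.2f} C")   # ~1.2 C, the "about 1 C" no-feedback figure
```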
We just learned that a double analytic-synthetic statement can be extremely powerful, the very basis of climate alarmism, yet it is easy to discover as soon as one is aware of the double-play. I hope the reader is stimulated to find other examples of double analytic-synthetic statements used in the debate today. They are not difficult to find once the light is on. For example, what about the statement:
• Educated people are superior to not so well educated people!
Definition or fact, or both?

Sunday 8 May 2011

The Final Solution by The Royal Swedish Academy

• Together with Stockholm Environment Institute, Stockholm Resilience Centre, Beijer Institute for Ecological Economics and Potsdam Institute for Climate Impact Research, the Royal Swedish Academy of Sciences will bring together some of the world's most renowned thinkers and experts on global sustainability, 16-19 May 2011 in Stockholm. Only for invited guests.
• Normatively, the carbon-based economic model is also an unsustainable situation.
• This structural transition is the start of a "Great Transformation" into a sustainable society, which must inevitably proceed within the planetary guard rails of sustainability.
• By the middle of the century, the global energy systems must largely be decarbonised.
• Production, consumption patterns and lifestyles in all of the three key transformation fields must be changed in such a way that global greenhouse gas emissions are reduced to an absolute minimum over the coming decades, and low-carbon societies can develop.
• The extent of the transformation ahead of us can barely be overestimated.
• In terms of profound impact, it is comparable to the two fundamental transformations in the world's history:
• the Neolithic Revolution, i.e. the invention and spreading of farming and animal husbandry, and the Industrial Revolution, meaning the transition from agricultural to industrialised society.
• This would be something of a quantum leap for civilisation.
• It should in principle also be possible to reach a universal consensus regarding human civilisation's ability to survive within the natural boundaries imposed by planet Earth.
• This necessarily presupposes an extensive "Global Enlightenment".
• So nothing less than a new social contract must be agreed to.
• Science will play a decisive, although subservient, role here.
• Ultimately, sustainability is a question of imagination.

In other words, a Final Solution to the Carbon Question will be presented by the Royal Swedish Academy of Sciences. The basic idea is to comb Europe through from West to East and from North to South for carbon and transport it to Eastern Poland, where it will be gassed (in special camps, see picture above). This will secure a carbon-free sustainable Europe, which will serve as a model for the rest of the world including its 3 billion people who still do not have access to essential modern energy services.

The Symposium will conclude with a memorandum signed by key Nobel Laureates, crowned by a dinner hosted by King Carl XVI Gustaf. Among the invited 50 of the world's most renowned thinkers, we find:
• Martin Rees, President of the Royal Society,
• Mikhail Gorbachev, Nobel Peace Prize 1990
• Andreas Carlgren, Swedish Minister of Environment
• Murray Gell-Mann, Nobel Prize in Physics 1969 for his contributions and discoveries concerning the classification of elementary particles and their interactions.
• David Gross, Nobel Prize in Physics 2004 for the discovery of asymptotic freedom in the theory of the strong interaction.
• Johan Rockström, Stockholm Resilience Center
• Anders Wijkman, Stockholm Environment Institute.

Note that the key Nobel Laureates when signing the memorandum accept that science will play a subservient role. It is natural to compare with the Manifesto of the Ninety-Three and the suppression of quantum mechanics and relativity in the Soviet Union as "idealistic" and "bourgeois", and in Nazi-Germany as "Jewish physics".

Friday 6 May 2011

Presentation at Stockholm Initiative: The IPCC Trick

The Hustler (1886-1905) by Ernst Josephson (student at the Royal Academy of Fine Arts in 1867)

Here is a summary of a short presentation at the annual meeting of the Stockholm Initiative at the Royal Academy of Fine Arts, May 7.

1. IPCC Climate Sensitivity = 3 C
• The CO2 climate alarmism of IPCC is based on an estimate of climate sensitivity (global warming by doubled CO2) of 3 C obtained by positive feedback from a no-feedback sensitivity of 1 C.
• The no-feedback sensitivity is obtained by definition from Stefan-Boltzmann, dQ/Q = 4 dT/T, with dQ = 4 W/m2 assumed "radiative forcing" from doubled CO2.
• Note: A definition says nothing about reality. The 4 W/m2 of "radiative forcing" is a theoretical assumption rather than observed reality. Insolation constant.

2. The Question
• What is the global warming effect of a 1 % change of atmospheric radiative properties?
• 4 W/m2 is about 1 % of gross insolation of 360 W/m2
• 3 C = 1 % of gross temperature 288 K
• Reasonable?? Unreasonable??

3. Observation + Simple Models: Climate Sensitivity = 0.3 C
Combining basic mathematical models and direct observation of
• temperatures, lapse rate, insolation and thermodynamics,
one obtains a climate sensitivity which is 10 times smaller than IPCC:
• 1 % change of atmospheric radiative properties
• 0.3 C is about 1% of "atmospheric effect" of 33 C (= 288 - 255 K)
• wellposed (stable): 1% forcing gives 1% = 0.3 C

4. IPCC Trick: Backradiation
• Real radiative exchange between surface and atmosphere: 30 - 60 W/m2
• 1 % change of atmospheric properties: 0.3 - 0.6 W/m2 net radiative forcing
• IPCC backradiation exchange: 300 - 400 W/m2
• 1 % change of atmospheric properties: 4 W/m2 gross radiative forcing
• view 3 C as 1% of gross temperature 288 K, not 1% of "atmospheric effect".

5. Backradiation Fiction
In Computational Blackbody Radiation I give a new mathematical derivation of Planck's radiation law showing that backradiation is fiction. This is mathematical evidence that the 3 C of IPCC is based on fiction: 10 times too big.

6. Wellposedness: Butterfly in Brazil vs Tornado in Texas
IPCC claims that a small cause (1% or 0.1% change of atmospheric properties) can have a big effect (global warming of 3 C = 10% of atmospheric effect 33 C).

7. The Lorenz Model
Can a butterfly in Brazil set off a tornado in Texas?
• Can be disproved by removing butterfly and observing tornados.
• Can never be proved, because a very precise model is required (both butterfly and tornado).
Requires unstable system: small cause - big effect.

8. Is global climate unstable?
Observations say No rather than Yes. Atmosphere as air conditioner: Radiative forcing changes intensity of thermodynamics with little temperature change. Compare with boiling water: heat forcing gives more vigorous boiling at steady temperature.

9. KTH-gate
KTH censored my mathematical analysis of climate models. Unique in (Swedish) modern academic history (after 1632).
At present my professors' union SULF hesitates to take up my case, as if my union and KTH were acting in tandem to silence my voice. How is this possible? Well, in the new university system in Sweden of 2011, it is the administrative hierarchy of rector, dean and prefect which determines the scientific truth, and not the professor (as during 1632 - 2010). The censorship of my work is therefore fully logical and apparently accepted even by the professors' union, and also by Swedish professors. Only one has questioned the censorship, Ingemar Nordin.

Thursday, 5 May 2011

The IPCC Trick 6
Atom Stability

Apr 28, 2007 #1
Why, in QM, does the electron not fall toward the nucleus? After all, the only force between nucleus and electron is attractive (- electron and + nucleus). Is it for the same reason that the moon does not fall to the earth?

Apr 28, 2007 #2
(Staff Emeritus, Science Advisor, Education Advisor)
Please read our FAQ in the General Physics forum.

Apr 28, 2007 #3
Electrons really do fall to the nucleus. For some time they are in equilibrium on an orbit, exactly like the moon. In this equilibrium, attraction is exactly compensated by inertia (the centrifugal force). However, rotating electrons lose energy because they emit electromagnetic radiation. Therefore, they actually fall onto the nucleus. The first strange thing is that they do that suddenly, not continuously. The second strange thing is that they do not fall little by little but by finite steps to precisely defined orbits. The last strange thing is that they finally stop falling and do not reach the nucleus. The last level they reach is called the fundamental level. This is explained by the wave-like nature of the electron and is at the basis and origin of quantum mechanics. The other levels are called excited levels. At low temperatures, most atoms are in the fundamental level, where the electron has reached the final stable orbit. In principle the moon could also behave like that, because gravitational waves may also dissipate its energy. However, the moon and the earth are such big objects that their wave-like behaviour is totally negligible and not observable. If you want more understanding, you should train and learn in physics and mathematics.
Last edited: Apr 28, 2007

Apr 28, 2007 #4
(Staff: Mentor)
The Bohr-Sommerfeld planetary model of the atom has been dead, dead, dead since the 1920s. Please don't encourage people to think in terms of that model, except as a purely historical exercise.

Apr 28, 2007 #5
Ok, I'm a physicist. I know the Bohr-Sommerfeld model... but it does not explain the atomic structure. It states; nothing else. THAT'S NOT A VALID ANSWER: it's like saying "this is so, because so it is". And the problem is the same in planetary motion; the gravitational force is only attractive... But why does the moon not fall to the earth? I don't want answers such as "there exist strange things, theories that stand tall, etc.". Give me the physical reason... Should I think that no one knows it?

Apr 28, 2007 #6
(Staff: Mentor)
In the case of the moon (which can be described without QM, of course), it is always falling (accelerating) towards the earth. However, it is also moving sideways because of its orbital motion, so it always misses the earth! :biggrin: In the case of the atom, the electron does sometimes "hit the nucleus." QM does not allow us to calculate a planet-like trajectory for the electron. All we can calculate (from solving Schrödinger's Equation) is the probability of finding the electron in various locations. It turns out that in general the electron does have a very small probability of being located inside the nucleus, at any instant of time. If it is then possible for the electron to interact with the nucleus, and still satisfy conservation of energy, it can do so. This is called electron capture, and some radioactive nuclei do decay via this process.
Apr 28, 2007 #7
I don't understand the purpose and the utility of this remark: to answer the question by ClubDogo, which was in a naïve style, it would have been totally meaningless to come in with the Schrödinger equation and wavefunctions. The main ingredients to answer the question were included in my post, in an "allegorical" yet useful way. For students with a 30-hour background in quantum mechanics, the translation to the rigorous language is easy. However, they usually ignore basic things like radiation by charged particles and of course quantum field theory. Therefore, it would be a total illusion to think that a more precise language would make a better answer, at this level of a discussion. Finally, the next question is: why can't the fundamental level lose any more energy by radiation? And to answer this question, the BS model would indeed become insufficient. Well, I guess so, but I could have fun this evening thinking about it.
Electron capture involves the weak interaction. I think the initial question by ClubDogo was related to the stability of atoms under the electromagnetic interaction only (- electron and + nucleus). This is indeed an important thing to learn and understand in quantum mechanics. I think it is of no real help to involve the weak interaction in the answer.
Last edited: Apr 28, 2007

Apr 28, 2007 #8
You should understand that the BS model gave a first "explanation" of the atomic levels. The idea was that electrons had a wave-like structure and that stationary states had to be "resonant". However, this was a naïve theory. Obviously this wave had to be described in 3 spatial dimensions and time. Bohr and Sommerfeld and everybody at that time knew that very well. Many people at that time also had a deep understanding of classical mechanics, like Schrödinger and Dirac. It turned out that Schrödinger was the first to come up with a full wave picture for the hydrogen atom, a result that he based on his knowledge of CM. Dirac was soon able to go further. Now, what is the conclusion of this story? I think that, to some extent, we cannot say that the BS theory is dead or that it does not explain the stability of the atoms. We cannot say that it states without explaining. The full consistent quantum theory will not give you any further explanation, although it will give you more aspects as well as other consequences (vacuum fluctuations, for example, ...). The stability of the atom in the BS model or in the full QM theory has the same explanation: the stationary states are a resonant structure. In other words: in the simplified BS model as well as in QM, the orbitals are resonant structures (eigenmodes, eigenvectors). In the BS model, this structure is oversimplified, but gave the right levels (by chance). It was practically a simple 1D model. In the QM theory, the structure is almost perfectly described and therefore more predictions are possible.
Last edited: Apr 28, 2007

Apr 28, 2007 #9
(Staff Emeritus, Science Advisor, Education Advisor)
This seems to be a common occurrence here lately, and I don't know why. Can you show something that actually has a "physical reason", so that we can THEN at least understand what you mean by such a thing? As a "physicist", you of all people should have been aware that at the MOST FUNDAMENTAL LEVEL, all we have for every single phenomenon is a description. Go ahead and pick anything and see if what you think you have "understood" is nothing more than a physical description of that phenomenon.
Apr 28, 2007 #10
(Science Advisor, Gold Member)
Echoing what ZapperZ has said: Physics is a continuing attempt to answer the "why". That is actually done by way of improved descriptions - usually by way of a theory which has predictive power. Yet... when you answer one "why" you end up creating another! There are plenty of questions we will likely never be able to answer the "why" about. Why were you born? We might be able to answer the "how" (descriptive) but not the "why". In sum: It is not a flaw in a theory that it does not "explain" or "describe" in a fashion that answers "why" questions. It is possible that there is no better theory of gravity than General Relativity, for example, and we still do not know "why" we live in 4 spacetime dimensions rather than some other number.

Apr 28, 2007 #11
As I have said very often, physics is not about explaining things. Physics is about describing things of the world and their relations with a minimum amount of information. We can think that quantum mechanics and electromagnetism explain atomic physics. It is more correct to think that atomic physics does not need more than these two theories to get a full theoretical description, and that moreover, atomic physics shares QM and EM with many other parts of physics.

Apr 28, 2007 #12
(Staff: Mentor)
I mentioned electron capture because it illustrates that the electron sometimes really does "fall into the nucleus" in some sense, as described by the QM probability distribution. The fact that it proceeds via the weak interaction doesn't matter; the weak interaction doesn't get the electron "into" the nucleus, as far as I know.

Apr 29, 2007 #13
I understood why you mentioned the EC. But an EC depends much more on the state of the nucleus than on the state of the electron. It was good, however, to remind ClubDogo of the non-zero probability of presence within the nucleus. But I thought ClubDogo was comparing the (apparent) stability of the moon orbit with the stability of atoms. Therefore, my preferred answer was back to the basics:
• without radiative effects, the stability is the consequence of the wave-like nature of the electrons, for any level
• with radiative effects, only the fundamental level is absolutely stable
• the next good question is: why does the fundamental level not radiate EM energy?

May 2, 2007 #14
(Science Advisor)
The Bohr-Sommerfeld theory had lots of problems; in fact both Bohr and Sommerfeld became proponents of the then-new quantum theory. Among other things, the Bohr-Sommerfeld and Schrödinger/Heisenberg theories were based on very different physical reasoning. Yet the fundamental idea of a stationary state, invented by Bohr, became a key ingredient of modern QM. The stability of the hydrogen atom is virtually guaranteed in QM by the stationary states given by the Schrödinger Eq. -- naturally, this assumes that the hydrogen-radiation interaction is small. Really, we build in atomic stability from the very beginning in QM. And this stability is fundamentally a quantum effect.
Reilly Atkinson

May 4, 2007 #15
Maybe your question was intended to be: "If a proton and an electron are stationary at some distance and are then released, and if they are point particles, shouldn't there be a non-zero probability that they don't interact to form the hydrogen atom but, instead, the electron falls directly onto the proton?" I personally don't think yours is a silly question. Personal answer: they are not point particles.
Last edited: May 4, 2007
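The back-of-the-envelope version of the answer running through this thread — that the wave-like nature of the electron sets a lowest level below which it cannot fall — can be made quantitative with the standard uncertainty-principle estimate of the hydrogen ground state. This is a textbook sketch rather than anything taken from the posts above: confining the electron to a region of size r forces a momentum of order ħ/r, so the total energy is roughly

\[
E(r) \;\approx\; \frac{\hbar^{2}}{2 m_{e} r^{2}} \;-\; \frac{e^{2}}{4\pi\varepsilon_{0}\, r},
\]

which is minimized not at r = 0 but at a finite radius,

\[
\frac{dE}{dr} = 0 \;\Rightarrow\; r = a_{0} = \frac{4\pi\varepsilon_{0}\hbar^{2}}{m_{e} e^{2}} \approx 0.53\ \text{\AA},
\qquad
E(a_{0}) = -\frac{m_{e} e^{4}}{2\,(4\pi\varepsilon_{0})^{2}\hbar^{2}} \approx -13.6\ \text{eV}.
\]

Squeezing the electron any closer to the nucleus raises the kinetic (confinement) energy faster than the Coulomb attraction can lower the potential energy, which is the sense in which the ground state cannot "fall" any further.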
Sunday, October 12, 2014 Mind control Here's a pre-edited version of my piece for the Observer today, with a little bit more stuff still in it and some links. This was a great topic to research, and a bit disconcerting at times too. Be careful what you wish for. That’s what Joel, played by Jim Carrey, discovers in Charlie Kaufmann’s 2004 film Eternal Sunshine of the Spotless Mind, when he asks a memory-erasure company Lacuna Inc. to excise the recollections of a painful breakup from his mind. While the procedure is happening, Joel realizes that he doesn’t want every happy memory of the relationship to vanish, and seeks desperately to hold on to a few fragments. The movie offers a metaphor for how we are defined by our memories, how poignant is both their recall and their loss, and how unreliable they can be. So what if Lacuna’s process is implausible? Just enjoy the allegory. Except that selective memory erasure isn’t implausible at all. It’s already happening. Researchers and clinicians are now using drugs to suppress the emotional impact of traumatic memories. They have been able to implant false memories in flies and mice, so that innocuous environments or smells seem to be “remembered” as threatening. They are showing that memory is not like an old celluloid film, fixed but fading; it is constantly being changed and updated, and can be edited and falsified with alarming ease. “I see a world where we can reactivate any kind of memory we like, or erase unwanted memories”, says neuroscientist Steve Ramirez of the Massachusetts Institute of Technology. “I even see a world where editing memories is something of a reality. We’re living in a time where it’s possible to pluck questions from the tree of science fiction and ground them in experimental reality.” So be careful what you wish for. But while it’s easy to weave capabilities like this into dystopian narratives, most of which the movies have already supplied – the authoritarian memory-manipulation of Total Recall, the mind-reading police state of Minority Report, the dream espionage of Inception – research on the manipulation of memory could offer tremendous benefits. Already, people suffering from post-traumatic stress disorder (PTSD), such as soldiers or victims of violent crime, have found relief from the pain of their dark memories through drugs that suppress the emotional associations. And the more we understand about how memories are stored and recalled, the closer we get to treatments for neurodegenerative conditions such as Alzheimer’s and other forms of dementia. So there are good motivations for exploring the plasticity of memory – how it can be altered or erased. And while there are valid concerns about potential abuses, they aren’t so very different from those that any biomedical advance accrues. What seems more fundamentally unsettling, but also astonishing, about this work is what it tells us about us: how we construct our identity from our experience, and how our recollections of that experience can deceive us. The research, says Ramirez, has taught him “how unstable our identity can be.” Best forgotten Your whole being depends on memory in ways you probably take for granted. You see a tree, and recognize it as a tree, and know it is called “tree” and that it is a plant that grows. You know your language, your name, your loved ones. Few things are more devastating, to the individual and those close to them, than the loss of these everyday facts. As the memories fade, the person seems to fade with them. 
Christopher Nolan’s film Memento echoes the case of Henry Molaison, who, after a brain operation for epilepsy in the 1950s, lost the ability to record short-term memories. Each day his carers had to introduce themselves to him anew. Molaison’s surgery removed a part of his brain called the hippocampus, giving a clue that this region is involved in short-term memory. Yet he remembered events and facts learnt long ago, and could be taught new ones, indicating that long-term memory is stored somewhere else. Using computer analogies for the brain is risky, but it’s reasonable here to compare our short-term memory with a computer’s ephemeral working memory or RAM, and the long-term memory with the hard drive that holds information more durably. While short-term memory is associated with the hippocampus, long-term memory is more distributed throughout the cortex. Some information is stored long-term, such as facts and events we experience repeatedly or that have an emotional association; other items vanish within hours. If you look up the phone number of a plumber, you’ll probably have forgotten it by tomorrow, but you may remember the phone number of your family home from childhood. What exactly do we remember? Recall isn’t total – you might retain the key aspects of a significant event but not what day of the week it was, or what you were wearing, or exactly what was said. Your memories are a mixed bag: facts, feelings, sights, smells. Ramirez points out that, while Eternal Sunshine implies that all these features of a memory are bundled up and stored in specific neurons in a single location in the brain, in fact it’s now clear that different aspects are stored in different locations. The “facts”, sometimes called episodic memory, are filed in one place, the feelings in another (generally in a brain region called the amygdala). All the same, those components of the memory do each have specific addresses in the vast network of our billions of neurons. What’s more, these fragments remain linked and can be recalled together, so that the event we reconstruct in our heads is seamless, if incomplete. “Memory feels very cohesive, but in reality it’s a reconstructive process”, says Ramirez. Given all this filtering and parceling out, it’s not surprising that memory is imperfect. “The fidelity of memory is very poor”, says psychologist Alain Brunet of McGill University in Montreal. “We think we remember exactly what happens, but research demonstrates that this is a fallacy.” It’s our need for a coherent narrative that misleads us: the brain elaborates and fills in gaps, and we can’t easily distinguish the “truth” from the invention. You don’t need fancy technologies to mess with memory – just telling someone they experienced something they didn’t, or showing them digitally manipulated photos, can be enough to seed a false conviction. That, much more than intentional falsehood, is why eye-witness accounts may be so unreliable and contradictory. It gets worse. One of the most extraordinary findings of modern neuroscience, reported in 2000 by neurobiologist Joseph LeDoux and his colleagues at New York University, is that each time you remember something, you have to rebuild the memory again. LeDoux’s team reported that when rats were conditioned to associate a particular sound with mild electric shocks, so that they showed a “freezing” fear response when they heard the sound subsequently, this association could be broken by infusing the animals’ amygdala with a drug called anisomycin. 
The sound then no longer provoked fear – but only if the drug was administered within an hour or so of the memory being evoked. Anisomycin disrupts biochemical processes that create proteins, and the researchers figured that this protein manufacture was essential for restoring a memory after it has arisen. This is called reconsolidation: it starts a few minutes after recall, and takes a few hours to complete. So those security questions asking you for the name of your first pet are even more bothersome than you thought, because each time you have to call up the answer (sorry if I just made you do it again), your brain then has to write the memory back into long-term storage. A computer analogy is again helpful. When we work on a file, the computer makes a copy of the stored version and we work on that – if the power is cut, we still have the original. But as Brunet explains, “When we remember something, we bring up the original file.” If we don’t write it back into the memory, it’s gone. This rewriting process can, like repeated photocopying, degrade the memory a little. But LeDoux’s work showed that it also offers a window for manipulating the memory. When we call it up, we have the opportunity to change it. LeDoux found that a drug called propranolol can weaken the emotional impact of a memory without affecting the episodic content. This means that the effect of painful recollections causing PTSD can be softened. Propranolol is already known to be safe in humans: it is a beta blocker used to treat hypertension, and (tellingly) also to combat anxiety, because it blocks the action of the stress hormone epinephrine in the amygdala. A team at Harvard Medical School has recently discovered that xenon, the inert gas used as an anaesthetic, can also weaken the reconsolidation of fear memories in rats. An advantage of xenon over propranolol is that it gets in and out of the brain very quickly, taking about three minutes each way. If it works well for humans, says Edward Meloni of the Harvard team, “we envisage that patients could self-administer xenon immediately after experiencing a spontaneous intrusive traumatic memory, such as awakening from a nightmare.” The timing of the drug relative to reactivation of the trauma memory may, he says, be critical for blocking the reconsolidation process. These techniques are now finding clinical use. Brunet uses propranolol to treat people with PTSD, including soldiers returned from active combat, rape victims and people who have suffered car crashes. “It’s amazingly simple,” he says. They give the patients a pill containing propranolol, and then about an hour later “we evoke the memory by having patients write it down and then read it out.” That’s often not easy for them, he says – but they manage it. The patients are then asked to continue reading the script regularly over the next several weeks. Gradually they find that its emotional impact fades, even though the facts are recalled clearly. “After three or four weeks”, says Brunet, “our patients say things like ‘I feel like I’m smiling inside, because I feel like I’m reading someone else’s script – I’m no longer personally gripped by it.’” They might feel empathy with the descriptions of the terrible things that happened to this person – but that person no longer feels like them. No “talking cure” could do that so quickly and effectively, while conventional drug therapies only suppress the symptoms. “Psychiatry hasn’t cured a single patient in sixty years”, Brunet says. 
These cases are extreme, but aren’t even difficult memories (perhaps especially those) part of what makes us who we are? Should we really want to get rid of them? Brunet is confident about giving these treatments to patients who are struggling with memories so awful that life becomes a torment. “We haven’t had a single person say ‘I miss those memories’”, he says. After all, there’s nothing unnatural about forgetting. “We are in part the sum of our memories, and it’s important to keep them”, Brunet says. “But forgetting is part of the human makeup too. We’re built to forget.” Yet it’s not exactly forgetting. While propranolol and xenon can modify a memory by dampening its emotional impact, the memory remains: PTSD patients still recall “what happened”, and even the emotions are only reduced, not eliminated. We don’t yet really understand what it means to truly forget something. Is it ever really gone or just impossible to recall? And what happens when we learn to overcome fearful memories – say, letting go of a childhood fear of dogs as we figure that they’re mostly quite friendly? “Forgetting is fairly ill-defined”, says neuroscientist Scott Waddell at the University of Oxford. “Is there some interfering process that out-competes the original memory, or does the original memory disappear altogether?” Some research on flies suggests that forgetting isn’t just a matter of decay but an active process in which the old memory is taken apart. Animal experiments have also revealed the spontaneous re-emergence of memories after they were apparently eliminated by re-training, suggesting that memories don’t vanish but are just pushed aside. “It’s really not clear what is going on”, Waddell admits. Looking into a fly’s head That’s not so surprising, though, because it’s not fully understood how memory works in the first place. Waddell is trying to figure that out – by training fruit flies and literally looking into their brains. What makes flies so useful is that it’s easy to breed genetically modified strains, so that the role of specific genes in brain activity can be studied by manipulating or silencing them. And the fruit fly is big and complex enough to show sophisticated behavior, such as learning to associate a particular odour with a reward like sugar, while being simple enough to comprehend – it has around 100,000 neurons, compared to our many billions. What’s more, a fruit fly’s brain is transparent enough to look right through it under the microscope, so that one can watch neural processing while the fly is alive. By attaching fluorescent molecules to particular neurons, Waddell can identify the neural circuitry linked to a particular memory. In his lab in Oxford he showed me an image of a real fly’s brain: a haze of bluish-coloured neurons, with bright green spots and filaments that are, in effect, a snapshot of a memory. The memory might be along the lines of “Ah, that smell – the last time I followed it, it led to something tasty.” How do you find the relevant neurons among thousands of others? The key is that when neurons get active to form a memory, they advertise their state of busyness. They produce specific proteins, which can be tagged with other light-emitting proteins by genetic engineering of the respective genes. One approach is to inject benign viruses that stitch the light-emission genes right next to the gene for the protein you want to tag; another is to engineer particular cells to produce a foreign protein to which the fluorescent tags will bind. 
When these neurons get to work forming a memory, they light up. Ramirez compares it to the way lights in the windows of an office block at night betray the location of workers inside. This ability to identify and target individual memories has enabled researchers like Waddell and Ramirez to manipulate them experimentally in, well, mind-boggling ways. Rather than just watching memories form by fluorescent tagging, they can use tags that act as light-activated switches to turn gene activity on or off with laser light directed down an optical fibre into the brain. This technique, called optogenetics, is driving a revolution in neuroscience, Ramirez says, because it gives researchers highly selective control over neural activity – enabling them in effect to stimulate or suppress particular thoughts and memories. Waddell’s lab is not a good place to bring a banana for lunch. The fly store is packed with shelves of glass bottles, each full of flies feasting on a lump of sugar at the bottom. Every bottle is carefully labeled to identify the genetic strain of the insects it contains: which genes have been modified. But surely they get out from time to time, I wonder – and as if on cue, a fly buzzes past. Is that a problem? “They don’t survive for long on the outside,” Waddell reassures me. Having spent the summer cursing the plague of flies gathering around the compost bin in the kitchen, I’m given fresh respect for these creatures when I inspect one under the microscope and see the bejeweled splendor of its red eyes. It’s only sleeping: you can anaesthetize fruit flies with a puff of carbon dioxide. That’s important for mapping neurons to memories in the microscope, because there’s not much going on in the mind of a dead fly. These brain maps are now pretty comprehensive. We know, for example, which subset of neurons (about 2,000 in all) is involved in learning to recognize odours, and which neurons can give those smells good or bad associations. And thanks to optogenetics, researchers have been able to switch on some of these “aversive” neurons while flies smell a particular odour, so that they avoid it even though they have actually experienced nothing bad (such as shock treatment) in its presence – in other words, you might say, to stimulate a fictitious false memory. For a fly, it’s not obvious that we can call this “fear”, Waddell says, but “it’s certainly something they don’t like”. In the same way, by using molecular switches that are flipped with heat rather than light, Waddell and his colleagues were able to give flies good vibes about a particular smell. Flies display these preferences by choosing to go in particular directions when they are placed in little plastic mazes, some of them masterfully engineered with little gear-operated gates courtesy of the lab’s 3D printer. Ramirez, working in a team at MIT led by Susumu Tonegawa, has practiced similar deceptions on mice. In an experiment in 2012 they created a fear memory in a mouse by putting it in a chamber where it experienced mild electric shocks to the feet. While this memory was being laid down, the researchers used optogenetic methods to make the corresponding neurons, located in the hippocampus, switchable with light. Then they put the mouse in a different chamber, where it seemed perfectly at ease. But when they reactivated the fear memory with light, the mouse froze: suddenly it had bad feelings about this place. That’s not exactly implanting a false memory, however, but just reactivating a true one. 
To genuinely falsify a recollection, the researchers devised a more elaborate experiment. First, they placed a mouse in a chamber and labeled the neurons that recorded the memory of that place with optogenetic switches. Then the mouse was put in a different chamber and given mild shocks – but while these were delivered, the memory of the first chamber was triggered using light. When the mouse was then put back in the first chamber it froze. Its memory insisted, now without any artificial prompting, that the first chamber was a nasty place, even though nothing untoward had ever happened there. It is not too much to say that a false reality had been directly written into the mouse’s brain. You must remember this The problem with memory is often not so much that we totally forget something or recall it wrongly, but that we simply can’t find it even though we know it’s in there somewhere. What triggers memory recall? Why does a fly only seem to recall a food-related odour when it is hungry? Why do we feel fear only if we’re in actual danger, and not all the time? Indeed, it is the breakdown of these normal cues that produces PTSD, where the fear response gets triggered in inappropriate situations. A good memory is largely about mastering this triggering process. Participants in memory competitions that involve memorizing long sequences of arbitrary numbers are advised to “hook” the information onto easily recalled images. A patient named Solomon Shereshevsky, studied in the early twentieth century by the neuropsychologist Alexander Luria, exploited his condition of synaesthesia – the crosstalk between different sensory experiences such as sound and colour – to tag information with colours, images, sounds or tastes so that he seemed able to remember everything he heard or read. Cases like this show that there is nothing implausible about Jorge Luis Borges’ fictional character Funes the Memorious, who forgets not the slightest detail of his life. We don’t forget because we run out of brain space, even if it sometimes feels like that. Rather than constructing a complex system of mnemonics, perhaps it is possible simply to boost the strength of the memory as it is imprinted. “We know that emotionally arousing situations are more likely to be remembered than mundane ones”, LeDoux has explained. “A big part of the reason is that in significant situations chemicals called neuromodulators are released, and they enhance the memory storage process.” So memory sticks when the brain is aroused: emotional associations will do it, but so might exercise, or certain drugs. And because of reconsolidation, it seems possible to enhance memory after it has already been laid down. LeDoux has found that a chemical called isoproterenol has the opposite effect from propranolol on reconsolidation of memory in rats, making fear memories even stronger as they are rewritten into long-term storage in the amygdala. If it works for humans too, he speculates that the drug might help people who have “sluggish” memories. Couldn’t we all do with a bit of that, though? Ramirez regards chemical memory enhancement as perfectly feasible in principle, and in fact there is already some evidence that caffeine can enhance long-term memory. But then what is considered fair play? No one quibbles about students going into an exam buoyed up by an espresso, but where do we draw the line? Mind control It’s hard to come up with extrapolations of these discoveries that are too far-fetched to be ruled out. 
You can tick off the movies one by one. The memory erasure of Eternal Sunshine is happening right now to some degree. And although so far we know only how to implant a false memory if it has actually been experienced in another context, as our understanding of the molecular and cellular encoding of memory improves Ramirez thinks it might be feasible to construct memories “from the ground up”, as in Total Recall or the implanted childhood recollections of the replicant Rachael in Blade Runner. As Rachael so poignantly found out, that’s the way to fake a whole identity. If we know which neurons are associated with a particular memory, we can look into a brain and know what a person is thinking about, just by seeing which neurons are active: we can mind-read, as in Minority Report. “With sufficiently good technology you could do that”, Ramirez affirms. “It’s just a problem of technical limitations.” By the same token, we might reconstruct or intervene in dreams, as in Inception (Ramirez and colleagues called their false-memory experiment Project Inception). Decoding the thought processes of dreams is “a very trendy area, and one people are quite excited about”, says Waddell. How about chips implanted in the brain to control neural activity, Matrix-style? Theodore Berger of the University of Southern California has implanted microchips in rats’ brains that can duplicate the role of the hippocampus in forming long-term memories, recording the neural signals involved and then playing them back. His most recent research shows that the same technique of mimicking neural signals seems to work in rhesus monkeys. The US Defense Advanced Research Projects Agency (DARPA) has two such memory-prosthesis projects afoot. One, called SUBNETS, aims to develop wireless implant devices that could treat PTSD and other combat-related disorders. The other, called RAM (Restoring Active Memories), seeks to restore memories lost through brain injury that are needed for specialized motor skills, such as how to drive a car or operate machinery. The details are under wraps, however, and it’s not clear how feasible it will be to record and replay specific memories. LeDoux professes that he can’t imagine how it could work, given that long-term memories aren’t stored in a single location. To stimulate all the right sites, says Waddell, “you’d have to make sure that your implantation was extremely specific – and I can’t see that happening.” Ramirez says that it’s precisely because the future possibilities are so remarkable, and perhaps so unsettling, that “we’re starting this conversation today so that down the line we have the appropriate infrastructure.” Are we wise enough to know what we want to forget, to remember, or to think we remember? Do we risk blanking out formative, instructive and precious experiences, or finding ourselves one day being told, as Deckard tells Rachael in Blade Runner, “those aren’t your memories – they’re someone else’s”? “The problems are not with the current research, but with the question of what we might be able to do in 10-15 years,” says Brunet. It’s one thing to bring in legislation to restrict abuses, just as we do for other biomedical technologies. But the hardest arguments might be about not what we prohibit but what we allow. Should individuals be allowed to edit their own memories or have false ones implanted? Ramirez is upbeat, but insists that the ethical choices are not for scientists alone to thrash out. “We all have some really big decisions ahead of us,” he says. 
Thursday, October 09, 2014 Do we tell the right stories about evolution? There’s a super discussion on evolutionary theory in Nature this week. It’s prompted by the views of Kevin Laland at St Andrews, who has been arguing for some time that the traditional “evolutionary synthesis” needs to be extended beyond its narrow focus on genetics. In response, Gregory Wray at Duke University and others accuse Laland et al. of presenting a caricature of evolutionary biology and of ignoring all the work that is already being done on the issues Laland highlights. It all sounds remarkably like the response I got to my article in Nature a couple of years back, which was suggesting that, not only is there much we still don’t understand about the way evolution happens at the molecular/genetic level but that the question of how genetic inheritance works seems if anything to be less rather than more clear in the post-genomic era. That too led some biologists to respond in much the same way: No, all is well. (The well-known fact that rules of academic courtesy don’t apply towards “journalists” meant that one or two didn’t quite phrase it that way. You get used to it.) I guess you might expect, in the light of this, that I’d side with Laland et al. But in fact it looks to me as though Wray et al. have a perfectly valid case. After all, my article was formulated after speaking to several evolutionary biologists – and ones who sit well within what could be considered the mainstream. In particular, I think they are right to imply that the diverse mechanisms of evolutionary change known today are ones that, if Darwin didn’t already suspect, would be welcomed avidly by him. The real source of the argument, it seems to me, is expressed right at the outset by Laland et al.: “mainstream evolutionary theory has come to focus almost exclusively on genetic inheritance and processes that change gene frequencies”. I’m not sure that this is true, although for good reason this is certainly a major focus – perhaps the major one – of the field. Wray et al. regard this as a caricature, but I think that what Laland et al. are complaining about here is what I wanted to highlight too: not so much the way most evolutionary biologists think, but how evolutionary biology is perceived from the outside. Part of the reason for that predominant “popular” focus on genes is due (ironically, given what it is actually revealing) to the genomics revolution itself, not least because we were promised that this was going to answer every question about who we are and where we came from. But of course, the popular notion that evolution is simply a process of natural selection among genes was well in place before the industrial-scale sequencing of genomes – and one doesn’t have to look too hard to find the origins of that view. As Wray et al. rightly say, the basic processes that produce evolutionary change are several-fold: natural selection, drift, mutation, recombination and gene flow. Things like phenotypic plasticity add fascinating perspectives to this, and my own suspicion is that an awful lot will become clearer once we have tools for grappling with the complexities of gene regulatory networks. There doesn’t seem to be a huge amount of argument about this. But attempts to communicate much beyond a simple equation of evolution with natural selection at the genetic level have been few and far between. And some of the responses to my article made it clear that this is sometimes a conscious decision. 
Take the view of Paul Griffith, philosopher of science at the University of Sydney. According to ABC News, “While simplistic communication about genetics can be used to hype the importance of research, and it can encourage the impression that genes determine everything, Professor Griffiths said he does not believe the answer is to communicate more complexity.” Then there’s “science communication academic” Joan Leach from The University of Queensland, who apparently “agrees the average member of the public is not going to be that interested in the complexity of genetics, unless its relevant to an issue that they care about.” The ABC story goes on: "Is there a problem that we need to know about here?" Dr Leach said in response to Dr Ball's article. "There are dangers in telling the simple story, but he hasn't spelt out the advantages of embracing complexity in public communication." Sorry plebs, you’re too dumb to be told the truth – you’ll have to make do with the simplistic stories we told many decades ago. A tale of many electrons In what I hope might be a timely occasion with Nobel-fever in the air, here is my leader for the latest issue of Nature Materials. This past decision was a nice one for physics, condensed matter and materials – although curiously it was a chemistry prize. Density functional theory, invented half a century ago, now supplies one of the most convenient and popular shortcuts for dealing with systems of many electrons. It was born in a fertile period when theoretical physics stretched from abstruse quantum field theory to practical electrical engineering. It’s often pointed out that quantum theory is not just a source of counter-intuitive mystery but also an extraordinarily effective intellectual foundation for engineering. It supplies the theoretical basis for the transistor and superconductor, for understanding molecular interactions relevant from mineralogy to biology, and for describing the basic properties of all matter, from superhard alloys to high-energy plasmas. But popular accounts of quantum physics rarely pay more than lip service to this utilitarian virtue – there is little discussion of what it took to turn the ideas of Bohr, Heisenberg and Schrödinger into a theory that works at an everyday level. One of the milestones in that endeavour occurred 50 years ago, when Pierre Hohenberg and Walter Kohn published a paper [1] that laid the foundations of density functional theory (DFT). This provided a tool for transforming the fiendishly complicated Schrödinger equation of a many-body system such as the atomic lattice of a solid into a mathematically tractable problem that enables the prediction of properties such as structure and electrical conductivity. The milieu in which this advance was formulated was rich and fertile, and from the distance of five decades it is hard not to idealize it as a golden age in which scientists could still see through the walls that now threaten to isolate disciplines. Kohn, exiled from his native Austria as a young Jewish boy during the Nazi era and educated in Canada, was located at the heart of this nexus. Schooled in quantum physics by Julian Schwinger at Harvard amidst peers including Philip Anderson, Rolf Landauer and Joaquin Luttinger, he was also familiar with the challenges of tangible materials systems such as semiconductors and alloys. 
In the mid-1950s Kohn worked as a consultant at Bell Labs, where the work of John Bardeen, Walter Brattain and William Shockley on transistors a few years earlier had generated a focus on the solid-state theory of semiconductors. And his ground-breaking paper with Hohenberg came from research on alloys at the Ecole Normale Supérieure in Paris, hosted by Philippe Nozières. Now that DFT is so familiar a technique, used not only to understand electronic structures of molecules and materials but also as a semi-classical approach for studying the atomic structures of fluids, it is easy to forget what a bold hypothesis its inception required. In principle one may write the electron density n(r) of an N-electron system as the integral over space of the N-electron wavefunction, and then to use this to calculate the total energy of the system as a functional of n(r) and the potential energy v(r) of each electron interacting with all the fixed nuclei. (A functional here is a “function of a function” – the energy is a function of the function v(r), say.) Then one could do the calculation by invoking some approximation for the N-electron wavefunction. But Kohn inverted the idea: what if you didn’t start from the complicated N-body wavefunction, but just from the spatially varying electron density n(r)? That’s to say, maybe the external potential v(r), and thus the total energy (for the ground state of the system), depend only on the equilibrium n(r)? Then, that density function is all you needed to know. As Andrew Zangwill puts it in a recent commentary on Kohn’s career [2], “This was a deep question. Walter realized he wasn’t doing alloy theory any more.” Kohn figured out a proof of this remarkable conjecture, but it seemed so simple that he couldn’t believe it hadn’t been noticed before. So he asked Hohenberg, a post-doc in Nozières’ lab, to help. Together the pair formulated a rigorous proof of the conjecture for the case of an inhomogeneous electron gas; since their 1964 paper, several other proofs have been found. That paper was formal and understated to the point of desiccation, and one needed to pay it close attention to see how remarkable the result was. The initial response was muted, and Hohenberg moved subsequently into other areas, such as hydrodynamics, phase transitions and pattern formation. Kohn, however, went on to develop the idea into a practical method for calculating the electronic ground states of molecules and solids, working in particular with Hong Kong-born postdoc Lu-Jeu Sham. Their crucial paper3 was much more explicit about the potential of this approach as an approximation for calculating real materials properties of solids, such as cohesive energies and elastic constants, from quantum principles. It is now one of the most highly cited papers in all of physics, but was an example of a “sleeper”: still the community took some time to wake up to what was on offer. Not until the work of John Pople in the early 1990s did chemists begin to appreciate that DFT could offer a simple and convenient way to calculate electronic structures. It was that work which led to the 1998 Nobel prize in chemistry for Pople and Kohn – incongruous for someone so immersed in physics. Zangwill argues that DFT defies the common belief that important theories reflect the Zeitgeist: it was an idea that was not in the air at all in the 1960s, and, says Zangwill, “might be unknown today if Kohn had not created it in the mid-1960s.” Clearly that’s impossible to prove. 
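Written out in the standard textbook form (a schematic sketch in atomic units, using today's conventional notation rather than that of the original papers), the Hohenberg-Kohn idea is that the ground-state energy is a functional of the density alone,

\[
E[n] \;=\; T_{s}[n] \;+\; \int v(\mathbf{r})\, n(\mathbf{r})\, d^{3}r \;+\; \frac{1}{2}\iint \frac{n(\mathbf{r})\, n(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\, d^{3}r\, d^{3}r' \;+\; E_{\mathrm{xc}}[n],
\]

and the Kohn-Sham route to minimizing it is a set of self-consistent single-particle equations,

\[
\Big[-\tfrac{1}{2}\nabla^{2} + v_{\mathrm{eff}}(\mathbf{r})\Big]\,\varphi_{i}(\mathbf{r}) \;=\; \varepsilon_{i}\,\varphi_{i}(\mathbf{r}),
\qquad
n(\mathbf{r}) \;=\; \sum_{i=1}^{N} |\varphi_{i}(\mathbf{r})|^{2}.
\]

Here T_s[n] is the kinetic energy of a non-interacting reference system and v_eff collects the external potential, the classical electrostatic (Hartree) term and the functional derivative of E_xc[n]; all of the many-body difficulty is hidden in the exchange-correlation functional E_xc[n], which in practice has to be approximated.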
But there’s no mistaking the debt that materials and molecular sciences owe to Kohn’s insight, and so if Zangwill is right, all the more reason to ask if we still create the right sort of environments for such fertile ideas to germinate. 1. Hohenberg, P. & Kohn, W. Phys. Rev. 136, B864-871 (1964). 2. Zangwill, A., (2014). 3. Kohn, W. & Sham, L. J. Phys. Rev. 140, A1133-1138 (1965). Wednesday, October 08, 2014 The moment of uncertainty As part of a feature section in the October issue of La Recherche on uncertainty, I interviewed Robert Crease, historian and philosopher of science at Stony Brook University, New York, on the cultural impact of Heisenberg’s principle. It turned out that Robert had just written a book looking at this very issue – in fact, at the cultural reception of quantum theory in general. It’s called The Quantum Moment, is coauthored by Alfred Scharff Goldhaber, and is a great read – I have written a mini-review for the next (November) issue of Prospect. Here’s the interview, which otherwise appears only in French in La Recherche. Since Robert has such a great way with words, it was one of the easiest I’ve ever done. What led Heisenberg to formulate the uncertainty principle? Was it something that fell out of the formalism in mathematical terms? That’s a rather dramatic story. The uncertainty principle emerged in exchange of letters between Heisenberg and Pauli, and fell out of the work that Heisenberg had done on quantum theory the previous year, called matrix mechanics. In autumn 1926, he and Pauli were corresponding about how to understand its implications. Heisenberg insisted that the only way to understand it involved junking classical concepts such as position and momentum in the quantum world. In February 1927 he visited Niels Bohr in Copenhagen. Bohr usually helped Heisenberg to think, but this time the visit didn’t have the usual effect. They grew frustrated, and Bohr abandoned Heisenberg to go skiing. One night, walking by himself in the park behind Bohr’s institute, Heisenberg had an insight. He wrote to Pauli: “One will always find that all thought experiments have this property: when a quantity p is pinned down to within an accuracy characterized by the average error p, then... q can only be given at the same time to within an accuracy characterized by the average error q1 ≈ h/p1.” That’s the uncertainty principle. But like many equations, including E = mc2 and Maxwell’s equations, its first appearance is not in its now-famous form. Anyway, Heisenberg sent off a paper on his idea that was published in May. How did Heisenberg interpret it in physical terms? He didn’t, really; at the time he kept claiming that the uncertainty principle couldn’t be interpreted in physical terms, and simply reflected the fact that the subatomic world could not be visualized. Newtonian mechanics is visualizable: each thing in it occupies a particular place at a particular time. Heisenberg thought the attempt to construct a visualizable solution for quantum mechanics might lead to trouble, and so he advised paying attention only to the mathematics. Michael Frayn captures this side of Heisenberg well in his play Copenhagen. When the Bohr character charges that Heisenberg doesn't pay attention to the sense of what he’s doing so long as the mathematics works out, the Heisenberg character indignantly responds, "Mathematics is sense. That's what sense is". Was Heisenberg disturbed by the implications of what he was doing? No. Both he and Bohr were excited about what they had discovered. 
From the very beginning they realized that it had profound philosophical implications, and were thrilled to be able to explore them. Almost immediately both began thinking and writing about the epistemological implications of the uncertainty principle. Was anyone besides Heisenberg and Bohr troubled? The reaction was mixed. Arthur Eddington, an astronomer and science communicator, was thrilled, saying that the epistemological implications of the uncertainty principle heralded a new unification of science, religion, and the arts. The Harvard physicist Percy Bridgman was deeply disturbed, writing that “the bottom has dropped clean out” of the world. He was terrified about its impact on the public. Once the implications sink in, he wrote, it would “let loose a veritable intellectual spree of licentious and debauched thinking.” Did physicists all share the same view of the epistemological implications of quantum mechanics? No, they came up with several different ways to interpret it. As the science historian Don Howard has shown, the notion that the physics community of the day shared a common view, one they called the “Copenhagen interpretation,” is a myth promoted in the 1950s by Heisenberg for his own selfish reasons. How much did the public pay attention to quantum theory before the uncertainty principle? Not much. Newspapers and magazines treated it as something of interest because it excited physicists, but as far too complicated to explain to the public. Even philosophers didn’t see quantum physics as posing particularly interesting or significant philosophical problems. The uncertainty principle’s appearance in 1927 changed that. Suddenly, quantum mechanics was not just another scientific theory – it showed that the quantum world works very differently from the everyday world. How did the uncertainty principle get communicated to a broader public? It took about a year. In August 1927, Heisenberg, who was not yet a celebrity, gave a talk at a meeting of the British Association for the Advancement of Science, but it sailed way over the heads of journalists. The New York Times’s science reporter said trying to explain it to the public was like “trying to tell an Eskimo what the French language is like without talking French.” Then came a piece of luck. Eddington devoted a section to the uncertainty principle in his book The Nature of the Physical World, published in 1928. He was a terrific explainer, and his imagery and language were very influential. How did the public react? Immediately and enthusiastically. A few days after October 29, 1929, the New York Times, tongue-in-cheek, invoked the uncertainty principle as the explanation for the stock market crash. And today? Heisenberg and his principle still feature in popular culture. In fact, thanks to the uncertainty principle, I think I’d argue that Heisenberg has made an even greater impact on popular culture than Einstein. In the American television drama series Breaking Bad, 'Heisenberg' is the pseudonym of the protagonist, a high school chemistry teacher who manufactures and sells the illegal drug crystal methamphetamine. The religious poet Christian Wiman, in his recent book about facing cancer, writes that "to feel enduring love like a stroke of pure luck" amid "the havoc of chance" makes God "the ultimate Uncertainty Principle." In The Ascent of Man, the Polish-British scientist Jacob Bronowski calls the uncertainty principle the Principle of Tolerance. There’s even an entire genre of uncertainty principle jokes. 
A police officer pulls Heisenberg over and says, "Did you know that you were going 90 miles an hour?" Heisenberg says, "Thanks. Now I'm lost." Has the uncertainty principle been used for serious philosophical purposes? Yes. Already in 1929, John Dewey wrote about it to promote his ideas about pragmatism, and in particular his thoughts about the untenability of what he called the “spectator theory of knowledge.” The literary critic George Steiner has used the uncertainty principle to describe the process of literary criticism – how it involves transforming the “object” – that is, text – interpreted, and delivers it differently to the generation that follows. More recently, the Slovene philosopher Slavoj Žižek has devoted attention to the philosophical implications of the uncertainty principle. Some popular culture uses of the uncertainty principle are off the wall. How do you tell meaningful uses from the bogus ones? It’s not easy. Popular culture often uses scientific terms in ways that are pretentious, erroneous, wacky, or unverifiable. It’s nonsense to apply the uncertainty principle to medicines or self-help issues, for instance. But how is that different from Steiner using it to describe the process of literary criticism? Outside of physics, has our knowledge that uncertainty is a feature of the subatomic world, and the uses that it has been put by writers and philosophers, helped to change our worldview in any way? I think so. The contemporary world does not always feel smooth, continuous, and law-governed, like the Newtonian World. Our world instead often feels jittery, discontinuous, and irrational. That has sometimes prompted writers to appeal to quantum imagery and language to describe it. John Updike’s characters, for instance, sometimes appeal to the uncertainty principle, while Updike himself did so in speaking of the contemporary world as full of “gaps, inconsistencies, warps, and bubbles in the surface of circumstance.” Updike and other writers and poets have found this imagery metaphorically apt. The historians Betty Dobbs and Margaret Jacob have remarked that the Newtonian Moment provided “the material and mental universe – industrial and scientific – in which most Westerners and some non-Westerners now live, one aptly described as modernity.” But that universe is changing. Quantum theory showed that at a more fundamental level the world is not Newtonian at all, but governed by notions such as chance, probability, and uncertainty. Robert Crease’s book (with Alfred S. Goldhaber) The Quantum Moment: How Planck, Bohr, Einstein, and Heisenberg Taught Us to Love Uncertainty will be published by Norton in October 2014. Uncertain about uncertainty This is the English version of the cover article (in French) of the latest issue of La Recherche (October). It’s accompanied by an interview that I conducted with Robert Crease about the cultural impact of the uncertainty principle, which I’ll post next. If there’s one thing most people know about quantum physics, it’s that it is uncertain. There’s a fuzziness about the quantum world that prevents us from knowing everything about it with absolute detail and clarity. Almost 90 years ago, the German physicist Werner Heisenberg pointed this out in his famous Uncertainty Principle. Yet over the few years there has been heated debate among physicists about just what Heisenberg meant, and whether he was correct. 
The latest experiments seem to indicate that one version of the Uncertainty Principle presented by Heisenberg might be quite wrong, and that we can get a sharper picture of quantum reality than he thought.

In 1927 Heisenberg argued that we can't measure all the attributes of a quantum particle at the same time and as accurately as we like [1]. In particular, the more we try to pin down a particle's exact location, the less accurately we can measure its speed, and vice versa. There's a precise limit to this certainty, Heisenberg said. If the uncertainty in position is denoted Δx, and the uncertainty in momentum (mass times velocity) is Δp, then their product ΔxΔp can be no smaller than ½ħ, where ħ ("h-bar") is the reduced form of the fundamental constant called Planck's constant, which sets the scale of the 'granularity' of the quantum world – the size of the 'chunks' into which energy is divided.

Where does this uncertainty come from? Heisenberg's reasoning was mathematical, but he felt he needed to give some intuitive explanation too. For something as small and delicate as a quantum particle, he suggested, it is virtually impossible to make a measurement without disturbing and altering what we're trying to measure. If we "look" at an electron by bouncing a photon of light off it in a microscope, that collision will change the path of the electron. The more we try to reduce the intrinsic inaccuracy or "error" of the measurement, say by using a brighter beam of photons, the more we create a disturbance. According to Heisenberg, error (Δe) and disturbance (Δd) are also related by an uncertainty principle in which ΔeΔd can't be smaller than ½ħ.

The American physicist Earle Hesse Kennard showed very soon after Heisenberg's original publication that in fact his thought experiment is superfluous to the issue of uncertainty in quantum theory. The restriction on precise knowledge of both speed and position is an intrinsic property of quantum particles, not a consequence of the limitations of experiments. All the same, might Heisenberg's "experimental" version of the Uncertainty Principle – his relationship between error and disturbance – still be true?

"When we explain the Uncertainty Principle, especially to non-physicists," says physicist Aephraim Steinberg of the University of Toronto in Canada, "we tend to describe the Heisenberg microscope thought experiment." But he says that, while everyone agrees that measurements disturb systems, many physicists no longer think that Heisenberg's equation relating Δe and Δd describes that process adequately.

Japanese physicist Masanao Ozawa of Nagoya University was one of the first to question Heisenberg. In 2003 he argued that it should be possible to defeat the apparent limit on error and disturbance [2]. Ozawa was motivated by a debate that began in the 1980s on the accuracy of measurements of gravity waves, the ripples in spacetime predicted by Einstein's theory of general relativity and expected to be produced by violent astrophysical events such as those involving black holes. No one has yet detected a gravity wave, but the techniques proposed to do so entail measuring the very small distortions in space that will occur when such a wave passes by. These disturbances are so tiny – fractions of the size of atoms – that at first glance the Uncertainty Principle would seem to determine if they are feasible at all.
In other words, the accuracy demanded in some modern experiments like this means that this question of how measurement disturbs the system has real, practical ramifications. In 1983 Horace Yuen of Northwestern University in Illinois suggested that, if the gravitational-wave measurement were done in a way that barely disturbed the detection system at all, the apparently fundamental limit on accuracy dictated by Heisenberg’s error-disturbance relation could be beaten. Others disputed that idea, but Ozawa defended it. This led him to reconsider the general question of how experimental error is related to the degree of disturbance it involves, and in his 2003 paper he proposed a new relationship between these two quantities in which two other terms, A and B, were added to the equation. In other words, ΔeΔd + A + B ≥ ½ħ, so that ΔeΔd itself could be smaller than ½ħ without violating the limit. Last year, Cyril Branciard of the University of Queensland in Australia (now at the CNRS Institut Néel at Grenoble) tightened up Ozawa’s new uncertainty equation [3]. “I asked whether all values of Δe and Δd that satisfy his relation are allowed, or whether there could be some values that are nevertheless still forbidden by quantum theory”, Branciard explains. “I showed that there are actually more values that are forbidden. In other words, Ozawa's relation is ‘too weak’.” But Ozawa’s relationship had by then already been shown to give an adequate account of uncertainty for most purposes, since in 2012 it was put to the test experimentally by two teams [4,5]. Steinberg and his coworkers in Toronto figured out how to measure the quantities in Ozawa’s equation for photons of infrared laser light travelling along optical fibres and being sensed by detectors. They used a way of detecting the photons that perturbed their state as little as possible, and found that they could indeed beat the limit on precision and disturbance proposed by Heisenberg, but not that of Ozawa. Meanwhile, Ozawa himself teamed up with a group at the Vienna University of Technology led by Yuji Hasegawa, who made measurements on the quantum properties of a beam of neutrons passing through a series of detectors. They too found that the measurements could violate the Heisenberg limit but not Ozawa’s. Very recent experiments have confirmed that conclusion with still greater accuracy, verifying Branciard’s relationships too [6,7]. Branciard himself was a collaborator on one of those studies, and he says that “experimentally we could get very close indeed to the bounds imposed by my relations.” Doesn’t this prove that Heisenberg was wrong about how error is connected to disturbance in experimental measurements? Not necessarily. Last year, a team of European researchers claimed to have a theoretical proof that in fact this version of Heisenberg’s Uncertainty Principle is correct after all [8]. They argued that Ozawa’s theory, and the experiments testing it, were using the wrong definitions of error. So they might be correct in their own terms, but weren’t really saying anything about Heisenberg’s error-disturbance principle. As team member Paul Busch of the University of York in England puts it, “Ozawa effectively proposed a wrong relationship between his own definitions of error and disturbance, wrongly ascribed it to Heisenberg, then showed how to fix it.” So Heisenberg was correct after all in the limits he set on the tradeoff, argues Busch: “if the error is kept small, the disturbance must be large.” Who is right?
It seems to depend on exactly how you pose the question. What, after all, does measurement error mean? If you make a single measurement, there will be some random error that reflects the limits on the accuracy of your technique. But that’s why experimentalists typically make many measurements on the same system, so that you average out some of the randomness. Yet surely, some argue, the whole spirit of Heisenberg’s original argument was about making measurements of different properties on a particular, single quantum object, not averages for a whole bunch of such objects? It now seems that Heisenberg’s limit on how small the combined uncertainty can be for error and disturbance holds true if you think about averages of many measurements, but that Ozawa’s smaller limit applies if you think about particular quantum states. In the first case you’re effectively measuring something like the “disturbing power” of a specific instrument; in the second case you’re quantifying how much we can know about an individual state. So whether Heisenberg was right or not depends on what you think he meant (and perhaps on whether you think he even recognized the difference). As Steinberg explains, Busch and colleagues “are really asking how much a particular measuring apparatus is capable of disturbing a system, and they show that they get an equation that looks like the familiar Heisenberg form. We think it is also interesting to ask, as Ozawa did, how much the measuring apparatus disturbs one particular system. Then the less restrictive Ozawa-Branciard relations apply.” Branciard agrees with Steinberg that this isn’t a question of who’s right and who’s wrong, but just a matter of how you make your definitions. “The two approaches simply address different questions. They each argue that the problem they address was probably the one Heisenberg had in mind. But Heisenberg was simply not clear enough on what he had in mind, and it is always dangerous to put words in someone else's mouth. I believe both questions are interesting and worth studying.” There’s a broader moral to be drawn, for the debate has highlighted how quantum theory is no longer perceived to reveal an intrinsic fuzziness in the microscopic world. Rather, what the theory can tell you depends on what exactly you want to know and how you intend to find out about it. It suggests that “quantum uncertainty” isn’t some kind of resolution limit, like the point at which objects in a microscope look blurry, but is to some degree chosen by the experimenter. This fits well with the emerging view of quantum theory as, at root, a theory about information and how to access it. In fact, recent theoretical work by Ozawa and his collaborators turns the error-disturbance relationship into a question about the cost of gaining information about one property of a quantum system on the other properties of that system [9]. It’s a little like saying that you begin with a box that you know is red and think weighs one kilogram – but if you want to check that weight exactly, you weaken the link to redness, so that you can’t any longer be sure that the box you’re weighing is a red one. The weight and the colour start to become independent pieces of information about the box. If this seems hard to intuit, that’s just a reflection of how interpretations of quantum theory are starting to change. It appears to be telling us that what we can know about the world depends on how we ask. To that extent, then, we choose what kind of a world we observe. 
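None of this disturbs the uncontested "preparation" version of uncertainty that Kennard proved – the relation between the spreads Δx and Δp of a single quantum state. As a minimal numerical sketch (illustrative only, with arbitrary units and grid choices, and no connection to the experiments described above), the following Python snippet builds a Gaussian wave packet and checks that the product of its position and momentum spreads sits right at ħ/2:

```python
import numpy as np

hbar = 1.0                        # illustrative natural units
N, L = 4096, 200.0                # grid points and box length (arbitrary choices)
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]

sigma = 2.0                       # chosen width of the Gaussian packet
psi = np.exp(-x**2 / (4 * sigma**2))
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)        # normalise the state

# Spread in position, straight from |psi(x)|^2
prob_x = np.abs(psi)**2
mean_x = np.sum(x * prob_x) * dx
sig_x = np.sqrt(np.sum((x - mean_x)**2 * prob_x) * dx)

# Spread in momentum, from the Fourier transform of psi
p = 2 * np.pi * hbar * np.fft.fftfreq(N, d=dx)
dp = 2 * np.pi * hbar / (N * dx)
prob_p = np.abs(np.fft.fft(psi))**2
prob_p /= np.sum(prob_p) * dp                      # normalise |phi(p)|^2
mean_p = np.sum(p * prob_p) * dp
sig_p = np.sqrt(np.sum((p - mean_p)**2 * prob_p) * dp)

print(f"sigma_x * sigma_p = {sig_x * sig_p:.4f}  (Kennard bound: {hbar / 2})")
# A Gaussian saturates the bound, so this prints a value very close to 0.5.
```

Narrow the packet in position and it broadens in momentum; that trade-off is the part of the story no one in this debate disputes.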
The issue isn’t just academic, since an approach to quantum theory in which quantum states are considered to encode information is now starting to produce useful technologies, such as quantum cryptography and the first prototype quantum computers. “Deriving uncertainty relations for error-disturbance or for joint measurement scenarios using information-theoretical definitions of errors and disturbance has a great potential to be useful for proving the security of cryptographic protocols, or other information-processing applications”, says Branciard. “This is a very interesting and timely line of research.”

3. C. Branciard, Proc. Natl. Acad. Sci. U.S.A. 110, 6742 (2013).
4. J. Erhart, S. Sponar, G. Sulyok, G. Badurek, M. Ozawa & Y. Hasegawa, Nat. Phys. 8, 185 (2012).
5. L. A. Rozema, A. Darabi, D. H. Mahler, A. Hayat, Y. Soudagar & A. M. Steinberg, Phys. Rev. Lett. 109, 100404 (2012).
6. F. Kaneda, S.-Y. Baek, M. Ozawa & K. Edamatsu, Phys. Rev. Lett. 112, 020402 (2014).
7. M. Ringbauer, D. N. Biggerstaff, M. A. Broome, A. Fedrizzi, C. Branciard & A. G. White, Phys. Rev. Lett. 112, 020401 (2014).
9. F. Buscemi, M. J. W. Hall, M. Ozawa & M. M. Wilde, Phys. Rev. Lett. 112, 050401 (2014).

Tuesday, October 07, 2014

Waiting for the green (and blue) light

This was intended as a "first response" to the Nobel announcement this morning, destined for the Prospect blog. But as it can take a little while for things to appear there, here it is anyway while the news is still ringing in the air. I'm delighted by the choice.

Did you notice when traffic lights began to change colour? The green “go” light was once a yellowish pea green, but today it has a turquoise hue. And whereas the lights would switch with a brief moment of fading up and down, now they blink on and off in an instant. I will be consigning myself to the farthest reaches of geekdom by admitting this, but I used to feel a surge of excitement whenever, a decade or so ago, I noticed these new-style traffic lights. That’s because I knew I was witnessing the birth of a new age of light technology. Even if traffic lights didn’t press your buttons, the chances are that you felt the impact of the same innovations in other ways, most notably when the definition of your DVD player got a boost from the introduction of Blu-Ray technology, which happened about a decade ago. What made the difference was the development of a material that could be electrically stimulated into emitting bright blue light: the key component of blue light-emitting diodes (LEDs), used in traffic lights and other full-colour signage displays, and of lasers, which read the information on Blu-Ray DVDs. It’s for such reasons that this year’s Nobel laureates in physics have genuinely changed the world. Japanese scientists Isamu Akasaki, Hiroshi Amano and Shuji Nakamura only perfected the art of making blue-light-emitting semiconductor devices in the 1990s, and as someone who watched that happen I still feel astonished at how quickly this research progressed from basic lab work to a huge commercial technology. By adding blue (and greenish-blue) to the spectrum of available colours, these Japanese researchers have transformed LED displays from little glowing dots that simply told you if the power was on or off to full-colour screens in which the old red-green-blue system of colour televisions, previously produced by firing electron beams at phosphor materials on the screen, can now be achieved instead with compact, low-power and ultra-bright electronics.
It’s because LEDs need much less power than conventional incandescent light bulbs that the invention of blue LEDs is ultimately so important. Sure, they also switch faster, last longer and break less easily than old-style bulbs – you’ll see fewer out-of-service traffic lights these days – but the low power requirements (partly because far less energy is wasted as heat) mean that LED light sources are also good for the environment. Now that they can produce blue light too, it’s possible to make white-light sources from a red-green-blue combination that can act as regular lighting sources for domestic and office use. What’s more, that spectral mixture can be tuned to simulate all kinds of lighting conditions, mimicking daylight, moonlight, candle-light or an ideal spectrum for plant growth in greenhouses. The recent Making Colour exhibition at the National Gallery in London featured a state-of-the-art LED lighting system to show how different the hues of a painting can seem under different lighting conditions. As with so many technological innovations, the key was finding the right material. Light-emitting diodes are made from semiconductors that convert electrical current into light. Silicon is no good at doing this, which is why it has been necessary to search out other semiconductors that are relatively inexpensive and compatible with the silicon circuitry on which all microelectronics is based. For red and yellow-green light that didn’t prove so hard: semiconductors such as gallium arsenide and gallium aluminium arsenide have been used since the 1960s for making LEDs and semiconductor lasers for optical telecommunications. But getting blue light from a semiconductor proved much more elusive. From the available candidates around the early 1990s, both Akasaki and Amano at Nagoya University and Nakamura at the chemicals company Nichia put their faith in a material called gallium nitride. It seemed clear that this stuff could be made to emit light at blue wavelengths, but the challenge was to grow crystals of sufficient quality to do that efficiently – if there were impurities or flaws in the crystal, it wouldn’t work well enough. Challenges of this kind are typically an incremental business rather than a question of some sudden breakthrough: you have to keep plugging away and refining your techniques, improving the performance of your system little by little. Nakamura’s case is particularly appealing because Nichia was a small, family-run company on the island of Shikoku, generally considered a rural backwater – not the kind of place you would expect to beat the giants of Silicon Valley in a race for such a lucrative goal. It was his conviction that gallium nitride really was the best material for the job that kept him going. The Nobel committee has come up trumps here – it’s a choice that rewards genuinely innovative and important work, which no one will grumble about, and which in retrospect seems obvious. And it’s a reminder that physics is everywhere, not just in CERN and deep space.
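One way to see, at least roughly, why gallium nitride was the target is the textbook relation λ = hc/E between a semiconductor's band gap and the wavelength of light it can emit. The sketch below uses approximate, illustrative band-gap figures (assumptions for the purpose of the example, not numbers from the laureates' papers): a gap of about 3.4 eV for GaN lands in the near-ultraviolet/violet, and alloying with indium narrows the gap into the blue around 450 nm.

```python
# Band gap (eV) -> emission wavelength (nm) via lambda = h*c / E_gap.
H_EV_S = 4.135667696e-15     # Planck constant in eV.s
C_NM_S = 2.99792458e17       # speed of light in nm/s

def emission_wavelength_nm(band_gap_ev):
    """Photon wavelength for recombination across the gap (idealised)."""
    return H_EV_S * C_NM_S / band_gap_ev

# Approximate, illustrative band-gap values (not data from the prize-winning work).
for name, gap_ev in [("GaAs (infrared LEDs)", 1.42),
                     ("GaN (near-ultraviolet/violet)", 3.4),
                     ("InGaN alloy tuned for blue", 2.75)]:
    print(f"{name}: ~{emission_wavelength_nm(gap_ev):.0f} nm")
```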
I had no problem applying Noether's theorem for translations to the non-relativistic Schrödinger equation

$\mathrm i\hbar\frac{\partial}{\partial t}\psi(\mathbf{r},t) \;=\; \left(- \frac{\hbar^2}{2m}\Delta + V(\mathbf{r},t)\right)\psi(\mathbf{r},t)$

$\Longrightarrow\ \mathcal{L}\left(\psi, \mathbf{\nabla}\psi, \dot{\psi}\right) = \mathrm i\hbar\, \frac{1}{2} (\psi^{*}\dot{\psi}-\dot{\psi^{*}}\psi) - \frac{\hbar^2}{2m} \mathbf{\nabla}\psi^{*} \mathbf{\nabla}\psi - V( \mathbf{r},t)\,\psi^{*}\psi$

$\Longrightarrow\ \pi=\frac{\partial \mathcal{L}}{\partial \dot{\psi}} \propto \psi^{*}$

$T[\psi]\propto \mathbf{\nabla} \psi$

$\Longrightarrow\ I_{\ \psi,{\ T_\text{(translation)}}}=\int\text d^3x\ \pi\ T[\psi]\propto \int\text d^3x\ \psi^{*} \mathbf{\nabla} \psi = \langle P \rangle_\psi$

But I actually wonder why that works out, given that the Schrödinger equation is not invariant under Galilean transformations. It might well be that the Schrödinger group, which I'm not familiar with, is close enough to the Galilean group, that the fourth line $T[\psi]\propto \mathbf{\nabla} \psi$ is just the same and that's the reason. I'd like to know if the evaluation of the infinitesimal transformation is the only point at which one has to know the transformations one is actually dealing with. Is my guess right? Also, regarding the "trick" to establish Galilei invariance after the conventional transformation via multiplication of the Schrödinger field by a phase (a phase which, among other things, is mass dependent): Some authors change $\psi(r,t)$ to $\psi(r',t')=\psi(r-vt,t)$, like here in the paper referenced on wikipedia (there is also a two year old version of it online (google)), but other authors, like the writers of the page in the first link, also transform $p$ to $p+mv$ in $\phi$ (which doesn't change the fact that they still have to add a phase). This is all before the phase multiplication. So what is the "right way" here? If I do these transformations involving a multiplication by a phase, do I only transform the actual arguments of the scalar field $\psi(r,t)$ or do I also transform the objects like $p$, which classically transform too, but are really just parameters (and the eigenvalues) of the field - and not arguments?

The Schrödinger equation changes form under Galilean transformations, but it is invariant in a quantum sense under these, since you cancel out the change with a phase factor. I wonder why you are confused, because translations and Galilean transformations are both mathematically and logically independent--- you can make a translation symmetry ignoring Galilean symmetry, like in a crystal, where you have discrete translations and no boosts, or in He4, where you have continuous translation symmetry but again no boosts. –  Ron Maimon Aug 10 '12 at 19:26

@RonMaimon: You're right, I just did the computation for the translations (because that's easy) and here I was just assuming there is some conserved quantity for boosts as well. Is that not the case? And furthermore, are there interesting conserved quantities via Noether due to the new symmetry group (the Schrödinger symmetries)? –  NikolajK Aug 11 '12 at 0:32

Yes, there are further non-obvious conserved quantities, the location of the center of mass. This shows up as phase relations in scattering, and in separation theorems, like the reduced-mass/total-mass decomposition for the two-body problem.
The center of mass law is independent of the conservation of momentum, although this is counterintuitive. Is this your question? I will answer this way, but it's not clear from what you ask. –  Ron Maimon Aug 11 '12 at 3:57

1 Answer

The conserved quantity corresponding to translation is the generator of translations. This is P, and you can see this because $e^{iPa}$ acting on a state $|x\rangle$ produces $|x+a\rangle$. By P-X symmetry, the operator $X$ generates translations in $P$, so that $e^{iXa}$ takes $|p\rangle$ to $|p-a\rangle$ (the minus sign is dictated by the orientation of the phase space, but you can also explicitly see it from the usual form of the X,P operators). So the naive generator of boosts is

$$ mvX$$

because this shifts the momentum by $mv$. But this is nonsense, because it doesn't commute with H! So it is not a symmetry. But the reason is that you need a time-dependent phase factor to fix the phase space. Once you do this, the correct conserved quantity B is

$$ vB = v(mX - Pt)$$

which shifts the momentum eigenstates by $mv$ and multiplies by an additional phase. The quantity $mX - Pt$ is the additional conservation law for boost invariance, and it is the location of the center of mass. For several particles, the generator of boosts is:

$$ {\sum_i m_i X_i - Pt} $$

which shifts each of the momenta by $m_i v$, and corrects by a total phase. The Hamiltonian

$$ {p^2\over 2} + {p^4\over 4} + V(x) $$

is an example of an H that is not boost invariant but is translation invariant. Motion in this H doesn't conserve center of mass, but conserves momentum. Another example is a crystal, where the p-dependence goes like $1-\cos(p)$, so again, you have translation invariance (discrete translation invariance--- p is periodic), but no boost invariance. In the crystal case, boost invariance is an accidental symmetry at low p. To see how boosts work in the Lagrangian picture, look here: Galilean invariance of classical Lagrangian.
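A quick check, assuming the free single-particle Hamiltonian $H = P^2/2m$ and restoring $\hbar$: the boost generator $B = mX - Pt$ has an explicit time dependence that exactly cancels its commutator with $H$, so it is conserved even though it does not commute with $H$.

```latex
% Conservation of B = mX - Pt for H = P^2/(2m), via the Heisenberg equation
%   dB/dt = \partial B/\partial t + (i/\hbar)[H, B]
\begin{align*}
[H, X] &= \tfrac{1}{2m}\,[P^2, X]
        = \tfrac{1}{2m}\bigl(P[P,X] + [P,X]P\bigr)
        = -\tfrac{i\hbar}{m}\,P,\\
\tfrac{i}{\hbar}\,[H, B] &= \tfrac{i}{\hbar}\,[H,\; mX - Pt]
        = \tfrac{i}{\hbar}\, m\!\left(-\tfrac{i\hbar}{m}\,P\right) = P,\\
\frac{dB}{dt} &= \frac{\partial B}{\partial t} + \tfrac{i}{\hbar}\,[H, B]
        = -P + P = 0.
\end{align*}
```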
From Wikipedia, the free encyclopedia   (Redirected from Energy (chemistry)) Jump to: navigation, search This article is about the scalar physical quantity. For an overview of and topical guide to energy, see Outline of energy. For other uses, see Energy (disambiguation). "Energetic" redirects here. For other uses, see Energetic (disambiguation). "Effort" redirects here. For other uses, see Effortfulness. Energy transformation; In a typical lightning strike, 500 megajoules of electric potential energy is converted into the same amount of energy in other forms, most notably light energy, sound energy and thermal energy. In physics, energy is a property of objects, transferable among them via fundamental interactions, which can be converted in form but not created or destroyed. The joule is the SI unit of energy, based on the amount transferred to an object by the mechanical work of moving it 1 metre against a force of 1 newton.[1] Work and heat are two categories of processes or mechanisms that can transfer a given amount of energy. The second law of thermodynamics limits the amount of work that can be performed by energy that is obtained via a heating process—some energy is always lost as waste heat. The maximum amount that can go into work is called the available energy. Systems such as machines and living things often require available energy, not just any energy. Mechanical and other forms of energy can be transformed in the other direction into thermal energy without such limitations. There are many forms of energy, but all these types must meet certain conditions such as being convertible to other kinds of energy, obeying conservation of energy, and causing a proportional change in mass in objects that possess it. Common energy forms include the kinetic energy of a moving object, the radiant energy carried by light and other electromagnetic radiation, the potential energy stored by virtue of the position of an object in a force field such as a gravitational, electric or magnetic field, and the thermal energy comprising the microscopic kinetic and potential energies of the disordered motions of the particles making up matter. Some specific forms of potential energy include elastic energy due to the stretching or deformation of solid objects and chemical energy such as is released when a fuel burns. Any object that has mass when stationary, such as a piece of ordinary matter, is said to have rest mass, or an equivalent amount of energy whose form is called rest energy, though this isn't immediately apparent in everyday phenomena described by classical physics. According to mass–energy equivalence, all forms of energy (not just rest energy) exhibit mass. For example, adding 25 kilowatt-hours (90 megajoules) of energy to an object in the form of heat (or any other form) increases its mass by 1 microgram; if you had a sensitive enough mass balance or scale, this mass increase could be measured. Our Sun transforms nuclear potential energy to other forms of energy; its total mass does not decrease due to that in itself (since it still contains the same total energy even if in different forms), but its mass does decrease when the energy escapes out to its surroundings, largely as radiant energy. Although any energy in any single form can be transformed into another form, the law of conservation of energy states that the total energy of a system can only change if energy is transferred into or out of the system. This means that it is impossible to create or destroy energy. 
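As a quick check of the figure quoted above – 25 kilowatt-hours (90 megajoules) of added energy corresponding to roughly one microgram of added mass – here is a short Python sketch applying E = mc² with the standard conversion 1 kWh = 3.6 MJ:

```python
# Sanity check: 25 kWh of added energy <-> roughly 1 microgram of added mass.
c = 2.99792458e8              # speed of light, m/s
energy_joules = 25 * 3.6e6    # 25 kWh, with 1 kWh = 3.6 MJ
mass_kg = energy_joules / c**2
print(f"{energy_joules:.2e} J corresponds to {mass_kg * 1e9:.2f} micrograms")
# -> 9.00e+07 J corresponds to about 1.00 micrograms
```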
The total energy of a system can be calculated by adding up all forms of energy in the system. Examples of energy transfer and transformation include generating or making use of electric energy, performing chemical reactions, or lifting an object. Lifting against gravity performs work on the object and stores gravitational potential energy; if it falls, gravity does work on the object which transforms the potential energy to the kinetic energy associated with its speed. More broadly, living organisms require available energy to stay alive; humans get such energy from food along with the oxygen needed to metabolize it. Civilisation requires a supply of energy to function; energy resources such as fossil fuels are a vital topic in economics and politics. Earth's climate and ecosystem are driven by the radiant energy Earth receives from the sun (as well as the geothermal energy contained within the earth), and are sensitive to changes in the amount received. The word "energy" is also used outside of physics in many ways, which can lead to ambiguity and inconsistency. The vernacular terminology is not consistent with technical terminology. For example, while energy is always conserved (in the sense that the total energy does not change despite energy transformations), energy can be converted into a form, e.g., thermal energy, that cannot be utilized to perform work. When one talks about "conserving energy by driving less", one talks about conserving fossil fuels and preventing useful energy from being lost as heat. This usage of "conserve" differs from that of the law of conservation of energy.[2]

Main article: Forms of energy

The total energy of a system can be subdivided and classified in various ways. For example, classical mechanics distinguishes between kinetic energy, which is determined by an object's movement through space, and potential energy, which is a function of the position of an object within a field. It may also be convenient to distinguish gravitational energy, thermal energy, several types of nuclear energy (which utilize potentials from the nuclear force and the weak force), electric energy (from the electric field), and magnetic energy (from the magnetic field), among others. Many of these classifications overlap; for instance, thermal energy usually consists partly of kinetic and partly of potential energy. Some types of energy are a varying mix of both potential and kinetic energy. An example is mechanical energy which is the sum of (usually macroscopic) kinetic and potential energy in a system. Elastic energy in materials is also dependent upon electrical potential energy (among atoms and molecules), as is chemical energy, which is stored and released from a reservoir of electrical potential energy between electrons, and the molecules or atomic nuclei that attract them.[need quotation to verify] The list is also not necessarily complete. Whenever physical scientists discover that a certain phenomenon appears to violate the law of energy conservation, new forms are typically added that account for the discrepancy. The distinctions between different kinds of energy are not always clear-cut.
As Richard Feynman points out:

Some examples of different kinds of energy:

Forms of energy
Type of energy – Description
Kinetic – (≥0), that of the motion of a body
Potential – a category comprising many forms in this list
Mechanical – the sum of (usually macroscopic) kinetic and potential energies
Mechanical wave – (≥0), a form of mechanical energy propagated by a material's oscillations
Chemical – that contained in molecules
Electric – that from electric fields
Magnetic – that from magnetic fields
Radiant – (≥0), that of electromagnetic radiation including light
Nuclear – that of binding nucleons to form the atomic nucleus
Ionization – that of binding an electron to its atom or molecule
Elastic – that of deformation of a material (or its container) exhibiting a restorative force
Gravitational – that from gravitational fields
Intrinsic (rest energy) – (≥0), that equivalent to an object's rest mass
Thermal – a microscopic, disordered equivalent of mechanical energy
Heat – an amount of thermal energy being transferred (in a given process) in the direction of decreasing temperature
Mechanical work – an amount of energy being transferred in a given process due to displacement in the direction of an applied force

Thomas Young – the first to use the term "energy" in the modern sense.

The word energy derives from the Ancient Greek: ἐνέργεια energeia “activity, operation”,[3] which possibly appears for the first time in the work of Aristotle in the 4th century BC. In contrast to the modern definition, energeia was a qualitative philosophical concept, broad enough to include ideas such as happiness and pleasure. In the late 17th century, Gottfried Leibniz proposed the idea of the Latin: vis viva, or living force, which he defined as the product of the mass of an object and its velocity squared; he believed that total vis viva was conserved. To account for slowing due to friction, Leibniz theorized that thermal energy consisted of the random motion of the constituent parts of matter, a view shared by Isaac Newton, although it would be more than a century until this was generally accepted. The modern analog of this property, kinetic energy, differs from vis viva only by a factor of two. In 1807, Thomas Young was possibly the first to use the term "energy" instead of vis viva, in its modern sense.[4] Gustave-Gaspard Coriolis described "kinetic energy" in 1829 in its modern sense, and in 1853, William Rankine coined the term "potential energy". The law of conservation of energy was also first postulated in the early 19th century, and applies to any isolated system. According to Noether's theorem, the conservation of energy is a consequence of the fact that the laws of physics do not change over time.[5] It was argued for some years whether heat was a physical substance, dubbed the caloric, or merely a physical quantity, such as momentum. In 1845 James Prescott Joule discovered the link between mechanical work and the generation of heat. These developments led to the theory of conservation of energy, formalized largely by William Thomson (Lord Kelvin) as the field of thermodynamics. Thermodynamics aided the rapid development of explanations of chemical processes by Rudolf Clausius, Josiah Willard Gibbs, and Walther Nernst. It also led to a mathematical formulation of the concept of entropy by Clausius and to the introduction of laws of radiant energy by Jožef Stefan.
Since 1918, theorists have understood that the law of conservation of energy is the direct mathematical consequence of the translational symmetry of the quantity conjugate to energy, namely time.

Measurement and units

A schematic diagram of a calorimeter – an instrument used by physicists to measure energy. In this example it is measuring X-rays.

Main article: Units of energy

Energy, like mass, is a scalar physical quantity. The joule is the International System of Units (SI) unit of measurement for energy. It is a derived unit of energy, work, or amount of heat. It is equal to the energy expended (or work done) in applying a force of one newton through a distance of one metre. However, energy is also expressed in many other units, such as ergs, calories, British Thermal Units, kilowatt-hours and kilocalories. There is always a conversion factor for these to the SI unit; for instance, one kWh is equivalent to 3.6 million joules.[6] The SI unit of power (energy per unit time) is the watt, which is simply a joule per second. Thus, a joule is a watt-second, so 3600 joules equal a watt-hour. The CGS energy unit is the erg, and the imperial and US customary unit is the foot pound. Other energy units such as the electron volt, food calorie or thermodynamic kcal (based on the temperature change of water in a heating process), and BTU are used in specific areas of science and commerce and have unit conversion factors relating them to the joule. Because energy is defined as the ability to do work on objects, there is no absolute measure of energy. Only the transition of a system from one state into another can be defined and thus energy is measured in relative terms. The choice of a baseline or zero point is often arbitrary and can be made in whatever way is most convenient for a problem. For example, in the case of measuring the energy deposited by X-rays, as shown in the accompanying diagram, the technique most often employed is calorimetry. This is a thermodynamic technique that relies on the measurement of temperature using a thermometer or of intensity of radiation using a bolometer. Energy density is a term used for the amount of useful energy stored in a given system or region of space per unit volume. For fuels, the energy per unit volume is sometimes a useful parameter. In a few applications, such as comparing the effectiveness of hydrogen fuel to gasoline, it turns out that hydrogen has a higher specific energy than gasoline but, even in liquid form, a much lower energy density.

Scientific use

Classical mechanics

Work, a form of energy, is force times distance.

W = \int_C \mathbf{F} \cdot \mathrm{d} \mathbf{s}

This says that the work (W) is equal to the line integral of the force F along a path C; for details see the mechanical work article. Work and thus energy is frame dependent. For example, consider a ball being hit by a bat. In the center-of-mass reference frame, the bat does no work on the ball. But, in the reference frame of the person swinging the bat, considerable work is done on the ball. The total energy of a system is sometimes called the Hamiltonian, after William Rowan Hamilton. The classical equations of motion can be written in terms of the Hamiltonian, even for highly complex or abstract systems. These classical equations have remarkably direct analogs in nonrelativistic quantum mechanics.[7] Another energy-related concept is called the Lagrangian, after Joseph-Louis Lagrange.
This is even more fundamental than the Hamiltonian, and can be used to derive the equations of motion. It was invented in the context of classical mechanics, but is generally useful in modern physics. The Lagrangian is defined as the kinetic energy minus the potential energy. Usually, the Lagrange formalism is mathematically more convenient than the Hamiltonian for non-conservative systems (such as systems with friction). Noether's theorem (1918) states that any differentiable symmetry of the action of a physical system has a corresponding conservation law. Noether's theorem has become a fundamental tool of modern theoretical physics and the calculus of variations. A generalisation of the seminal formulations on constants of motion in Lagrangian and Hamiltonian mechanics (1788 and 1833, respectively), it does not apply to systems that cannot be modeled with a Lagrangian; for example, dissipative systems with continuous symmetries need not have a corresponding conservation law.

Main articles: Bioenergetics and Food energy

Basic overview of energy and human life.

Any living organism relies on an external source of energy—radiation from the Sun in the case of green plants; chemical energy in some form in the case of animals—to be able to grow and reproduce. The daily 1500–2000 Calories (6–8 MJ) recommended for a human adult are taken as a combination of oxygen and food molecules, the latter mostly carbohydrates and fats, of which glucose (C6H12O6) and stearin (C57H110O6) are convenient examples. The food molecules are oxidised to carbon dioxide and water in the mitochondria

C6H12O6 + 6O2 → 6CO2 + 6H2O
C57H110O6 + 81.5O2 → 57CO2 + 55H2O

and some of the energy is used to convert ADP into ATP

ADP + HPO42− → ATP + H2O

The rest of the chemical energy in the carbohydrate or fat is converted into heat: the ATP is used as a sort of "energy currency", and some of the chemical energy it contains, when split and reacted with water, is used for other metabolism (at each stage of a metabolic pathway, some chemical energy is converted into heat). Only a tiny fraction of the original chemical energy is used for work:[10]

gain in kinetic energy of a sprinter during a 100 m race: 4 kJ
gain in gravitational potential energy of a 150 kg weight lifted through 2 metres: 3 kJ
daily food intake of a normal adult: 6–8 MJ

It would appear that living organisms are remarkably inefficient (in the physical sense) in their use of the energy they receive (chemical energy or radiation), and it is true that most real machines manage higher efficiencies. In growing organisms the energy that is converted to heat serves a vital purpose, as it allows the organism tissue to be highly ordered with regard to the molecules it is built from. The second law of thermodynamics states that energy (and matter) tends to become more evenly spread out across the universe: to concentrate energy (or matter) in one specific place, it is necessary to spread out a greater amount of energy (as heat) across the remainder of the universe ("the surroundings").[11] Simpler organisms can achieve higher energy efficiencies than more complex ones, but the complex organisms can occupy ecological niches that are not available to their simpler brethren.
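The dietary figures above are essentially unit conversions (one dietary Calorie is 1 kcal ≈ 4.2 kJ). A short, purely illustrative sketch reproduces the 6–8 MJ range and shows what a small fraction of it the sprinter's 4 kJ of kinetic energy represents:

```python
# Rough conversions behind the dietary figures above (illustrative only).
KCAL_TO_J = 4184.0                         # one dietary Calorie = 1 kcal

low_kcal, high_kcal = 1500, 2000           # recommended daily intake, kcal
low_mj = low_kcal * KCAL_TO_J / 1e6
high_mj = high_kcal * KCAL_TO_J / 1e6
print(f"Daily intake: {low_mj:.1f} - {high_mj:.1f} MJ")        # ~6.3 - 8.4 MJ

sprint_kinetic_j = 4.0e3                   # sprinter's kinetic energy, ~4 kJ
fraction = sprint_kinetic_j / (high_mj * 1e6)
print(f"Sprint kinetic energy is about {100 * fraction:.2f}% of a day's food energy")
```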
The conversion of a portion of the chemical energy to heat at each step in a metabolic pathway is the physical reason behind the pyramid of biomass observed in ecology: to take just the first step in the food chain, of the estimated 124.7 Pg/a of carbon that is fixed by photosynthesis, 64.3 Pg/a (52%) are used for the metabolism of green plants,[12] i.e. reconverted into carbon dioxide and heat.

Earth sciences

In geology, continental drift, mountain ranges, volcanoes, and earthquakes are phenomena that can be explained in terms of energy transformations in the Earth's interior,[13] while meteorological phenomena like wind, rain, hail, snow, lightning, tornadoes and hurricanes are all a result of energy transformations brought about by solar energy on the atmosphere of the planet Earth.

Quantum mechanics

Main article: Energy operator

In quantum mechanics, energy is defined in terms of the energy operator as a time derivative of the wave function. The Schrödinger equation equates the energy operator to the full energy of a particle or a system. Its results can be considered as a definition of measurement of energy in quantum mechanics. The Schrödinger equation describes the space- and time-dependence of the slowly changing (non-relativistic) wave function of quantum systems. The solution of this equation for a bound system is discrete (a set of permitted states, each characterized by an energy level), which results in the concept of quanta. In the solution of the Schrödinger equation for any oscillator (vibrator) and for electromagnetic waves in a vacuum, the resulting energy states are related to the frequency by Planck's relation: E = h\nu (where h is Planck's constant and \nu the frequency). In the case of an electromagnetic wave these energy states are called quanta of light or photons.

When calculating kinetic energy (work to accelerate a mass from zero speed to some finite speed) relativistically – using Lorentz transformations instead of Newtonian mechanics – Einstein discovered an unexpected by-product of these calculations to be an energy term which does not vanish at zero speed. He called it rest mass energy – energy which every mass must possess even when being at rest. The amount of energy is directly proportional to the mass of the body:

E = m c^2,

where m is the mass, c is the speed of light in vacuum, and E is the rest mass energy. For example, consider electron–positron annihilation, in which the rest mass of individual particles is destroyed, but the inertia equivalent of the system of the two particles (its invariant mass) remains (since all energy is associated with mass), and this inertia and invariant mass is carried off by photons which individually are massless, but as a system retain their mass. This is a reversible process – the inverse process is called pair creation – in which the rest mass of particles is created from the energy of two (or more) annihilating photons. In this system the matter (electrons and positrons) is destroyed and changed to non-matter energy (the photons). However, the total system mass and energy do not change during this interaction. In general relativity, the stress–energy tensor serves as the source term for the gravitational field, in rough analogy to the way mass serves as the source term in the non-relativistic Newtonian approximation.[14] It is not uncommon to hear that energy is "equivalent" to mass.
It would be more accurate to state that every energy has an inertia and gravity equivalent, and because mass is a form of energy, then mass too has inertia and gravity associated with it. In classical physics, energy is a scalar quantity, the canonical conjugate to time. In special relativity energy is also a scalar (although not a Lorentz scalar but a time component of the energy–momentum 4-vector).[14] In other words, energy is invariant with respect to rotations of space, but not invariant with respect to rotations of space-time (= boosts). Main article: Energy transformation Energy may be transformed between different forms at various efficiencies. Items that transform between these forms are called transducers. Examples of transducers include a battery, from chemical energy to electric energy; a dam: gravitational potential energy to kinetic energy of moving water (and the blades of a turbine) and ultimately to electric energy through an electric generator. There are strict limits to how efficiently energy can be converted into other forms of energy via work, and heat as described by Carnot's theorem and the second law of thermodynamics. These limits are especially evident when an engine is used to perform work. However, some energy transformations can be quite efficient. The direction of transformations in energy (what kind of energy is transformed to what other kind) is often determined by entropy (equal energy spread among all available degrees of freedom) considerations. In practice all energy transformations are permitted on a small scale, but certain larger transformations are not permitted because it is statistically unlikely that energy or matter will randomly move into more concentrated forms or smaller spaces. Energy transformations in the universe over time are characterized by various kinds of potential energy that has been available since the Big Bang, later being "released" (transformed to more active types of energy such as kinetic or radiant energy), when a triggering mechanism is available. Familiar examples of such processes include nuclear decay, in which energy is released that was originally "stored" in heavy isotopes (such as uranium and thorium), by nucleosynthesis, a process ultimately using the gravitational potential energy released from the gravitational collapse of supernovae, to store energy in the creation of these heavy elements before they were incorporated into the solar system and the Earth. This energy is triggered and released in nuclear fission bombs or in civil nuclear power generation. Similarly, in the case of a chemical explosion, chemical potential energy is transformed to kinetic energy and thermal energy in a very short time. Yet another example is that of a pendulum. At its highest points the kinetic energy is zero and the gravitational potential energy is at maximum. At its lowest point the kinetic energy is at maximum and is equal to the decrease of potential energy. If one (unrealistically) assumes that there is no friction or other losses, the conversion of energy between these processes would be perfect, and the pendulum would continue swinging forever. Energy is also transferred from potential energy (E_p) to kinetic energy (E_k) and then back to potential energy constantly. This is referred to as conservation of energy. In this closed system, energy cannot be created or destroyed; therefore, the initial energy and the final energy will be equal to each other. 
This can be demonstrated by the following: E_{pi} + E_{ki} = E_{pF} + E_{kF} The equation can then be simplified further since E_p = mgh (mass times acceleration due to gravity times the height) and E_k = \frac{1}{2} mv^2 (half mass times velocity squared). Then the total amount of energy can be found by adding E_p + E_k = E_{total}. Conservation of energy and mass in transformation Energy gives rise to weight when it is trapped in a system with zero momentum, where it can be weighed. It is also equivalent to mass, and this mass is always associated with it. Mass is also equivalent to a certain amount of energy, and likewise always appears associated with it, as described in mass-energy equivalence. The formula E = mc², derived by Albert Einstein (1905) quantifies the relationship between rest-mass and rest-energy within the concept of special relativity. In different theoretical frameworks, similar formulas were derived by J. J. Thomson (1881), Henri Poincaré (1900), Friedrich Hasenöhrl (1904) and others (see Mass-energy equivalence#History for further information). Matter may be converted to energy (and vice versa), but mass cannot ever be destroyed; rather, mass/energy equivalence remains a constant for both the matter and the energy, during any process when they are converted into each other. However, since c^2 is extremely large relative to ordinary human scales, the conversion of ordinary amount of matter (for example, 1 kg) to other forms of energy (such as heat, light, and other radiation) can liberate tremendous amounts of energy (~9\times 10^{16} joules = 21 megatons of TNT), as can be seen in nuclear reactors and nuclear weapons. Conversely, the mass equivalent of a unit of energy is minuscule, which is why a loss of energy (loss of mass) from most systems is difficult to measure by weight, unless the energy loss is very large. Examples of energy transformation into matter (i.e., kinetic energy into particles with rest mass) are found in high-energy nuclear physics. Reversible and non-reversible transformations Thermodynamics divides energy transformation into two kinds: reversible processes and irreversible processes. An irreversible process is one in which energy is dissipated (spread) into empty energy states available in a volume, from which it cannot be recovered into more concentrated forms (fewer quantum states), without degradation of even more energy. A reversible process is one in which this sort of dissipation does not happen. For example, conversion of energy from one type of potential field to another, is reversible, as in the pendulum system described above. In processes where heat is generated, quantum states of lower energy, present as possible excitations in fields between atoms, act as a reservoir for part of the energy, from which it cannot be recovered, in order to be converted with 100% efficiency into other forms of energy. In this case, the energy must partly stay as heat, and cannot be completely recovered as usable energy, except at the price of an increase in some other kind of heat-like increase in disorder in quantum states, in the universe (such as an expansion of matter, or a randomisation in a crystal). As the universe evolves in time, more and more of its energy becomes trapped in irreversible states (i.e., as heat or other kinds of increases in disorder). This has been referred to as the inevitable thermodynamic heat death of the universe. 
In this heat death the energy of the universe does not change, but the fraction of energy which is available to do work through a heat engine, or be transformed to other usable forms of energy (through the use of generators attached to heat engines), grows less and less.

Conservation of energy

According to conservation of energy, energy can neither be created (produced) nor destroyed by itself. It can only be transformed. The total inflow of energy into a system must equal the total outflow of energy from the system, plus the change in the energy contained within the system. Energy is subject to a strict global conservation law; that is, whenever one measures (or calculates) the total energy of a system of particles whose interactions do not depend explicitly on time, it is found that the total energy of the system always remains constant.[15]

Richard Feynman said during a 1961 lecture:[16]

Most kinds of energy (with gravitational energy being a notable exception)[17] are subject to strict local conservation laws as well. In this case, energy can only be exchanged between adjacent regions of space, and all observers agree as to the volumetric density of energy in any given space. There is also a global law of conservation of energy, stating that the total energy of the universe cannot change; this is a corollary of the local law, but not vice versa.[2][16] This law is a fundamental principle of physics. As shown rigorously by Noether's theorem, the conservation of energy is a mathematical consequence of translational symmetry of time,[18] a property of most phenomena below the cosmic scale that makes them independent of their locations on the time coordinate. Put differently, yesterday, today, and tomorrow are physically indistinguishable. This is because energy is the quantity which is canonical conjugate to time. This mathematical entanglement of energy and time also results in the uncertainty principle - it is impossible to define the exact amount of energy during any definite time interval. The uncertainty principle should not be confused with energy conservation - rather it provides mathematical limits to which energy can in principle be defined and measured.

Transfer between systems

Main article: Energy transfer

Closed systems

Energy transfer usually refers to movements of energy between systems which are closed to transfers of matter. The portion of the energy which is transferred by conservative forces over a distance is measured as the work the source system does on the receiving system. The portion of the energy which does not do work during the transfer is called heat.[19] Energy can be transferred between systems in a variety of ways. Examples include the transmission of electromagnetic energy via photons, physical collisions which transfer kinetic energy,[20] and the conductive transfer of thermal energy. Energy is strictly conserved and is also locally conserved wherever it can be defined. Mathematically, the process of energy transfer is described by the first law of thermodynamics:

\Delta{}E = W + Q

where E is the amount of energy transferred, W represents the work done on the system, and Q represents the heat flow into the system.[21] As a simplification, the heat term, Q, is sometimes ignored, especially when the thermal efficiency of the transfer is high.

\Delta{}E = W

This simplified equation is the one used to define the joule, for example.

Open systems

There are other ways in which an open system can gain or lose energy.
In chemical systems, energy can be added to a system by means of adding substances with different chemical potentials, which potentials are then extracted (both of these processes are illustrated by fueling an auto, a system which gains in energy thereby, without addition of either work or heat). These terms may be added to the above equation, or they can generally be subsumed into a quantity called "energy addition term E" which refers to any type of energy carried over the surface of a control volume or system volume. Examples may be seen above, and many others can be imagined (for example, the kinetic energy of a stream of particles entering a system, or energy from a laser beam, adds to system energy without being either work done or heat added, in the classic senses).

\Delta{}E = W + Q + E

where E in this general equation represents other additional advected energy terms not covered by work done on a system, or heat added to it.

Internal energy

Internal energy is the sum of all microscopic forms of energy of a system. It is the energy needed to create the system. It is related to the potential energy, e.g., molecular structure, crystal structure, and other geometric aspects, as well as the motion of the particles, in the form of kinetic energy. Thermodynamics is chiefly concerned with changes in internal energy and not its absolute value, which is impossible to determine with thermodynamics alone.[22]

First law of thermodynamics

The first law of thermodynamics asserts that energy (but not necessarily thermodynamic free energy) is always conserved[23] and that heat flow is a form of energy transfer. For homogeneous systems, with a well-defined temperature and pressure, a commonly used corollary of the first law is that, for a system subject only to pressure forces and heat transfer (e.g., a cylinder-full of gas), the differential change in the internal energy of the system (with a gain in energy signified by a positive quantity) is given as

\mathrm{d}E = T\mathrm{d}S - P\mathrm{d}V\,,

where the first term on the right is the heat transferred into the system, expressed in terms of temperature T and entropy S (in which entropy increases and the change dS is positive when the system is heated), and the last term on the right hand side is identified as work done on the system, where pressure is P and volume V (the negative sign results since compression of the system requires work to be done on it and so the volume change, dV, is negative when work is done on the system). This equation is highly specific, ignoring all chemical, electrical, nuclear, and gravitational forces, as well as effects such as advection of any form of energy other than heat and pV-work. The general formulation of the first law (i.e., conservation of energy) is valid even in situations in which the system is not homogeneous. For these cases the change in internal energy of a closed system is expressed in a general form by

\mathrm{d}E=\delta Q+\delta W

where \delta Q is the heat supplied to the system and \delta W is the work applied to the system.

Equipartition of energy

The energy of a mechanical harmonic oscillator (a mass on a spring) is alternatively kinetic and potential. At two points in the oscillation cycle it is entirely kinetic, and alternatively at two other points it is entirely potential. Over the whole cycle, or over many cycles, net energy is thus equally split between kinetic and potential.
This is called equipartition principle; total energy of a system with many degrees of freedom is equally split among all available degrees of freedom. This principle is vitally important to understanding the behaviour of a quantity closely related to energy, called entropy. Entropy is a measure of evenness of a distribution of energy between parts of a system. When an isolated system is given more degrees of freedom (i.e., given new available energy states that are the same as existing states), then total energy spreads over all available degrees equally without distinction between "new" and "old" degrees. This mathematical result is called the second law of thermodynamics. See also Notes and references 1. ^ Energy units are usually defined in terms of the work they can do. However, because work is an indirect measurement of energy, (One example of the difficulties involved: if you use the first law of thermodynamics to define energy as the work an object can do, you must perform a perfectly reversible process, which is impossible in a finite time.) many experts emphasize understanding how energy behaves, specifically the conservation of energy, rather than trying to explain what energy "is". "The Feynman Lectures on Physics Vol I.". Retrieved 3 Apr 2014.  2. ^ a b The Laws of Thermodynamics including careful definitions of energy, free energy, et cetera. 3. ^ Harper, Douglas. "Energy". Online Etymology Dictionary. Retrieved May 1, 2007.  4. ^ Smith, Crosbie (1998). The Science of Energy – a Cultural History of Energy Physics in Victorian Britain. The University of Chicago Press. ISBN 0-226-76420-6.  5. ^ Lofts, G; O'Keeffe D; et al. (2004). "11 — Mechanical Interactions". Jacaranda Physics 1 (2 ed.). Milton, Queensland, Australia: John Willey & Sons Australia Ltd. p. 286. ISBN 0-7016-3777-3.  6. ^ Ristinen, Robert A., and Kraushaar, Jack J. Energy and the Environment. New York: John Wiley & Sons, Inc., 2006. 7. ^ The Hamiltonian MIT OpenCourseWare website 18.013A Chapter 16.3 Accessed February 2007 8. ^ "Retrieved on May-29-09". Retrieved 2010-12-12.  9. ^ Bicycle calculator - speed, weight, wattage etc. [1]. 10. ^ These examples are solely for illustration, as it is not the energy available for work which limits the performance of the athlete but the power output of the sprinter and the force of the weightlifter. A worker stacking shelves in a supermarket does more work (in the physical sense) than either of the athletes, but does it more slowly. 11. ^ Crystals are another example of highly ordered systems that exist in nature: in this case too, the order is associated with the transfer of a large amount of heat (known as the lattice energy) to the surroundings. 12. ^ Ito, Akihito; Oikawa, Takehisa (2004). "Global Mapping of Terrestrial Primary Productivity and Light-Use Efficiency with a Process-Based Model." in Shiyomi, M. et al. (Eds.) Global Environmental Change in the Ocean and on Land. pp. 343–58. 13. ^ "Earth's Energy Budget". Retrieved 2010-12-12.  14. ^ a b Misner, Thorne, Wheeler (1973). Gravitation. San Francisco: W. H. Freeman. ISBN 0-7167-0344-0.  15. ^ Berkeley Physics Course Volume 1. Charles Kittel, Walter D Knight and Malvin A Ruderman 16. ^ a b Feynman, Richard (1964). The Feynman Lectures on Physics; Volume 1. U.S.A: Addison Wesley. ISBN 0-201-02115-3.  17. ^ "E. Noether's Discovery of the Deep Connection Between Symmetries and Conservation Laws". 1918-07-16. Retrieved 2010-12-12.  18. ^ "Time Invariance". Retrieved 2010-12-12.  19. 
^ Although heat is "wasted" energy for a specific energy transfer,(see: waste heat) it can often be harnessed to do useful work in subsequent interactions. However, the maximum energy that can be "recycled" from such recovery processes is limited by the second law of thermodynamics. 20. ^ The mechanism for most macroscopic physical collisions is actually electromagnetic, but it is very common to simplify the interaction by ignoring the mechanism of collision and just calculate the beginning and end result. 21. ^ The signs in this equation follow the IUPAC convention. 22. ^ I. Klotz, R. Rosenberg, Chemical Thermodynamics - Basic Concepts and Methods, 7th ed., Wiley (2008), p.39 23. ^ Kittel and Kroemer (1980). Thermal Physics. New York: W. H. Freeman. ISBN 0-7167-1088-9.  Further reading • Alekseev, G. N. (1986). Energy and Entropy. Moscow: Mir Publishers.  • Crowell, Benjamin (2011) [2003]. Light and Matter. Fullerton, California: Light and Matter.  • Ross, John S. (23 April 2002). "Work, Power, Kinetic Energy". Project PHYSNET. Michigan State University.  • Smil, Vaclav (2008). Energy in nature and society: general energetics of complex systems. Cambridge, USA: MIT Press. ISBN 0-262-19565-8.  • Walding, Richard,  Rapkins, Greg,  Rossiter, Glenn (1999-11-01). New Century Senior Physics. Melbourne, Australia: Oxford University Press. ISBN 0-19-551084-4.  External links
Superheavy Nuclei

The following article provides a brief introduction to the recent discovery of deformed metastable superheavy nuclei and to theoretical calculations that had predicted their stability and decay. It is available in printed form as T-2 Fact Sheet-1.

Discovery of 277112

The heaviest nucleus known to man, 277112, was discovered in February 1996 by scientists working at the Gesellschaft für Schwerionenforschung in Darmstadt, Germany. This nucleus, which consists of 112 protons and 165 neutrons, has a mass number of 277. It is the latest in a series of recently discovered nuclei lying on a rock of deformed metastable superheavy nuclei predicted to exist beyond the earlier chart of the nuclides. These newly discovered nuclei are shown as tiny red egg-shaped objects in the first figure, positioned according to the number of neutrons and the number of protons that they contain. Such superheavy nuclei exist only because of subtle quantum-mechanical effects leading to localized regions of nuclei with increased stability. The above figure shows 10 recently discovered superheavy nuclei, superimposed on a theoretical calculation of the microscopic corrections to the ground-state masses of nuclei extending from the vicinity of lead to heavy and superheavy nuclei. Local minima in this quantity correspond to increased nuclear stability arising from the closing of proton and neutron shells, such as occurs near the doubly magic 208Pb nucleus in the lower left-hand corner. The heaviest nucleus, whose location on the diagram is indicated by the flag, was produced through a gentle reaction between spherical 70Zn and 208Pb nuclei in which a single neutron was emitted.

Nuclear Stability

About 300 nuclei occur in nature, representing isotopes of elements containing from one to at most 94 protons. Some 2,200 additional nuclei have been made artificially during the past 70 years. It becomes increasingly difficult to make heavier nuclei because the disruptive electrostatic forces between the positively charged protons grow faster than the cohesive nuclear forces that hold the nucleons (protons and neutrons) together. The large electrostatic forces cause heavy nuclei to decay rapidly by the emission of alpha particles (helium nuclei) and by spontaneous fission. Nuclei with increased stability beyond the earlier chart of the nuclides can exist because of the closing of proton and neutron shells. A nucleus with a completely filled shell of either protons or neutrons is said to be magic because it is relatively more stable than nuclei with either a larger or a smaller number of nucleons. Most magic nuclei are spherical in shape, but some nuclei can lower their energy somewhat, and hence increase their stability, by rearranging their protons and neutrons into deformed shells accommodating a different number of nucleons. The closing of these deformed shells leads to deformed magic numbers. By use of theories that reproduce the magic numbers and other properties of known nuclei, theorists have predicted that the next spherical proton magic number is 114 and that the next spherical neutron magic number is 184. In addition, they have predicted a deformed proton magic number at 110 and a deformed neutron magic number at 162. These predictions are made by solving the Schrödinger equation of quantum mechanics with an appropriate single-particle potential to describe the motion of the protons and neutrons.
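The magic numbers mentioned above emerge from the level structure of the single-particle potential. The sketch below is an illustration added here (not part of the original fact sheet): it fills the shells of the simplest such potential, a three-dimensional harmonic oscillator, whose major shell N holds (N+1)(N+2) nucleons of one kind. The running totals give the lowest magic numbers (2, 8, 20); reproducing the higher empirical ones (28, 50, 82, 126) and the predicted 114 and 184 requires a more realistic diffuse-surface potential plus spin-orbit coupling, as used in calculations such as the FRDM mentioned below.

```python
def oscillator_magic_numbers(max_shell):
    """Cumulative occupancies of 3D harmonic-oscillator major shells.

    Shell N (N = 0, 1, 2, ...) holds (N+1)*(N+2)/2 orbitals, times 2 spin states.
    """
    totals, running = [], 0
    for N in range(max_shell + 1):
        running += (N + 1) * (N + 2)   # capacity of major shell N (both spin states)
        totals.append(running)
    return totals

print(oscillator_magic_numbers(6))
# -> [2, 8, 20, 40, 70, 112, 168]
# The first three match the empirical magic numbers 2, 8, 20; beyond that,
# spin-orbit splitting in a realistic potential rearranges the levels to give
# 28, 50, 82, 126 (and the predicted 114 protons / 184 neutrons).
```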
The results shown here are calculated from a realistic, diffuse-surface single-particle potential within the framework of the 1992 version of the finite-range droplet model (FRDM) of nuclear physics. The above figure shows the energy released in the alpha decay of the recently discovered superheavy nucleus 272111 and its daughter nuclei. The columns representing the energies are located on squares of the nuclear chart corresponding to the alpha-emitting nuclei. The 1992 version of the finite-range droplet model has accurately predicted the energy released in each of these decays.

Decay by Alpha Emission

Most of the metastable superheavy nuclei that have been discovered live for only about a thousandth of a second, after which they generally decay by emitting a series of alpha particles, as illustrated in the second figure for the superheavy nucleus 272111. However, the decay products of the most recently discovered nucleus 277112 show for the first time that nuclei at the center of the predicted rock of stability live longer than 10 seconds. By measuring the emission times and energies of these alpha particles, scientists can positively identify the initial superheavy nucleus that emitted the first alpha particle. The excellent agreement between these observations and theoretical predictions confirms the predictive power of current nuclear-structure models and represents a triumph for nuclear physics.
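The alpha-decay energies compared with theory above are Q values, fixed by the mass difference between parent, daughter, and the alpha particle. The fragment below is a generic illustration of that bookkeeping, added here for clarity; the isotope and mass-excess values are approximate textbook numbers for the familiar case 238U, not data from the superheavy chains discussed above.

```python
# Q_alpha = Delta(parent) - Delta(daughter) - Delta(4He), with Delta the atomic
# mass excess in MeV.  The values below are approximate tabulated numbers for
# 238U -> 234Th + alpha, used only to illustrate the arithmetic.
MASS_EXCESS_MEV = {
    "238U": 47.31,
    "234Th": 40.61,
    "4He": 2.42,
}

def q_alpha(parent, daughter):
    d = MASS_EXCESS_MEV
    return d[parent] - d[daughter] - d["4He"]

print(f"Q_alpha(238U) ~ {q_alpha('238U', '234Th'):.2f} MeV")  # ~4.3 MeV
# Measured alpha energies of a decay chain are compared with predicted Q values
# (here, those of the FRDM) in exactly this way.
```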
Brazilian Journal of Physics, Print version ISSN 0103-9733, Braz. J. Phys. vol. 28, n. 3, São Paulo, Sept. 1998

Plasma Processes in Pulsar Magnetospheres and Eclipsing Binary Pulsar Systems

Qinghuan Luo
Department of Physics & Mathematical Physics, The University of Adelaide, SA 5005, Australia

Received on 30 March, 1998; Revised version on 17 August, 1998

Plasma processes that may be responsible for pulsar radio emission and for the eclipses observed in binary pulsars are discussed. The high brightness temperature of pulsar radio emission implies that the radiation mechanism must be coherent. Several emission mechanisms are discussed. The high brightness temperature of the radio emission also implies that nonlinear effects on wave propagation through pulsar magnetospheric plasmas are important and may result in radio pulse microstructure or cause fluctuations in the dispersion measure. The discovery of eclipsing binary pulsars provides us with an opportunity to study nonlinear wave-wave interactions in electron-ion plasmas in the winds (or magnetospheres) of companion stars.

I. Introduction

Pulsars are thought to be rotating, strongly magnetized neutron stars emitting radio waves. Radio emission from pulsars is received as periodic sequences of pulses, like the beam from a lighthouse. The periods range from 10^-3 to about 4 seconds [1]. The observed pulses also show secular variations, which are usually described in terms of the period derivative (the time derivative of the period), typically in the range from 10^-15 to 10^-20. The observed pulse period can be associated with the rotation period of a neutron star, and the secular (increasing) change of the pulse period can be associated with the slowdown of the rotation. The slowdown can be attributed to the energy loss due to a large scale flux of electromagnetic energy or particle kinetic energy being carried away from the (magnetospheric) system (e.g. [2]). The current models for pulsars (e.g. [3,4,5]) include (1) electron-positron pair production in the polar region, and (2) magnetospheres filled with electron-positron pair plasmas. A neutron star consists mainly of degenerate neutron gas and can be regarded as a perfect conductor [6]. Rotation can induce an electric field which has a component parallel to the magnetic field and can accelerate particles to very high energies. These ultrarelativistic particles emit high energy photons through curvature radiation or synchrotron radiation (e.g. [7,3]) or inverse Compton scattering [8,9]. High energy photons initiate an electron-positron pair cascade, which then limits the acceleration zone to a specific size [3,4,5,9]. These outflowing pairs form a magnetosphere, which is replenished continuously with electron-positron pairs produced through pair cascades by energetic particles accelerated in the acceleration zone. A pulsar magnetosphere can be roughly divided into two zones: (1) the open field line region near the magnetic pole, where pair plasmas flow out and pass through the light cylinder (where the co-rotation speed approaches the speed of light), and (2) the closed field line region, where pair plasmas are trapped.
The region further outside the light cylinder is called the wind zone, where plasmas move approximately radially. Observations of pulsar radio emission appear to suggest that the emission is produced in the open field line region, where electron-positron pair plasmas flow out along the field lines (e.g. [14]). Investigation of the plasma properties, and in particular of the radiation processes in that region, is the main subject of interest in pulsar theory (e.g. [11,15]). The main feature of magnetospheric pair plasmas is that the particle distribution in momentum space is highly anisotropic. Due to the strong magnetic fields, particles rapidly radiate away their perpendicular energy and move essentially along the magnetic field lines. Since the electron and the positron have the same mass, when the plasma is neutral or quasineutral in charge, some wave modes that appear in conventional electron-ion plasmas are modified and some even disappear (e.g. [11,12,15]; for a discussion of nonrelativistic pair plasmas, cf. [16,17]). As an example, some pulsars also emit circularly polarized waves, but for an electron-positron pair plasma with charge quasineutrality, the wave modes are mainly linearly polarized [11]. In the study of plasma processes relevant for pulsar radio emission, the following two areas have drawn considerable interest from researchers: the production of coherent radio emission, and propagation effects in the magnetospheric plasma. Although a wide range of emission models have been proposed for coherent pulsar radio emission, none of them can satisfactorily explain the observed properties of pulsar radio emission. Here, I will concentrate on the basic plasma processes, which include various types of plasma instabilities that may be relevant for pulsar radio emission and that have been explored in considerable detail but are still not well understood in the context of pulsar radio emission. The discovery of eclipsing binary pulsars provides us with an opportunity to study nonlinear wave-wave interactions in electron-ion plasmas under conditions that are more similar to those of laboratory plasmas. In such a system, the pulsar and its companion are bound by the gravitational force and orbit each other. Radio emission from the pulsar is periodically eclipsed by plasma in the companion wind [18, 19, 20]. The plasma in the companion wind is possibly nonrelativistic and consists mainly of electrons and protons. When intense radio waves propagate through such a plasma, nonlinear wave-wave interactions can be important and can even disperse the radio beam, resulting in eclipses. In Sec. II, the coherent nature of pulsar radio emission is discussed. Dispersion properties of pair plasmas and relevant instabilities are discussed in Sec. III. The propagation effects on radio waves in pulsar magnetospheres are considered in Sec. IV. In Secs. V-VII, eclipsing binary pulsars and the eclipse mechanism due to three-wave interactions are discussed.

II. Pulsar radio emission

The most important information that pulsar radio emission can immediately give us is the coherent nature of the radiation processes in pulsar magnetospheres. Although the observed flux density, typically within the range 10^-3 to a few Jansky (1 Jy = 10^-26 W m^-2 Hz^-1), is relatively weak compared with other radio sources in astrophysics, due to the compactness of the source the inferred effective temperature is extremely high. Indeed, since the typical duration of the pulse is about 10^-3 s, the linear dimension of the source is smaller than 3×10^5 m.
The effective brightness temperature T_eff can be defined by analogy with thermal emission, for which the specific intensity at low frequency is described by the Rayleigh-Jeans law. For radio emission, the effective brightness temperature of a source region is defined by writing the radio intensity I_ν (W m^-2 Hz^-1 sr^-1) in the Rayleigh-Jeans form

I_ν = 2kT_eff ν²/c²,     (1)

where k is the Boltzmann constant and ν is the radio frequency. The specific intensity I_ν can be related to the flux density F_ν by I_ν = F_ν/ΔΩ_0, where ΔΩ_0 is the beam solid angle. As an example, for the Crab pulsar the mean flux density at 400 MHz is about F_400 ≈ 480 mJy (= 4.8×10^-27 W m^-2 Hz^-1) and the distance is D_0 ≈ 2 kpc (= 6.172×10^19 m) [1]. If the bandwidth is taken to be Δν ≈ 400 MHz, one estimates that the radio luminosity is about L_r ≈ F_ν D_0² Δν ≈ 10^21 J s^-1. If the linear size of the source region is 10^4 m, then the effective brightness temperature is estimated to be T_eff ≈ 3×10^27 K. For incoherent emission, where particles radiate independently of each other, thermodynamics implies that kT_eff must be less than the kinetic energy of the radiating particles. For a brightness temperature as high as T_eff = 3×10^27 K, to avoid self-absorption the radiating particles must have the very high energy of about 10^18 MeV! In polar cap models (e.g. [3, 4, 5]), particles can only be accelerated up to energies ~ 10^6 - 10^8 MeV. Therefore, a coherent emission mechanism is required to produce radio emission with such an extremely high brightness temperature. To explain the extremely high brightness temperature of pulsar radio emission, a large number of coherent emission mechanisms have been proposed. The majority of these models have remained at the stage of explaining the coherent nature, i.e. showing how the required high intensity can be achieved, and are not detailed enough to compare with observation. On the other hand, the current observational data are unable to tightly constrain the modeling of pulsar radio emission. Emission models which have been explored in considerable detail include coherent curvature emission by bunching, curvature maser emission, and linear and nonlinear plasma instabilities (for a review, cf. [21]). In the model of emission by bunches, the size of a bunch of emitting particles is assumed to be smaller than the wavelength, so that the phases of the spontaneous radiation fields of the individual particles are coherent. The total radiation intensity then exceeds the sum of the spontaneous radiation intensities from each particle. For example, N particles in a bunch can radiate up to N² times the radiation of an individual particle. The main drawback of this type of model is that there is no satisfactory theory or model for producing and maintaining the particle bunches [29]. Thus, this type of model will not be discussed further here, and my following discussion will emphasize those models based on maser emission or plasma instabilities.

III. Electron-positron pair plasmas

The dispersion properties of pair plasmas in pulsar magnetospheres can be derived using the one-dimensional approximation. In a pulsar magnetosphere, the particle motion can be separated into perpendicular and parallel parts with respect to the magnetic field lines. The perpendicular component of particle motion can be quantized into discrete energy levels called the Landau levels.
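As a numerical cross-check of the Crab estimate given above, the short sketch below (added here for illustration, not part of the original paper) evaluates the brightness temperature from the quoted flux density, distance, bandwidth, and assumed source size, using the Rayleigh-Jeans relation T_eff = c² I_ν / (2kν²) with I_ν ≈ F_ν D_0²/ℓ² for a source of linear size ℓ:

```python
# Brightness-temperature estimate for the Crab pulsar, using the numbers quoted
# in Sec. II above (illustrative sketch only).
k_B = 1.381e-23        # Boltzmann constant, J/K
c   = 2.998e8          # speed of light, m/s

F_nu = 4.8e-27         # flux density at 400 MHz, W m^-2 Hz^-1 (480 mJy)
D0   = 6.172e19        # distance, m (~2 kpc)
dnu  = 4.0e8           # bandwidth, Hz
nu   = 4.0e8           # observing frequency, Hz
size = 1.0e4           # assumed linear size of the source region, m

L_r   = F_nu * D0**2 * dnu                 # radio luminosity estimate, J/s
I_nu  = F_nu * D0**2 / size**2             # specific intensity, W m^-2 Hz^-1 sr^-1
T_eff = c**2 * I_nu / (2.0 * k_B * nu**2)  # Rayleigh-Jeans brightness temperature, K

print(f"L_r   ~ {L_r:.1e} J/s")   # of order 10^21-10^22 J/s
print(f"T_eff ~ {T_eff:.1e} K")   # ~3e27 K, far above any plausible thermal temperature
```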
The time-scale for a particle to lose all its perpendicular energy to synchrotron radiation and fall to the lowest Landau level is t ~ 10^-14 (10^8 T/B)² (γ/10²) s, where γ and B are the Lorentz factor and the magnetic field, respectively. For strong pulsar magnetic fields (~ 10^8 T), this time-scale is so short that all the particles should be in their lowest Landau levels, and the particle motion in the pulsar magnetosphere (well within the light cylinder) is essentially one-dimensional. The linear dispersion properties of a plasma can be described by the permittivity tensor K_ij = δ_ij + (i/ωε_0)σ_ij, with σ_ij defined by writing the induced current in terms of the perturbed electric field: δJ_i(ω,k) = σ_ij(ω,k) δE_j(ω,k). The permittivity tensor for a strongly magnetized, electron-positron pair plasma can be derived in a way similar to that discussed by Baldwin, Bernstein & Weenink [22], except that there may be a modification from the vacuum polarization effect (e.g. [11, 23]), which is ∝ (α_f/4π)(ħΩ_e/m_e c²)² < 1, where α_f is the fine structure constant and Ω_e = 10^19 s^-1 (B/10^8 T) is the nonrelativistic gyrofrequency. This quantum effect will be neglected in the following discussion of the permittivity tensor. In the one-dimensional approximation, the Bessel functions J_n(z) in K_ij can be expanded in z = k_⊥ c u_⊥/Ω_e, where the particle momentum (in units of m_e c) and the wave vector are separated into perpendicular and parallel components, u_⊥ and u_∥, and k_⊥ and k_∥. To the lowest significant order in z, and for |ω − k_∥cβ_∥| << |Ω_e|, where β_∥ is the parallel velocity (in units of c), one has

[Eqs. (2a-e): the explicit expressions for the components of K_ij are not reproduced here]

where ω_ps is the plasma frequency, n_∥ = k_∥ c/ω, n_⊥ = k_⊥ c/ω, η_s is the sign of the particle charge, f_s(γ) is the particle distribution, and the sum is made over all particle species (electrons, positrons, etc.) and all components (including the beam components). Eqs. (2a-e) apply to uniform magnetic fields. When the magnetic field is inhomogeneous, for example when there is a spatial gradient or field line curvature, particles have drift motions such as the curvature drift, which will be discussed in Sec. III.3. The outflowing plasma consists of primary beams of electrons, positrons or ions from the polar caps and a background electron-positron pair plasma produced through pair cascades. Their densities can be estimated as follows. Suppose that the pairs have an average Lorentz factor γ_± and the primary particles have Lorentz factor γ_b, and that the system can somehow adjust itself to equipartition, n_± γ_± ≈ n_b γ_b, where n_± and n_b are the densities of pairs and primary particles, respectively. Since the density of primary particles is the Goldreich-Julian density [10], given by n_b = 2ε_0ΩB/e ≈ 7×10^16 m^-3 (B/10^8 T)(1 s/P), the background plasma density is n_± ≈ n_b γ_b/γ_±. The Lorentz factor of the primary particles depends on the specific acceleration model, and is less than γ_b,max, the value that can be achieved by acceleration through the potential drop across the polar cap. The specific form of the distribution function for the pair plasma, f_±(γ), depends on the specific polar cap model. In Arons' model [5], one has f_±(γ) ∝ γ^-1.5 exp(−γ_0/γ). The distribution has a peak at γ_0 = 50 - 100 (the value depends on other physical parameters as well, cf. [5]). Using (2a-e), one can solve Maxwell's equations to derive dispersion relations for various types of waves.
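For orientation, the Goldreich-Julian number density quoted above is easy to evaluate. The sketch below (an added illustration, not from the paper) reproduces the ≈ 7×10^16 m^-3 figure and the corresponding pair density under the stated equipartition assumption; the beam and pair Lorentz factors are illustrative values only.

```python
from math import pi

eps0 = 8.854e-12   # vacuum permittivity, F/m
e    = 1.602e-19   # elementary charge, C

def goldreich_julian_density(B, P):
    """n_GJ = 2*eps0*Omega*B/e for spin period P (s) and magnetic field B (T)."""
    Omega = 2.0 * pi / P
    return 2.0 * eps0 * Omega * B / e

B, P = 1.0e8, 1.0                      # 10^8 T, 1 s period
n_b = goldreich_julian_density(B, P)   # primary (beam) density
gamma_b, gamma_pm = 1.0e6, 1.0e2       # illustrative beam and pair Lorentz factors
n_pm = n_b * gamma_b / gamma_pm        # pair density from n_pm*gamma_pm ~ n_b*gamma_b

print(f"n_b  ~ {n_b:.1e} m^-3")        # ~7e16 m^-3, as quoted in the text
print(f"n_pm ~ {n_pm:.1e} m^-3")
```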
In the strong field approximation B → ∞, Eqs. (2a-e) simplify to K_ij ≈ 0 for i ≠ j, K_ij ≈ 1 for i = j = 1, 2, and K_33 ≈ 1 − ΔK, where

[Eq. (3): the expression for ΔK in terms of the plasma frequencies and the particle distributions is not reproduced here]

Then, the dispersion relation reduces to n = 1 for the extraordinary mode, and

(1 − n_∥²)(1 − ΔK) − n_⊥² = 0,     (4)

for the ordinary mode, whose propagation depends strongly on the angle between k and B (the magnetic field). When k ∥ B, the waves split into two types. The first type is transverse, with refractive index equal to unity. The second type is a Langmuir wave. According to their phase speed, the ordinary mode waves described by (4) have two branches: superluminal waves, whose phase speed is faster than c, and subluminal waves, whose phase speed is slower than c. In the approximation described above, instabilities resulting from the lowest order wave-particle interaction can occur only for the subluminal branch of the ordinary mode, since these waves have phase speed less than c and the Cerenkov resonance condition can be satisfied. In the region near the polar cap, the plasma is so dense that the condition ω/ω_p << 1 is satisfied. Then, one has a solution for subluminal waves with the dispersion relation given by

[Eq. (5): the dispersion relation of the subluminal (modified Alfvén) branch is not reproduced here]

where ⟨γ⟩ is the average Lorentz factor of the pair plasma. Waves described by (5) are also called (modified) Alfvén waves (e.g. [15]). Various linear plasma instabilities have been considered for coherent pulsar emission, and these can be broadly classified into three types: instabilities of (a) electrostatic waves, e.g. Langmuir waves, (b) electrostatic-electromagnetic waves, e.g. the modified Alfvén waves given by Eq. (5), which have both electrostatic and electromagnetic components, and (c) transverse waves, e.g. the cyclotron instability, which is electromagnetic and can escape directly to the interstellar medium.

III.1 Two-stream Instability

A widely discussed instability for electrostatic waves in the context of pulsar emission is the two-stream instability [3,33,25,26,27,29]. Pulsar magnetospheric plasmas can be regarded as a beam-plasma system since they consist of a pair plasma and energetic electron or positron (or ion) beams. In plasma theory, it is well known that such a system is unstable to the development of what is often called the two-stream instability [24]. There are two types of two-stream instabilities, the counter-streaming instability and the weak-beam instability. The counter-streaming instability is due to two components of the plasma counter-streaming through each other, with the same density and opposite velocities. The weak-beam instability is due to a beam traveling through a background plasma, where the beam is less dense than the background, so that the wave modes are determined by the background and interact with the fast beam. The underlying physics of these two types of instabilities is similar: instabilities occur because beam particles satisfy the Cerenkov resonance condition. The two-stream instability normally occurs for electrostatic waves. Therefore, the instability itself cannot produce radiation directly. Ruderman & Sutherland [3] proposed that the instability of such a beam-plasma system can result in particle (electron and positron) bunching, and that the bunched particles can radiate in phase to produce coherent curvature emission. However, Benford & Buschauer [25] concluded that the growth rate of such an electrostatic instability is not sufficient to explain the level of the pulsar radio emission (also [26,27,29]).
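For readers unfamiliar with the weak-beam variant mentioned above, the sketch below (added here) solves the textbook cold, nonrelativistic beam-plasma dispersion relation 1 = ω_p²/ω² + ω_b²/(ω − k v_b)² numerically and picks out the growing root. It is only a toy illustration under those textbook assumptions; the magnetospheric problem is relativistic and kinetic, which is part of why the growth rates quoted in the works cited above are far less favorable.

```python
import numpy as np

# Cold two-stream (weak beam) dispersion relation:
#   1 = wp^2/w^2 + wb^2/(w - k*vb)^2
# For each k this is a quartic in w; the root with the largest positive
# imaginary part gives the two-stream growth rate.
wp, vb = 1.0, 1.0            # background plasma frequency and beam speed (normalized)
density_ratio = 1e-3         # n_beam / n_background
wb = wp * np.sqrt(density_ratio)

def growth_rate(k):
    # (w^2 - wp^2)*(w - k*vb)^2 - wb^2*w^2 = 0, written out as polynomial coefficients
    a = k * vb
    coeffs = [1.0, -2.0 * a, a**2 - wp**2 - wb**2, 2.0 * a * wp**2, -(a * wp)**2]
    return max(np.roots(coeffs).imag)

ks = np.linspace(0.5, 1.5, 201)
rates = [growth_rate(k) for k in ks]
k_max = ks[int(np.argmax(rates))]
print(f"max growth ~ {max(rates):.3f} wp at k*vb/wp ~ {k_max:.2f}")
# Growth peaks near k*vb ~ wp, with rate ~ (sqrt(3)/2)*(nb/2n0)^(1/3)*wp ~ 0.07 wp here.
```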
Cheng & Ruderman [33] suggested that the counter-streaming instability, with a relatively larger growth rate (compared with Ruderman & Sutherland's model [3]), may occur due to relative motion of the electrons and positrons of the pair plasma. The relative motion of the electrons and positrons is due to the presence of the primary beam and the rotation of the magnetosphere. However, the counter-streaming instability may not be effective if the electrons and positrons of the pair plasma have a broad distribution of parallel momenta.

III.2 Cyclotron Instability

When terms ∝ 1/(ω − k_∥ v_∥ ± Ω_e/γ) are retained in (2a-e), a cyclotron instability may occur. Consider energetic particles in the primary beam or in the tail of the distribution of the secondary pair plasma. It has been suggested that these particles may satisfy the anomalous Doppler resonance condition, allowing a cyclotron instability to develop [34]. The anomalous Doppler effect and the associated instability can be understood as follows. On emission of a photon, p_∥ changes to p_∥ − ħk_∥ through conservation of momentum, and the particle energy ε = (m²c⁴ + p_∥²c² + 2n_L eBħc²)^(1/2) (where the spin effect is ignored and n_L labels the Landau levels) changes to ε − ħω by conservation of energy. Writing ε − ħω = [m²c⁴ + (p_∥ − ħk_∥)²c² + 2n_L′ eBħc²]^(1/2) and l = n_L − n_L′, and then taking ħ → 0, one derives the Doppler condition ω − k_∥v_∥ − lΩ_e/γ = 0. The normal Doppler effect corresponds to l > 0, and the anomalous Doppler effect corresponds to l < 0 [23, 35]. The physics of the cyclotron instability, i.e. of the anomalous Doppler effect, is that the parallel energy serves as free energy, such that a particle can radiate a photon while making a transition from a lower level n_L to an excited level n_L′, viz. l < 0. The parallel energy decreases by more than the perpendicular energy increases, allowing overall energy conservation. In the case of pulsars, since B is large, one needs only to consider the transition between n_L = 0 and n_L′ = 1. Then the anomalous Doppler condition can be rewritten as ω − k_∥v_∥ + Ω_e/γ = 0. Because of the very strong magnetic field in pulsar magnetospheres, the cyclotron instability can develop only near or beyond the light cylinder. However, observation favors the suggestion that emission comes from the region well inside the light cylinder [14].

III.3 Instability induced by curvature drift

Instabilities may occur for the electrostatic-electromagnetic waves described by (5) when an energetic beam travels through the dense background electron-positron pair plasma in a magnetic field with curved field lines. When magnetic field lines have curvature, electrons or positrons can drift across field lines with drift speed v_d = v_∥²γ/(Ω_e R_c), where v_∥ is the parallel (to B) velocity and R_c is the radius of field line curvature [36, 37, 38]. The cyclotron terms in (2a-e) are modified to ω − k_∥v_∥ − k_⊥v_d ± Ω_e/γ. One expects that the 'hydrodynamic' instability due to curvature drift can occur, since the inclusion of curvature drift modifies the Cerenkov resonance condition. In the case of a uniformly magnetized plasma, the Cerenkov resonance condition corresponds to the parallel phase velocity of the waves equaling the parallel velocity of the particles (when the particles move along the field lines). Therefore the resonance condition is symmetric about the field line direction, i.e.
it corresponds to the surface of the Cerenkov cone (about the magnetic field line) defined by the wave vector. In the presence of curvature drift the resonance condition requires that the parallel phase velocity of the waves be either larger or smaller than the parallel velocity of the particles, and this implies that the resonance condition depends explicitly on the signs of both the particle charge and the viewing angle (the angle between the field line and the wave vector). This feature can result in growth of the waves with dispersion relation (5), which have a significant transverse component. Another form of coherent curvature emission involving field line curvature is curvature maser emission, which relies on an effective particle population inversion, i.e. the number of particles with higher energy is significantly larger than the number with lower energies [37,39]. This type of distribution can be the source of free energy to sustain the maser emission (negative absorption). The growth of the waves can be described by an absorption coefficient, which can be calculated using the Einstein coefficient method [23].

III.4 Nonlinear interaction

Apart from linear plasma instabilities, plasma processes involving nonlinear interaction can be important in pulsar radio emission. There are two main reasons why nonlinear instabilities should be considered as well: (a) an instability, initially in the linear regime, may grow into the nonlinear regime, and (b) electromagnetic radiation can be produced through conversion of other types of waves, e.g. electrostatic waves. For (a), when the amplitude of the waves resulting from an instability exceeds a critical value, the variation of the zero-order orbit of the particles becomes important and the energy transfer between the fast particles and the waves can be oscillatory. Therefore, nonlinear effects must be considered. It is possible that strong, oscillating electric fields are generated near the polar caps, e.g. as the result of polar gap oscillation [28], though no detailed models have been developed yet. Alternatively, electrostatic waves can be produced through beam streaming, but the effectiveness of their growth remains unclear (e.g. Sec. III.1). If strong electrostatic waves exist near the polar caps, electromagnetic waves can be produced through either nonlinear interaction or induced scattering. A model based on coherent emission by particles accelerated in a large amplitude, oscillating electric field was proposed by Melrose [29], and further discussed by Rowe [30, 31]. In the model, electromagnetic radiation is produced through induced scattering of electrostatic waves (which can be assumed to be superluminal) by relativistic particles. Asseo et al. [32] proposed a model based on electromagnetic radiation by Langmuir solitons. In their model, Langmuir turbulence is assumed to be present, e.g. due to the two-stream instability (cf. Sec. III.1), and the perpendicular component of the electric field of the soliton can be excited as a result of irregularities in the perpendicular direction. The parallel electric field acts as a source of radiation. An advantage of the model is that the mechanism can produce pulse microstructure (cf. Sec. IV). However, an effective mechanism for generating Langmuir turbulence is required for the model.

IV. Propagation of pulsar radio emission

If radio emission is produced well inside the light cylinder, as apparently favored by observation (e.g.
[14]), the intense radio waves must propagate through the magnetospheric plasma, and the propagation effect on the radio waves within the magnetosphere can be significant. One of the important aspects of propagation effects is refraction of rays, i.e. the propagation direction changes because of inhomogeneity or anisotropy of the plasma.

IV.1 Refraction of rays in an anisotropic plasma

The standard description of ray refraction is the geometric optics approximation, in which the characteristic length scale is much larger than the wavelength ~ c/ω. In this approximation, the rays (the propagation of the waves) are described by Hamilton's equations: dx/dt = ∂ω/∂k, dk/dt = −∂ω/∂x, where k is the wave vector, t is a parameterized distance along the ray path, and ω(k,x) is the dispersion relation in a locally homogeneous plasma. An example of bending of rays in the pulsar magnetospheric plasma was discussed by Melrose [11]. He specifically considered the low-density limit, in which there are two natural modes with refractive indices close to 1. Because of the magnetic field, the two modes propagate differently, and two initially identical rays in the two natural modes will split. Let n_1 and n_2 be the refractive indices of the extraordinary and ordinary modes. In the strong magnetic field limit, assuming the plasma is neutral and symmetric, we have n_1 = 1. The index of the ordinary mode can be derived from (3). One then has

[Eq. (6): the refractive index n_2 of the ordinary mode in this limit is not reproduced here]

where ω >> ω_p/γ and the cold plasma approximation is assumed. The angular separation of the two rays is ΔΘ = ∂(n_2 − n_1)/∂θ, that is,

[Eq. (7): the expression for ΔΘ is not reproduced here]

In the low-density approximation, we have ΔΘ << 1 for θ << 1. The angular separation depends strongly on the frequency.

IV.2 Nonlinear dispersion

Radio waves of different frequencies travel through plasmas with different group velocities, and the difference in propagation time is an integral of (1/β_g1 − 1/β_g2) along the path, where β_g1 and β_g2 are the group speeds (in units of c) at the two radio frequencies ω_1 and ω_2. Let D_* and D_0 be the distances of the source region and of the observer from the star's center, respectively. One may define the dispersion measure

[Eq. (8): the definition of the dispersion measure is not reproduced here]

For an unmagnetized, nonrelativistic plasma with linear dispersion relation n² = 1 − 2ω_p²/ω², one has the group velocity v_g = c(1 − ω_p²/ω²). In this case, the dispersion measure is (DM)_l = ∫ n_e dD, where the subscript l denotes the usual linear dispersion measure. For the ISM, the cyclotron frequency is much less than the radio frequency, and (8) is then applicable. In practice, the dispersion measure (8) is often used to estimate the pulsar distance provided that the electron density of the ISM is given, or vice versa [1]. Due to the high brightness temperature of pulsar radio emission, nonlinear effects on wave propagation may modify the dispersion measure [43]. A nonlinear effect can be characterized by a dimensionless, Lorentz invariant parameter b_Q = eE/(m_e cω), where E is the electric field of the radio emission and ω is the radio (angular) frequency. Assuming that the source size is R_* and the luminosity (J s^-1) of the radio emission is L_* ~ ε_0E²cR_*², one has b_Q ≈ (c/R_*ω)[(4πL_*/m_ec²)(r_e/c)]^(1/2) ≈ 14.3 (L_*/10^21 J s^-1)^(1/2) (400 MHz/ν)(10^4 m/R_*), where r_e = e²/4πε_0m_ec² ≈ 2.8×10^-15 m is the classical electron radius, the refractive index is n ≈ 1, and ν = ω/2π. As the waves propagate away from the source region, b_Q decreases as b_Q ∝ 1/D, where D is the distance of the relevant region from the star's center (R_* in b_Q is replaced by D).
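The numerical coefficient quoted above is easy to verify. The snippet below (an added illustration) evaluates b_Q = (c/R_*ω)[(4πL_*/m_ec²)(r_e/c)]^(1/2) directly for the fiducial source parameters used in the text:

```python
from math import pi, sqrt

c   = 2.998e8       # speed of light, m/s
m_e = 9.109e-31     # electron mass, kg
r_e = 2.818e-15     # classical electron radius, m

def b_Q(L_star, nu, R_star):
    """Nonlinearity (strength) parameter of the radio field at the source."""
    omega = 2.0 * pi * nu
    return (c / (R_star * omega)) * sqrt((4.0 * pi * L_star / (m_e * c**2)) * (r_e / c))

# Fiducial values from the text: L* = 1e21 J/s, nu = 400 MHz, R* = 1e4 m
print(f"b_Q ~ {b_Q(1.0e21, 4.0e8, 1.0e4):.1f}")   # ~14, i.e. strongly nonlinear at the source
# Since b_Q falls off as 1/D, the waves become effectively linear far from the source.
```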
The dispersion measure including the nonlinear effect can be derived by evaluating β_g1 and β_g2 using the nonlinear dispersion relation n² = 1 − 2ω_p²/(γ_Q γ_± ω²), where γ_± is the Lorentz factor of the pair plasma and γ_Q = (1 + b_Q²)^(1/2) (e.g. [40, 41, 42]). Since the nonlinear dispersion relation depends on the intensity of the radio emission, any temporal fluctuation in the intensity can result in a fluctuation in the dispersion measure, which may be potentially observable. For illustrative purposes, here we consider the approximation ν > Ω_e/2π, which may not be a good approximation in the region deep inside the magnetosphere. Using Eq. (8), one finds the fluctuation in the dispersion measure due to nonlinear dispersion, that is,

[Eq. (9): the expression for (DM)_n is not reproduced here]

where γ_b is the Lorentz factor of the primary electrons (or positrons), I ≈ 1/(b_Q x_*²) ≈ 4.4×10^-8/b_Q with x_* ≈ R_L/R_0 ≈ c/(R_0Ω) ≈ 4.78×10^3, R_L = c/Ω is the radius of the light cylinder, R_0 is the star's radius, and n_b is the Goldreich-Julian density as defined earlier. (Compared with the result derived by Wu & Chian [43], the right-hand side is smaller by a factor of γ_±².) Assuming γ_b = 10^7 and γ_± = 10, we obtain (DM)_n ≈ 10^-4 cm^-3 pc, which is superposed on the usual linear dispersion measure (due to the ISM). The observational consequences of the fluctuation due to nonlinear dispersion were discussed in detail by Wu & Chian [43].

IV.3 Modulational instabilities

Observations of individual pulses show intensity variations over very short time scales, typically microseconds [44]. This phenomenon is called pulse microstructure. Chian & Kennel first suggested that intense radio waves propagating through magnetospheric plasmas may undergo a modulational instability, and that this may provide a mechanism for causing pulse microstructure [45]. In nonlinear plasma theory, it is well known that amplitude modulations of a wave of frequency ω/2π, varying over time scales much longer than 2π/ω, can be unstable if the group dispersion P = (1/2)∂v_g/∂k = (1/2)∂²ω/∂k² and the nonlinear frequency shift Q = −∂ω/∂|A|² satisfy the condition PQ > 0, where v_g is the group velocity, A(x,t) is the envelope of the modulated wave amplitude, and the wave vector potential is expressed as A(x,t) exp[i(k·x − ωt)] + c.c. (e.g. [46]). In general, the envelope A is determined by the nonlinear Schrödinger equation. Thus, the slow modulation A can be regarded as describing quasiparticles, in the sense that they are described by the wave function A, and the instability can be interpreted as bunching of these quasiparticles towards the potential well. When the condition is satisfied, the quasiparticle bunching enhances the potential well and attracts more quasiparticles, resulting in a self-modulation instability [46]. The initial growth of the modulational instability can be treated perturbatively; a dispersion relation for the modulation can be derived in close analogy with the linear analysis of wave instabilities, except that the dispersion relation depends on the large wave amplitude [45, 47, 48, 51, 52]. The growth of the modulational instability requires that the amplitude exceed a threshold (i.e. the amplitude must be large enough to allow the instability to occur). From the dispersion relation, the growth rate of the modulational instability can be calculated. Nonlinear analyses of wave propagation in electron-positron plasmas were discussed by Chian & Kennel [45], Kates & Kaup [49], and recently by Gratton et al. [50]. There was similar, earlier work by Sakai & Kawata [13] on nonlinear propagation of Alfvén waves in ultrarelativistic electron-positron plasmas.
In general, the full solution of the nonlinear Schrödinger equation leads to solitons or turbulence, and the nonlinear solution reduces to the result of the linear analysis in the weak modulation limit. The problem of nonlinear propagation of electromagnetic waves was also studied by several other authors, e.g. the case with magnetic fields was considered by Stenflo, Shukla & Yu [55] and Mofiz et al. [54], the case with ion components was discussed by Rizzato [56] and Rizzato, Schneider & Dillenburg [57], and the case of relativistic plasmas was studied by Mikhailovskii, Onishchenko & Tatarinov [58] and Mofiz [53].

V. Eclipsing binary pulsars

Several eclipsing binary pulsars have been discovered. Among these eclipsing binary pulsar systems, both PSR B1957+20 and PSR B1744−24A are millisecond pulsars with short orbital periods and low-mass companions [59, 19]. (The eclipsing binary pulsar PSR J2051−0827 was recently discovered to have parameters similar to those of PSR B1957+20 [79, 80].) The observed properties of these two eclipsing pulsars can be summarized as follows. (1) The eclipse radius is larger than the inferred Roche lobe radius and is much larger than the radius of the companion. (2) The eclipse radius is frequency dependent, R_E ~ ν^-δ with δ ~ 0.41 for PSR B1957+20 and δ ~ 0.63 for PSR B1744−24A. (3) The propagation time of the pulsar signal at egress (exit of eclipse) is longer than at ingress (entrance of eclipse). (4) The eclipse is approximately symmetric about orbital phase 0.25, at which the pulsar is behind the companion. (5) There are continuum eclipses at lower frequencies. For PSR B1957+20, the average continuum flux density at 318 MHz was observed to drop dramatically during the pulsed eclipse, while the 1.4 GHz continuum flux density was almost unchanged [60]. For PSR B1744−24A, similar continuum or partial eclipses were observed for frequencies below 1.6 GHz. All these properties indicate that the eclipses are due to plasma in the companion wind rather than material inside the Roche lobe.

V.1 Plasma conditions in the eclipse region

Through accurate timing, one of the most remarkable advantages of pulsar observations, and optical observation of the companion, we can estimate in detail some important physical parameters that characterize the plasma conditions near or in the eclipse region. The plasma density can be estimated from the observed delays in the propagation time of the pulsar signal. The propagation time increases with increasing plasma density and decreases with increasing frequency. Near the eclipse region the plasma density increases with decreasing radial distance from the companion star, and this causes excess time delays for radio waves traveling through the plasma. For PSR B1957+20, the observed delays in propagation time give an electron column density near ingress of about 4×10^19 m^-2 at 318 MHz [61]. If the characteristic length of the eclipsing material along the line of sight is ~ R_E = 0.68 R_sun = 4.7×10^8 m at 318 MHz [59], the plasma density is estimated to be 10^11 m^-3, corresponding to a plasma frequency of ν_p = ω_p/2π = 5.2 MHz. The average parallel (to the line of sight) magnetic field can be estimated by measuring the delay between the right and left circularly polarized signals. For PSR B1957+20, the inferred B is ~ 10^-4 T [59]. The presence of a much stronger field is possible further inside the eclipse region (e.g. [62]). The temperature of the eclipsing plasma can be as high as ~ 10^6 K [59, 60].
For a companion star with a mass of a few per cent of a solar mass, as in PSR B1957+20 and PSR B1744−24A, the minimum temperature is > 10^6 K, which may be estimated by equating the gravitational potential at the surface to the electron thermal energy. The optical observations also suggest that the outflowing plasma from the companion is sustained and heated by the pulsar wind [63, 64].

V.2 Eclipse mechanisms

Although there are extensive observational data on the eclipsing pulsars PSR B1957+20 [59, 61] and PSR B1744−24A [66, 67], two fundamental issues remain unsolved: what physical process causes the eclipse, and how the pulsar wind interacts with the companion star. Several mechanisms have been proposed to explain the eclipses. These include (1) the refraction/reflection model (e.g. [68]), (2) absorption models, e.g. free-free absorption [69, 70, 71] and cyclotron absorption [72], and (3) induced scattering models, e.g. Raman scattering [73,74,72,62] and Brillouin scattering [72], in which the eclipse is attributed to nonlinear wave-wave interactions in the plasma that lead to an effective scattering of the beam of pulsar radio emission. For both systems, the inferred plasma frequency is far below the observed frequency, and this in fact rules out refraction/reflection as the cause of the eclipses. Moreover, the refraction/reflection model cannot predict the correct frequency dependence of the eclipse duration. Among the absorption models, free-free absorption requires a rather cool wind with temperature ~ 300 K, which appears implausible in view of the strong irradiation from the pulsar wind. The cyclotron absorption model by Thompson et al. [72] requires a strong magnetic field and a very hot wind with T_e > 10^8 K. This type of model predicts a much stronger frequency dependence than that inferred from the observations. Among these proposed mechanisms, induced scattering appears the most plausible for pulsar eclipses. The arguments in favor of this include (1) pulsar radio emission has a high brightness temperature, and (2) the radio emission is highly beamed. Condition (1) favors nonlinear interactions of the radio waves with other, low-frequency waves in the plasma. Condition (2) implies that the radio emission can be regarded as a photon beam, and such a beam can produce an instability in which the low-frequency waves grow due to the nonlinear interaction. The low-frequency waves can then scatter the high-frequency photon beam, resulting in diffusion of the photon beam in k-space, and thus producing the eclipse.

VI. Three-wave interactions

In the random phase approximation, the radio beam can be modeled as a collection of photons, whose distribution in wave vector space is confined to a small solid angle and described by the occupation number N(k), whose integration over k gives the photon density. The nonlinear interaction is described by a set of kinetic equations for N(k) [75, 76]. The evolution of N(k) is assumed to occur on time scales much longer than the reciprocal of the lowest frequency of all the relevant waves, and thus weak turbulence theory is applicable [76]. The waves in a three-wave interaction satisfy the beat conditions

ω(k) = ω′(k′) ± ω″(k″),   k = k′ ± k″,     (10)

which correspond to energy and momentum conservation in the semiclassical formalism.
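Multiplying the beat conditions (10) by ħ makes the semiclassical reading explicit: ħω = ħω′ ± ħω″ and ħk = ħk′ ± ħk″ are photon energy and momentum conservation. For the eclipse problem the low-frequency wave is a plasma wave near ν_p, so the fractional frequency shift of the scattered radio photon is tiny; the snippet below (added here) checks this hierarchy with the plasma parameters quoted in Sec. V.1:

```python
# Frequency hierarchy behind the "small angle scattering" limit of Sec. VI.1:
# the low-frequency (plasma) wave sits near nu_p, far below the radio frequency.
nu_radio = 318e6   # observing frequency, Hz (PSR B1957+20 eclipse observations)
nu_p     = 5.2e6   # plasma frequency quoted in Sec. V.1, Hz

shift = nu_p / nu_radio
print(f"fractional frequency shift omega''/omega ~ {shift:.3f}")   # ~0.02
# Since omega'' << omega ~ omega', the scattered photon keeps nearly the same
# frequency, consistent with the approximation used in Sec. VI.1 below.
```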
VI.1 Small angle scattering

In the small angle scattering approximation |k| ~ |k′| >> |k″|, ω ~ ω′ >> ω″, the kinetic equations reduce to a pair of equations [35, 77, 62, 78]

[Eqs. (11) and (12): the reduced kinetic equations for the low-frequency wave and photon occupation numbers are not reproduced here]

where the sum over the repeated subscript indices i, j is implied and where N and N_L represent the photon and low-frequency wave occupation numbers, respectively. The first equation describes two effects on the low-frequency waves: absorption (or instability), with absorption coefficient Γ, and the production of low-frequency waves through induced photon decay, described by S_L. Note that S_L is the counterpart of "spontaneous emission" in wave-particle interactions, but here it is, in fact, an induced process, e.g. [35]. When Γ < 0, instabilities occur. All these quantities are related to the three-wave probability w(k, k″), which describes the probability of emission of a low-frequency wave with wave vector k″ by a high-frequency photon with wave vector k. The calculation of Γ is analogous to that of the absorption coefficient for waves due to resonant interaction with particles. The high-frequency photons play the role of the particles, with the particle distribution function f(p) replaced by the photon occupation number N. Just as particle-wave interactions can lead to instability of the low-frequency waves under the appropriate conditions, so the three-wave interactions can lead to a photon-beam-induced instability, described here by Γ < 0. The scattering effect of the low-frequency waves on the high-frequency photon beam is described by a diffusion coefficient D_ij. The diffusion of the photon beam is similar to the diffusion of a particle beam due to wave-particle interactions [77, 62]. The quantities Γ, D_ij, S_L and G_i can be derived using the method discussed by Melrose [35, 74] and Luo & Melrose [77, 62].

VI.2 Large angle scattering

For large angle scattering, one needs to consider the full kinetic equations, which take the form

[Eqs. (13) and (14): the full kinetic equations for the photon and low-frequency wave occupation numbers, written in terms of the three-wave probability, are not reproduced here]

where W = ω²ω′/4π²c³ and the three-wave probability is (2π)³ħ R″ |e*_i e′_j e″_l a_ijl|²/(ωω′ω″). The ratio of the electric to the total energy in the low-frequency waves is represented by R″. The quadratic response tensor is a_ijl, and the polarization is given by e_i for the high-frequency waves, e′_j for the scattered high-frequency waves with frequency ω′, and e″_l for the low-frequency waves with frequency ω″. The calculation of the probability is given in Luo & Melrose [62]. In (14), N_L± represents the occupation number of low-frequency waves, where the plus sign corresponds to the low-frequency waves emitted by scattering of high-frequency photons from k to k′ (ω > ω′), and the minus sign corresponds to the low-frequency waves being absorbed (ω < ω′). The integration is made over the solid angle of k′. The remaining term (its symbol is not rendered in the source) corresponds to the production of low-frequency waves through induced photon decay,

∫ dΩ W D_1 D_2 N(k − k″) N(k),     (15)

with D_1 = |e·e′|², D_2 = 1 − k·k′/kk′.

VI.3 Application to pulsar eclipses

Small angle scattering is applicable for unmagnetized plasmas. The relevant low-frequency waves are plasma waves. For the parameters appropriate for eclipsing binary pulsars, Landau damping constrains k″, and one has |k″| << |k| ~ |k′|. For small angle scattering the estimated growth rate for the low-frequency waves is quite large [62]. The energy density in the low-frequency waves can grow to a saturated level provided that the damping rate is slow.
The saturated level can be controlled by the perpendicular diffusion, whose effect is to reduce the angular anisotropy of the photon beam. The characteristic time for perpendicular diffusion is t_⊥ ~ D_ii/k_⊥² with k_⊥ ≈ kθ_0. We assume that the photon beam has an angular width θ_0. For the maximum growth rate Γ_max, one has t_⊥ ~ 1/Γ_max. In the application to PSR B1957+20 and PSR B1744−24A, the small angle scattering model appears to predict a stronger frequency dependence of the eclipse radius than that inferred from observation. For magnetized plasmas, which may be the case in eclipsing binary pulsars, large angle scattering involving Bernstein waves may occur. For large angle scattering, one may estimate the scattering effect by calculating Γ = −(1/N)dN/dt. The scattering effect is important if the optical depth τ ≈ ΓR_E/c is larger than 1. A possible application of large angle scattering involving low-frequency Bernstein waves was considered by Luo & Melrose [62].

VII. Summary

Pulsar magnetospheres are natural laboratories for studying plasma physics under unusual conditions, e.g. very strong magnetic fields and highly relativistic electrons and positrons. Plasma processes in pulsar magnetospheres are not well understood, in particular the processes relevant for the production of pulsar radio emission. These problems continue to challenge physicists working in both astrophysics and plasma physics. The recent discovery of eclipsing binary pulsars provides us with an opportunity to study nonlinear wave-wave interactions in the electron-ion plasmas in the winds (or magnetospheres) of companion stars. The high brightness temperature of pulsar radio emission implies that nonlinear wave-wave interactions can be important in these nonrelativistic electron-ion plasmas. Wave-wave interactions may disrupt the propagation of the highly beamed radio emission and result in pulsar eclipses.

The author thanks Abraham Chian for helpful discussions and FAPESP of Brazil for financial support during his visit to INPE, where the work was done. Financial support from the ARC through a fellowship is also acknowledged.

References

R. N. Manchester and J. H. Taylor, Pulsars, (Freeman, San Francisco, 1977).
F. C. Michel, Theory of Neutron Star Magnetospheres, (University Chicago Press, 1991).
M. A. Ruderman and P. G. Sutherland, Astrophys. J. 196, 51 (1975).
J. Arons and E. T. Scharlemann, Astrophys. J. 231, 854 (1979).
J. Arons, Astrophys. J. 248, 1099 (1981).
S. L. Shapiro and S. A. Teukolsky, Black Holes, White Dwarfs, and Neutron Stars, (John Wiley, New York, 1983).
P. A. Sturrock, Astrophys. J. 164, 529 (1971).
X. Y. Xia, G. J. Qiao, X. J. Wu and Y. Q. Hou, Astron. Astrophys. 152, 93 (1985).
Q. Luo, Astrophys. J. 468, 338 (1996).
P. Goldreich and W. H. Julian, Astrophys. J. 157, 869 (1969).
D. B. Melrose, Aust. J. Phys. 32, 61 (1979).
J.-I. Sakai and T. Kawata, J. Phys. Soc. Japan 49, 747 (1980).
J.-I. Sakai and T. Kawata, J. Phys. Soc. Japan 49, 753 (1980).
M. Blaskiewicz, J. M. Cordes and I. Wasserman, Astrophys. J. 370, 643 (1991).
J. Arons and J. Barnard, Astrophys. J. 302, 120 (1986).
G. P. Zank and R. G. Greaves, Phys. Rev. E 51, 6079 (1995).
Q. Luo and D. B. Melrose, J. Plasma Phys. 58, 345 (1997).
A. S. Fruchter, D. R. Stinebring and J. H. Taylor, Nature 333, 237 (1988).
A. G. Lyne, et al., Nature 347, 650 (1990).
A. G. Lyne, J. D. Biggs, P. A. Harrison and M. Bailes, Nature 361, 47 (1993).
D. B. Melrose, in The Magnetospheric Structure and Emission Mechanisms of Radio Pulsars, ed. T. H. Hankins, J. M. Rankin & J. A. Gil, (Pedagogical University Press, 1992) p. 105.
D. E. Baldwin, I. B. Bernstein and M. P. H. Weenink, Adv. Plasma Phys. 3, 1 (1969).
D. B. Melrose, Plasma Astrophysics, Vol. 1, 2, (Gordon and Breach, New York, 1980).
N. A. Krall and A. W. Trivelpiece, Principles of Plasma Physics, (McGraw-Hill, New York, 1973).
G. Benford and R. Buschauer, Mon. Not. R. Astron. Soc. 179, 189 (1977).
E. Asseo, R. Pellat and M. Rosado, Astrophys. J. 239, 661 (1983).
V. V. Usov, Astrophys. J. 320, 333 (1987).
S. Shibata, J. Miyazaki and F. Takahara, Mon. Not. R. Astron. Soc. 295, L53 (1998).
D. B. Melrose, Astrophys. J. 225, 557 (1978).
E. T. Rowe, Aust. J. Phys. 45, 1 and 45, 21 (1992).
E. T. Rowe, Astron. & Astrophys. 269, 275 (1995).
E. Asseo, G. Pelletier and H. Sol, Mon. Not. R. Astron. Soc. 247, 529 (1990).
A. F. Cheng and M. A. Ruderman, Astrophys. J. 212, 800 (1977).
G. Z. Machabeli and V. V. Usov, Sov. Astron. Lett. 15, 393 (1989).
D. B. Melrose, Instabilities in Space and Laboratory Plasmas, (Cambridge University Press, 1986).
V. V. Zheleznyakov and V. E. Shaposhnikov, Aust. J. Phys. 32, 49 (1979).
Q. Luo and D. B. Melrose, Mon. Not. R. Astron. Soc. 258, 616 (1992).
Q. Luo, D. B. Melrose and G. Z. Machabeli, Mon. Not. R. Astron. Soc. 268, 159 (1994).
Q. Luo and D. B. Melrose, Mon. Not. R. Astron. Soc. 276, 372 (1995).
C. E. Max, Phys. Fluids 16, 1277 (1973).
P. C. Clemmow, J. Plasma Phys. 12, 287 (1974).
A. C.-L. Chian, Lett. Nuov. Cimento 29, 393 (1980).
X. J. Wu and A. C.-L. Chian, Astrophys. J. 443, 261 (1995).
J. M. Cordes, Space Sci. Rev. 24, 567 (1979).
A. C.-L. Chian and C. F. Kennel, Astrophys. & Space Sciences 80, 261 (1983).
A. Hasegawa, Plasma Instabilities and Nonlinear Effects (Springer-Verlag, Berlin, 1975), p. 194.
R. T. Gangadhara, V. Krishan and P. K. Shukla, Mon. Not. R. Astron. Soc. 262, 151 (1993).
L. Gomberoff, V. Muñoz and R. M. O. Galvão, Phys. Rev. E56, 4581 (1997).
R. E. Kates and D. J. Kaup, J. Plasma Phys. 42, 521 (1989).
F. T. Gratton, et al., Phys. Rev. E55, 3381 (1997).
L. Gomberoff and R. M. O. Galvão, Phys. Rev. E56, 4574 (1997).
V. Muñoz and L. Gomberoff, Phys. Rev. E57, 994 (1998).
U. A. Mofiz, Phys. Rev. A40, 2203 (1989).
U. A. Mofiz, et al., Phys. Rev. A38, 5935 (1988).
L. Stenflo, P. K. Shukla and M. Y. Yu, Astrophys. Space Sci. 117, 303 (1985).
F. B. Rizzato, J. Plasma Phys. 40, 289 (1988).
F. B. Rizzato, R. S. Schneider and D. Dillenburg, Phys. Lett. A133, 59 (1988).
A. B. Mikhailovskii, O. G. Onishchenko and E. G. Tatarinov, Plasma Phys. Contr. Fusion 27, 539 (1985).
A. S. Fruchter, et al., Astrophys. J. 351, 642 (1990).
A. S. Fruchter and W. M. Goss, Astrophys. J. 384, L47 (1992).
M. F. Ryba and J. H. Taylor, Astrophys. J. 380, 557 (1991).
Q. Luo and D. B. Melrose, Astrophys. J. 452, 346 (1995).
A. S. Fruchter, J. E. Gunn, T. R. Lauer and A. Dressler, Nature 334, 686 (1988).
A. S. Fruchter, J. Bookbinder and C. D. Bailyn, Astrophys. J. 443, L21 (1995).
F. Arzoumanian, A. S. Fruchter and J. H. Taylor, Astrophys. J. 426, L85 (1994).
D. J. Nice and S. E. Thorsett, Astrophys. J. 397, 249 (1992).
D. J. Nice, S. E. Thorsett and J. H. Taylor, Astrophys. J. 361, L61 (1990).
E. S. Phinney, et al., Nature 333, 823 (1988).
I. Wasserman and J. M. Cordes, Astrophys. J. 333, L91 (1988).
F. A. Rasio, S. L. Shapiro and S. A. Teukolsky, Astrophys. J. 342, 934 (1989).
F. A. Rasio, S. L. Shapiro and S. A. Teukolsky, Astron. Astrophys. 241, L25 (1991).
C. Thompson, et al., Astrophys. J. 422, 304 (1994).
M. Gedalin and D. Eichler, Astrophys. J. 406, 629 (1993).
D. B. Melrose, J. Plasma Phys. 51, 13 (1994).
R. C. Davidson, Methods in Nonlinear Plasma Theory (Academic, New York, 1972).
V. N. Tsytovich, Nonlinear Effects in Plasma (Plenum, New York, 1970).
Q. Luo and D. B. Melrose, Publ. Astron. Soc. Australia 12, 71 (1995).
Q. Luo and A. C.-L. Chian, Mon. Not. R. Astron. Soc. 289, 52 (1997).
B. W. Stappers, et al., Astrophys. J. 465, L119 (1996).
B. W. Stappers, M. S. Bessel and M. Bailes, Astrophys. J. 473, L119 (1996).
Physics

Physics (from the Greek, φυσικός (phusikos), "natural", and φύσις (phusis), "nature") is the science of nature in the broadest sense. Physicists study the behavior and properties of matter in a wide variety of contexts, ranging from the sub-nuclear particles from which all ordinary matter is made (particle physics) to the behavior of the material Universe as a whole (cosmology). Some of the properties studied in physics are common to all material systems, such as the conservation of energy. Such properties are often referred to as laws of physics. Physics is sometimes said to be the "fundamental science", because each of the other natural sciences (biology, chemistry, geology, etc.) deals with particular types of material systems that obey the laws of physics. For example, chemistry is the science of molecules and the chemicals that they form in the bulk. The properties of a chemical are determined by the properties of the underlying molecules, which are accurately described by areas of physics such as quantum mechanics, thermodynamics, and electromagnetism. Physics is also closely related to mathematics. Physical theories are almost invariably expressed using mathematical relations, and the mathematics involved is generally more complicated than in the other sciences. The difference between physics and mathematics is that physics is ultimately concerned with descriptions of the material world, whereas mathematics is concerned with abstract patterns that need not have any bearing on it. However, the distinction is not always clear-cut. There is a large area of research intermediate between physics and mathematics, known as mathematical physics, devoted to developing the mathematical structure of physical theories.

Overview of physics research

Theoretical and experimental physics

The culture of physics research differs from that of the other sciences in the separation of theory and experiment. Since the twentieth century, most individual physicists have specialized in either theoretical physics or experimental physics, and very few have been successful in both forms of research. In contrast, almost all the successful theorists in biology and chemistry have also been experimentalists. Roughly speaking, theorists seek to develop theories that can explain existing experimental results and successfully predict future results, while experimentalists devise and perform experiments to explore new phenomena and test theoretical predictions. Although theory and experiment are developed separately, they are strongly dependent on each other. Progress in physics frequently comes about when experimentalists make a discovery that existing theories cannot account for, necessitating the formulation of new theories. In the absence of experiment, theoretical research can go in the wrong direction; this is one of the criticisms that have been levelled against M-theory, a popular theory in high-energy physics for which no practical experimental test has ever been devised.

Central theories

While physics deals with a wide variety of systems, there are certain theories that are used by all physicists. Each of these theories is believed to be basically correct, within a certain domain of validity. For instance, the theory of classical mechanics accurately describes the motion of objects, provided they are much larger than atoms and moving at much less than the speed of light.
These theories continue to be areas of active research; for instance, a remarkable aspect of classical mechanics known as chaos was investigated in the 20th century, three centuries after its formulation by Isaac Newton. However, few physicists expect any of them to prove fundamentally misguided. They are important tools for research into more specialized topics, and any student of physics, regardless of his or her specialization, is expected to be well-versed in them.

Classical mechanics
  Major subtopics: Newton's laws of motion, Lagrangian mechanics, Hamiltonian mechanics, Chaos theory, Fluid dynamics, Continuum mechanics
  Concepts: Dimension, Space, Time, Motion, Length, Velocity, Mass, Momentum, Force, Energy, Angular momentum, Torque, Conservation law, Harmonic oscillator, Wave, Work, Power

Electromagnetism
  Major subtopics: Electrostatics, Electricity, Magnetism, Maxwell's equations
  Concepts: Electric charge, Current, Electric field, Magnetic field, Electromagnetic field, Electromagnetic radiation, Magnetic monopole

Thermodynamics and Statistical mechanics
  Major subtopics: Heat engine, Kinetic theory
  Concepts: Boltzmann's constant, Entropy, Free energy, Heat, Partition function, Temperature

Quantum mechanics
  Major subtopics: Path integral formulation, Schrödinger equation, Quantum field theory
  Concepts: Hamiltonian, Identical particles, Planck's constant, Quantum entanglement, Quantum harmonic oscillator, Wavefunction, Zero-point energy

Theory of relativity
  Major subtopics: Special relativity, General relativity
  Concepts: Equivalence principle, Four-momentum, Reference frame, Spacetime, Speed of light

Major fields of physics

Contemporary research in physics is divided into several distinct fields that study different aspects of the material world. Condensed matter physics, by most estimates the largest single field of physics, is concerned with how the properties of bulk matter, such as the ordinary solids and liquids we encounter in everyday life, arise from the properties and mutual interactions of the constituent atoms. The field of atomic, molecular, and optical physics deals with the behavior of individual atoms and molecules, and in particular the ways in which they absorb and emit light. The field of particle physics, also known as "high-energy physics", is concerned with the properties of submicroscopic particles much smaller than atoms, including the elementary particles from which all other units of matter are constructed. Finally, the field of astrophysics applies the laws of physics to explain astronomical phenomena, ranging from the Sun and the other objects in the solar system to the universe as a whole.
Astrophysics
  Subfields: Cosmology, Planetary science, Plasma physics
  Major theories: Big Bang, Cosmic inflation, General relativity, Law of universal gravitation
  Concepts: Black hole, Cosmic background radiation, Galaxy, Gravity, Gravitational radiation, Planet, Solar system, Star

Atomic, molecular, and optical physics
  Subfields: Atomic physics, Molecular physics, Optics, Photonics
  Major theories: Quantum optics
  Concepts: Diffraction, Electromagnetic radiation, Laser, Polarization, Spectral line

Particle physics
  Subfields: Accelerator physics, Nuclear physics
  Major theories: Standard Model, Grand unification theory, M-theory
  Concepts: Fundamental force (gravitational, electromagnetic, weak, strong), Elementary particle, Antimatter, Spin, Spontaneous symmetry breaking, Theory of everything, Vacuum energy

Condensed matter physics
  Subfields: Solid state physics, Materials physics, Polymer physics
  Major theories: BCS theory, Bloch wave, Fermi gas, Fermi liquid, Many-body theory
  Concepts: Phases (gas, liquid, solid, Bose-Einstein condensate, superconductor, superfluid), Electrical conduction, Magnetism, Self-organization, Spin, Spontaneous symmetry breaking

Related fields

There are many areas of research that mix physics with other disciplines. For example, the wide-ranging field of biophysics is devoted to the role that physical principles play in biological systems, and the field of quantum chemistry studies how the theory of quantum mechanics gives rise to the chemical behavior of atoms and molecules. Some of these are listed below.

Acoustics – Astronomy – Biophysics – Computational physics – Electronics – Engineering – Geophysics – Materials science – Mathematical physics – Medical physics – Physical chemistry – Physics of computation – Quantum chemistry – Vehicle dynamics

Fringe theories

Cold fusion – Dynamic theory of gravity – Luminiferous aether – Steady state theory – Wave Structure of Matter

History

Main article: History of physics. See also Famous physicists and Nobel Prize in Physics.

Since antiquity, people have tried to understand the behavior of matter: why unsupported objects drop to the ground, why different materials have different properties, and so forth. Also a mystery was the character of the universe, such as the form of the Earth and the behavior of celestial objects such as the Sun and the Moon. Several theories were proposed, most of which were wrong. These theories were largely couched in philosophical terms and were never verified by the kind of systematic experimental testing practiced today. There were exceptions and anachronisms: for example, the Greek thinker Archimedes derived many correct quantitative descriptions of mechanics and hydrostatics. The works of Ptolemy (astronomy) and Aristotle (physics) were also found not always to match everyday observations. For example, an arrow that keeps flying through the air after leaving the bow contradicts Aristotle's assertion that the natural state of all objects is rest. The willingness to question previously held truths and search for new answers resulted in a period of major scientific advancements, now known as the Scientific Revolution. Its origins can be found in the European re-discovery of Aristotle in the twelfth and thirteenth centuries. This period culminated with the publication of the Philosophiae Naturalis Principia Mathematica by Isaac Newton in 1687.
The Scientific Revolution is held by most historians (e.g., Howard Margolis) to have begun in 1543, when the Polish astronomer Nicolaus Copernicus was brought the first printed copy of De Revolutionibus, a book he had written about a dozen years earlier. The thesis of this book is that the Earth moves around the Sun. Other significant scientific advances were made during this time by Galileo Galilei, Christiaan Huygens, Johannes Kepler, and Blaise Pascal. During the early 17th century, Galileo pioneered the use of experimentation to validate physical theories, which is the key idea in the scientific method. Galileo formulated and successfully tested several results in dynamics, in particular the Law of Inertia. In 1687, Newton published the Principia Mathematica, detailing two comprehensive and successful physical theories: Newton's laws of motion, from which arise classical mechanics; and Newton's Law of Gravitation, which describes the fundamental force of gravity. Both theories agreed well with experiment. The Principia also included several theories in fluid dynamics. Classical mechanics was extended by Leonhard Euler, Joseph-Louis de Lagrange, William Rowan Hamilton, and others, who produced new results and new formulations of the theory. The law of universal gravitation initiated the field of astrophysics, which describes astronomical phenomena using physical theories. After Newton defined classical mechanics, the next great field of inquiry within physics was the nature of electricity. Observations in the 17th and 18th centuries by scientists such as Robert Boyle, Stephen Gray, and Benjamin Franklin created a foundation for later work. These observations also established our basic understanding of electrical charge and current. In 1831, Michael Faraday integrated the study of magnetism with the study of electricity by demonstrating that a moving magnet induces an electric current in a conductor. Faraday also formulated a physical conception of electromagnetic fields. James Clerk Maxwell built upon this conception, in 1864, with an interlinked set of 20 equations that explained the interactions between electric and magnetic fields. These 20 equations were later reduced, using vector calculus, to a set of four equations. In addition to other electromagnetic phenomena, Maxwell's equations can also be used to describe light. Confirmation of this prediction came in 1888, when Heinrich Hertz generated and detected radio waves, and in 1895, when Wilhelm Roentgen detected X-rays. The ability to describe light in electromagnetic terms served as a springboard for Albert Einstein's publication of his theory of special relativity. This theory combined classical mechanics with Maxwell's equations. The theory of special relativity unifies space and time into a single entity, spacetime. Relativity prescribes a different transformation between reference frames than classical mechanics; this necessitated the development of relativistic mechanics as a replacement for classical mechanics. In the regime of low (relative) velocities, the two theories agree. Einstein built further on the special theory by including gravity in his calculations, and published his theory of general relativity in 1915. One part of the theory of general relativity is Einstein's field equation. This describes how the stress-energy tensor creates curvature of spacetime and forms the basis of general relativity.
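For reference (added here; the original text describes the equation only in words), Einstein's field equation is usually written as below; in the weak-field, slow-motion limit it reduces to Newtonian gravity in the form of Poisson's equation for the potential Φ sourced by the mass density ρ.

\[ R_{\mu\nu} - \tfrac{1}{2}\,R\, g_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu}, \qquad \Delta \Phi = 4\pi G\, \rho \quad \text{(Newtonian limit)}. \]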
Further work on Einstein's field equation produced results which predicted the Big Bang, black holes, and the expanding universe. Einstein believed in a static universe and tried (and failed) to fix his equation to allow for this. However, by 1929 Edwin Hubble argued that astronomical observations demonstrate that the universe is expanding. From the 18th century onwards, thermodynamics was developed by Boyle, Young, and many others. In 1733, Bernoulli used statistical arguments with classical mechanics to derive thermodynamic results, initiating the field of statistical mechanics. In 1798, Benjamin Thompson (Count Rumford) demonstrated the conversion of mechanical work into heat, and in 1847 Joule stated the law of conservation of energy, in the form of heat as well as mechanical energy. Ludwig Boltzmann, in the 19th century, is responsible for the modern form of statistical mechanics. In 1895, Roentgen discovered X-rays, which turned out to be high-frequency electromagnetic radiation. Radioactivity was discovered in 1896 by Henri Becquerel, and further studied by Marie Curie, Pierre Curie, and others. This initiated the field of nuclear physics. In 1897, Joseph J. Thomson discovered the electron, the elementary particle which carries electrical current in circuits. In 1904, he proposed the first model of the atom, known as the plum pudding model. (The existence of the atom had been proposed in 1808 by John Dalton.) These discoveries of radioactivity and the electron revealed that the assumption of many physicists that atoms were the basic unit of matter was flawed, and prompted further study into the structure of atoms. In 1911, Rutherford deduced from scattering experiments the existence of a compact atomic nucleus, with positively charged constituents dubbed protons. Neutrons, the neutral nuclear constituents, were discovered in 1932 by Chadwick. The equivalence of mass and energy (Einstein, 1905) was spectacularly demonstrated during World War II, as research was conducted by each side into nuclear physics, for the purpose of creating a nuclear bomb. The German effort, led by Heisenberg, did not succeed, but the Allied Manhattan Project reached its goal. In America, a team led by Fermi achieved the first man-made nuclear chain reaction in 1942, and in 1945 the world's first nuclear explosive was detonated at the Trinity site, near Alamogordo, New Mexico. In 1900, Max Planck published his explanation of blackbody radiation. His explanation assumed that the energy of the radiators is quantized, which proved to be the opening argument in the edifice that would become quantum mechanics. Beginning in 1900, Planck, Einstein, Niels Bohr, and others developed quantum theories to explain various anomalous experimental results by introducing discrete energy levels. In 1925 Heisenberg, and in 1926 Schrödinger and Paul Dirac, formulated quantum mechanics, which explained the preceding heuristic quantum theories. In quantum mechanics, the outcomes of physical measurements are inherently probabilistic; the theory describes the calculation of these probabilities. It successfully describes the behavior of matter at small distance scales. During the 1920s, Erwin Schrödinger, Werner Heisenberg, and Max Born were able to formulate a consistent picture of the chemical behavior of matter and a complete theory of the electronic structure of the atom, as a byproduct of quantum theory.
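Two of the formulas referred to in this passage, added here for reference in their standard modern forms: Planck's 1900 law for the spectral radiance of a blackbody, and the time-dependent Schrödinger equation of 1926.

\[ B_\nu(T) = \frac{2h\nu^3}{c^2}\, \frac{1}{e^{h\nu/k_B T} - 1}, \qquad i\hbar\, \frac{\partial}{\partial t}\,\Psi = \hat H\, \Psi. \]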
Quantum field theory was formulated in order to extend quantum mechanics to be consistent with special relativity. It was devised in the late 1940s with work by Richard Feynman, Julian Schwinger, Sin-Itiro Tomonaga, and Dyson. They formulated the theory of quantum electrodynamics, which describes the electromagnetic interaction, and successfully explained the Lamb shift. Quantum field theory provided the framework for modern particle physics, which studies fundamental forces and elementary particles. Chen Ning Yang and Tsung-Dao Lee, in the 1950s, predicted an unexpected asymmetry, the violation of parity, in the decays of subatomic particles; it was confirmed experimentally by Chien-Shiung Wu in 1957. In 1954, Yang and Robert Mills developed a class of gauge theories which provided the framework for understanding the nuclear forces. The theory of the strong nuclear force was first proposed by Murray Gell-Mann. The electroweak force, the unification of the weak nuclear force with electromagnetism, was proposed by Sheldon Lee Glashow, Abdus Salam and Steven Weinberg, and was confirmed experimentally with the discovery of the W and Z bosons in 1983; earlier, in 1964, James Watson Cronin and Val Fitch had discovered CP violation in the decays of neutral kaons. This led to the so-called Standard Model of particle physics in the 1970s, which successfully describes all the elementary particles observed to date. Quantum mechanics also provided the theoretical tools for condensed matter physics, whose largest branch is solid state physics. It studies the physical behavior of solids and liquids, including phenomena such as crystal structures, semiconductivity, and superconductivity. The pioneers of condensed matter physics include Bloch, who created a quantum mechanical description of the behavior of electrons in crystal structures in 1928. The transistor was developed by physicists John Bardeen, Walter Houser Brattain and William Bradford Shockley in 1947 at Bell Telephone Laboratories. The two themes of the 20th century, general relativity and quantum mechanics, appear inconsistent with each other. General relativity describes the universe on the scale of planets and solar systems while quantum mechanics operates on sub-atomic scales. This challenge is being attacked by string theory, which treats spacetime as composed not of points but of one-dimensional objects, strings. Strings have properties like a common string (e.g., tension and vibration). The theories yield promising, but not yet testable, results. The search for experimental verification of string theory is in progress. The United Nations has declared the year 2005, the centenary of Einstein's annus mirabilis, the World Year of Physics.

Future directions

Main article: unsolved problems in physics.

As of 2004, research in physics is progressing on a large number of fronts. In condensed matter physics, the biggest unsolved theoretical problem is the explanation for high-temperature superconductivity. Strong efforts, largely experimental, are being put into making workable spintronics and quantum computers. In particle physics, the first pieces of experimental evidence for physics beyond the Standard Model have begun to appear. Foremost amongst these are indications that neutrinos have non-zero mass. These experimental results appear to have solved the long-standing solar neutrino problem in solar physics. The physics of massive neutrinos is currently an area of active theoretical and experimental research. In the next several years, particle accelerators will begin probing energy scales in the TeV range, in which experimentalists are hoping to find evidence for the Higgs boson and supersymmetric particles.
Theoretical attempts to unify quantum mechanics and general relativity into a single theory of quantum gravity, a program ongoing for over half a century, have not yet borne fruit. The current leading candidates are M-theory, superstring theory and loop quantum gravity.

Although much progress has been made in high-energy, quantum, and astronomical physics, many everyday phenomena involving complexity, chaos, or turbulence are still poorly understood. Complex problems that seem as if they could be solved by a clever application of dynamics and mechanics, such as the formation of sandpiles, nodes in trickling water, the shape of water droplets, mechanisms of surface-tension catastrophes, and self-sorting in shaken heterogeneous collections, remain unsolved. These complex phenomena have received growing attention since the 1970s for several reasons, not least of which has been the availability of modern mathematical methods and computers, which enabled complex systems to be modelled in new ways. The interdisciplinary relevance of complex physics has also increased, as exemplified by the study of turbulence in aerodynamics or the observation of pattern formation in biological systems. In 1932, Horace Lamb prophesied, correctly so far, that turbulence would prove even harder to understand than quantum electrodynamics: of the two matters on which he hoped for enlightenment, quantum electrodynamics and the turbulent motion of fluids, he said he was rather optimistic only about the former.

Suggested readings

Basic Physics
• Paul Hewitt, Conceptual Physics with Practicing Physics Workbook (9th Edition), Addison Wesley Publishing Company, 2001, hardcover, 790 pages, ISBN 0321052021. A non-mathematical introduction to physics.
• Douglas C. Giancoli, Physics: Principles with Applications, 6/E, Prentice Hall, 2005, 1040 pages, ISBN 0130606200. This is an algebra-based physics textbook.
• Jerry D. Wilson & Anthony J. Buffa, College Physics (5th edition), Prentice Hall, 2002, 2 volumes, 1040 pages, ISBN 0130676446. This is an algebra- and trigonometry-based physics textbook.
The Book Read other articles by the authors by Dr Adrian Klein and Dr Robert Neil Boyd PhD In the last Issue, we briefly examined how physical objects (including Brains) are harmonically integrated into our planet's energy systems. Keeping in mind that any quantifiable energy is coupled by its SubQuantum structure, into Information fields, we are becoming increasingly more familiar with the fundamental concept of Information-driven phenomena, in the physical world. Let's now make a further step down into the more subtle background of the connections that matter can establish to the Information realm, by way of microphysical structures, where the actual energetic coupling modalities reside. The crystalline structure of matter itself (which we previously referred to for exemplification purposes) is defined in terms of an advanced ordering of its atomic "cell structure", seen as periodic arrangements at the microscopic level (known as the atomic lattice structure). Where does such order in the atomic lattice (an Information-related concept par excellence) come from? Are crystalline structures to be seen merely as objects displaying flat surfaces, intersecting at some characteristic angle? Even were we to limit our understandings in this manner, a given crystalline angulation constant, specific for each kind of "regular" crystal structure, inevitably requires that some definite Informational background, must already be in place. In 1982, on observing a new kind of diffraction diagram, Shechtman put in evidence an internal crystalline structure with icosahedronic symmetry along with 10 axes of threefold symmetry and 15 axes of twofold symmetry. Such quasiperiodical crystals posses an aperiodical rotational symmetry yielding 5, 8, 10 and 12-sided prisms, with such crystals displaying unique physical properties, which properties are obviously directly related to the underlying Informational blueprints which are producing these discrete diffraction patterns. In a quasi-crystal there are more then three basic wave vectors, which act together to index all the internal diffraction peaks. Quasicrystals are thus physical manifestations resulting from a different degree, or type, of Informational matrix, being distinctly differentiated from regular periodic crystals. In matter, wave trains propagate in the infinite one-dimensional Fermi-Popov-Ulam (FPU) lattice, creating branching patterns. Using invariant theory as applied to different symmetries in the FPU lattice, mathematical bifurcation equations may be found for these branching patterns, where a generic nonlinearity selects a specific pair of two-parameter families of mixed-mode wave trains (S.Guo & All). Such selection occurs at the background of Information control, supplied at the sub quantum level of the given vibratory event. Information gradients appear as basic proactive agents in structuring matter at its fundamental levels. This gets still more evident when we examine the symmetrical arrangements of chiral compounds. Non-superimposable stereoisomers (enantiomers), result in complete mirror images of each other. In symmetric environments, they rotate polarized light by equal amounts, but in opposite directions. If in equilibrium with optically active isomers, enantiomers yield a "racemic", producing a zero-value net rotation of interacting plane-polarized light. Such fine-tuned equilibrium states, obviously require that an exceedingly accurate Information control, must already be in place, before the fact. 
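To make the racemic-mixture statement quantitative, here is a minimal numerical sketch (the specific-rotation, concentration and path-length values are hypothetical, chosen only for illustration): the net optical rotation of a mixture of two enantiomers scales with the enantiomeric excess, so a 50:50 (racemic) mixture rotates plane-polarized light by exactly zero.

```python
def net_rotation(frac_plus, specific_rotation=66.0, conc_g_per_ml=0.10, path_dm=1.0):
    """Net observed optical rotation (degrees) of a mixture of two enantiomers.

    frac_plus         -- mole fraction of the (+)-rotating enantiomer
    specific_rotation -- [alpha] of the pure (+) form (hypothetical value)
    conc_g_per_ml     -- total concentration in g/mL
    path_dm           -- polarimeter path length in dm
    """
    enantiomeric_excess = frac_plus - (1.0 - frac_plus)   # ranges from -1 to +1
    return specific_rotation * enantiomeric_excess * conc_g_per_ml * path_dm

for frac in (1.0, 0.75, 0.5):
    print(f"(+)-fraction {frac:.2f} -> net rotation {net_rotation(frac):+.2f} deg")
```

For the racemic case the two contributions cancel exactly, which is the zero net rotation described above.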
Homochiral, enantio-enriched or heterochiral variants of chemical compounds are at the basis of various enentio-selective preparations, with different medical applications at organic level, which vary according to discrete Information parameters. It's well known that Thalidomide's antiemetic effect, as related to one of its enantiomers, may be downgraded by another one's toxic, terratogenic side-effect. Nevertheless, the Information-controlled mutual interconversion effects of these enantiomers in vivo, efficiently reduce the biotoxicity of the compound. Obviously, organic survival interests are able to Informationally control (at least in part), subtle microchemical features at their fundamental level, per a Bohmian implication mechanism of downward informational regulation.Such Sub Quantum control mechanisms can be followed all the way down, starting at their cosmic source. Huge electric fields with very high dv/dt are created by separations in the stellar plasma, leading to atomic and subatomic particle dissociations, which produce aether fluxes. Similar atomic dissociations occur by impacts of gamma radiation along resonance parameters with the given atomic structure. A slightly off-centering of the gamma ray from the photoresonant frequency of the atom will result in its dissociation into a shower of subatomic particles, while a perfect tuning will disintegrate it totally into Sub quantum units. Proper combinations of aether fluxes and photonuclear resonances will find an endless field of applications in a new energetic technology, once the required funding for such developments would be achieved. Aether particle fluxes, when involved in Reichenbach's Odic energy, appear to be the fundamental constituents of biofields and bioenergy, which evolve according to the Informational gradients carried by these SubQuantum dynamics. Life and Information are thus ontological correlates, just as the Informational and energetic backgrounds of observable reality (as implemented and modulated under Information control), are correlated. At a still deeper level of analytical resolution, let's contemplate some pertinent data regarding vibration aspects as related to self-reinforcing solitary wave pulses, resulting by cancellations of non-linear and dispersive effects in transmission medias - the solitons. Pressure solitons have been recently associated to neural signal conductions because of their specific stability features, which are due to topological constraints. Soliton descriptions are highly relevant for all consciousness-involving systems, large scale material structures included (such as rocks, mountains, stars etc) - especially the ones implying Fermi-Popov-Ulam lattices. They are closely coupled to the sentience operating in aether flux organizations, as we shall see in various future installments of our scientific adventure, and are manifesting in a wide range of physical and non-physical events (screw dislocations in crystalline lattices, Dirac strings, magnetic monopoles in electromagnetism, cosmic strings as domain walls in cosmology, and so on.). Since the FPU lattice is a multidimensional Hamiltonian system of nonlinearly coupled oscillators, the holographic FPU soliton displays a hyper dimensional freedom of movement, and thus can't be specifically related to any particular location in the 3-D space. Under certain energetic conditions internal to the structure, the FPU soliton can relocate over large distances. 
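The lattice the text calls the "FPU lattice" is standardly known as the Fermi-Pasta-Ulam (or Fermi-Pasta-Ulam-Tsingou, FPUT) chain: a line of oscillators with a weakly nonlinear coupling. Below is a minimal numerical sketch of such a chain (parameters are hypothetical and chosen only for illustration); it integrates the so-called beta-chain and tracks how much energy stays in the lowest normal mode, the quantity whose surprising near-recurrence started this whole line of research.

```python
import numpy as np

N, beta, dt, steps = 32, 0.1, 0.05, 20000   # illustrative parameters

n = np.arange(1, N + 1)
x = np.sin(np.pi * n / (N + 1))   # excite only the lowest normal mode
v = np.zeros(N)

def force(x):
    """Nearest-neighbour forces of the FPUT beta-chain with fixed ends."""
    xp = np.concatenate(([0.0], x, [0.0]))
    dl = xp[1:-1] - xp[:-2]    # stretch of the left spring
    dr = xp[2:] - xp[1:-1]     # stretch of the right spring
    return (dr - dl) + beta * (dr**3 - dl**3)

mode1 = np.sqrt(2.0 / (N + 1)) * np.sin(np.pi * n / (N + 1))
w1 = 2.0 * np.sin(np.pi / (2 * (N + 1)))   # frequency of the lowest mode

for step in range(steps):
    # velocity-Verlet time step
    v += 0.5 * dt * force(x)
    x += dt * v
    v += 0.5 * dt * force(x)
    if step % 4000 == 0:
        q1, p1 = np.dot(x, mode1), np.dot(v, mode1)
        print(f"t = {step*dt:7.1f}   energy in mode 1 = {0.5*(p1**2 + (w1*q1)**2):.4f}")
```

This is only a toy integration; serious studies of wave trains and bifurcations in such lattices use far more careful numerics.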
The multidimensional structure of the FPU soliton results in its coherent features, regardless of any environmental disturbances which may be influencing a particular set of internal dimensions. The long-term memory of Space itself has been instrumentally determined to be directly related to these solitonic features, as we mentioned previously regarding Gariaev's DNA Phantom experiments. There are also certain kinds of multidimensional non-coherent low-energy Solitons, which are evanescent over time, or upon disturbances. Such "short-term solitons" are involved in the short term memory of the physical vacuum, as experimentally shown by Gariaev's "DNA Phantom" effect. All these considerations lead to a fully justified conjecture of a sentient Universe, possible connected by the Quantum Potential, with multiple parallel universes (Hawking). All of this together is working as an endless loop of Information, governing the All under the umbrella of the overarching cosmic harmony, through the physical vacuum's short-term, and long-term memory programs. Mathematical analysis confirms such conclusions, showing stable and robust embedded solitons resulting in the third-order nonlinear Schrödinger equation. Technology brings in further confirmation standards. Besides Quantum Information transport by single-photon or single-spin implementations, optical solitons can be experimentally handled due to their single-quantum like behaviors. For example, solitons may supply specific Quantum signatures in Information transmission chains. Further on, an entanglement effect between Solitons, or correlated information transmissions and receptions produced by propagating subquantum entities, being mutually absorbed and radiated by far-distant objects, leads to quantum-mechanical nonlocal correlations, as established between distant objects. Here we have the physical basis for instant Information transfers, and the basis for complex Information matrix replications, as observed in Extrasensory transmission tracks, bypassing Brain mediation. Further on: An electric charge radiates, but does not absorb, light waves, despite the fact that the Maxwell equations are invariant under time reversal. This is because electric charge is constantly being replenished by subquantal entities from the vacuum, which entities are then coherently integrated and re-emitted as observable photons. Coherent integration of such subquantal "virtual" energy, evolves under Information control, which resides at the next implication level. Observable photons are emitted when the ceaseless perturbations of charged particles by vacuum fluctuations (zitterbewegung), reaches the Quantum threshold. At this point, both the undifferentiated and the organized Informations, of sub quantum origins, enters the realms of normal matter and energy. Subsequently, this information is carried along, for example, by photons, towards the infinite possible energy configurations we know of as our physical world. According to Leyton's "hierarchy of symmetries", a broken symmetry at a given level preserves the Information in order to regenerate the symmetry at the next higher level, thus increasing the total symmetry in the system along negentropic (organizing, ordering) lines. This process allows permanent transductions of previously unorganized "virtual" Vacuum energies, at the sub quantum level, into observable organized electromagnetic energy quanta. 
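For reference (the text above only names the equation), here is the standard cubic nonlinear Schrödinger equation together with its simplest one-soliton solution; the balance of dispersion against nonlinearity visible here is what keeps a soliton's shape stable. Higher-order variants, such as the "third-order" equation mentioned above, add further derivative terms but rely on the same mechanism.

\[ i\,\frac{\partial \psi}{\partial t} + \frac{1}{2}\,\frac{\partial^2 \psi}{\partial x^2} + |\psi|^2\,\psi = 0, \qquad \psi(x,t) = A\,\operatorname{sech}(A x)\, e^{i A^2 t/2}. \]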
The ultimate physical source of charge - hence of energy - proves to be the Information itself, at the deep sub quantum levels of the vacuum. As we will see in due time, complex nonlocal subquantal Information fields may result in, and purposefully modulate, Brain's Quantum activity, thus coupling to neurally mediated environmental energy patterns in the bio-structures-correlated modus operandi of the Self. Quantum Potential directly affects permeability and permitivity of free space via hyper-dimensional magnetic fields related to sub quantum vortices. As gravity is a gradient of the combination of these space properties, it has to be controlled by hyper-dimensional Informational components, inherent in the sub quantum, via the Quantum Potential. If gravity is seen as a gradient of aether density, weird Informational effects easily find their proper explanatory framework. In Podkletnov's amazing experiments, the impact of a "gravity-like" unshieldable, superluminally proceeding beam. This was produced by a high-powered electrical pulse, impacting on a superconductor. The Podkletnov apparatus also produces a back-acting ray, involving a certain time-delay (in the millisecond domain) due to the "Vacuum compensation effect". ("Vacuum compensation" is just a deeper understanding of Newton's 3rd law of action and reaction, where the physical vacuum is acting on disturbances of the aether media (previously in equilibrium), by producing compensating aether motions, which motions act to produce physical fields and forces, such that the original perturbations of the media are exactly compensated, after some small time delay, restoring local equilibrium.) Podkletnov's back-acting beam produces extremely weird effects on matter. For example, technicians unwittingly caught in the back-acting beam emanating from Podkletnov's device, found themselves embarrassingly "welded" to whatever object they happened to be touching at the time the back-acting beam passed through them, presenting one of the clues whereby the back-acting beam was first discovered. These effects are similar to the several odd effects reportedly witnessed during the famous "Philadelphia Experiment". (Which, though strongly denied by officials, is disclosed to be still running today in the "Montauk experiments" framework). Similarly, many other experimentally supported insights strongly support the LaPlacian model of gravitation, where a superluminal SubQuantum Aether is the originating cause of gravity. Such understandings meaningfully meet our pan-energetic Sentient SubQuantum tenets. Sentience IS ultimately a branch of physics, though not one exclusively limited to matter. In more advanced installments, we will comment upon the tremendous philosophical implications of the new model we are proposing, with a strong potential to ultimately replace hitherto accepted paradigmatic views. A new philosophy of science has to emerge, one that is able to account for the whole spectrum of experimental observables, the "paranormal" ones included. Experimental proofs and evidences, for our being on the right track, are constantly piling up. 24th October 2008
Friday, April 20, 2012

Schrödinger has never met Newton

Sabine Hossenfelder believes that Schrödinger meets Newton. But is the story about the two physicists' encounter true? Yes, these were just jokes. I don't think that Sabine Hossenfelder misunderstands the history in this way. Instead, what she completely misunderstands is the physics, especially quantum physics. She is in good company. Aside from the authors of some nonsensical papers she mentions, e.g. van Meter, Giulini and Großardt, Harrison and Moroz with Tod, Diósi, and Carlip with Salzman, similar basic misconceptions about elementary quantum mechanics have been promoted by Penrose and Hameroff. Hameroff is a physician who, along with Penrose, ascribed supernatural abilities to the gravitational field. It's responsible for the gravitationally induced "collapse of the wave function" which also gives us consciousness and may even be blamed for Penrose's (not to mention Hameroff's) complete inability to understand rudimentary quantum mechanics, among many other wonderful things; I am sure that many of you have read the Penrose-Hameroff crackpottery and a large percentage of those readers even fail to see why it is a crackpottery, a problem I will try to fix (and judging by the 85-year-long experience, I will fail). It's really Penrose who should be blamed for the concept known as the Schrödinger-Newton equations.

So what are the equations? Sabine Hossenfelder reproduces them completely mindlessly and uncritically. They're supposed to be the symbiosis of quantum mechanics combined with the Newtonian limit of general relativity. They say:
\[ i\hbar \frac{\partial}{\partial t} \Psi(t,\vec x) = \left(-\frac{\hbar^2}{2m} \Delta + m\Phi(t,\vec x)\right) \Psi(t,\vec x), \]
\[ \Delta \Phi(t,\vec x) = 4\pi G\, m\, |\Psi(t,\vec x)|^2. \]
Don't get misled by the beautiful form they take in \(\rm\LaTeX\) implemented by MathJax; the superficial beauty of the letters doesn't guarantee their validity. Sabine Hossenfelder and others immediately talk about mechanically inserting numbers into these equations, and so on, but they never ask a basic question: Are these equations actually right? Can we prove that they are wrong? And if they are right, can they be responsible for anything important that shapes our observations? Of course, the second equation is completely wrong; it fundamentally misunderstands the basic concepts in physics. And even if you forgot the reasons why the second equation is completely wrong, these equations couldn't be responsible for anything important we observe – e.g. for well-defined perceptions after we measure something – because of the immense weakness of gravity (and because of other reasons).

Analyzing the equations one by one

So let us look at the equations, what they say, and whether they are the right equations describing the particular physical problems. We begin with the first one,
\[ i\hbar \frac{\partial}{\partial t} \Psi(t,\vec x) = \left(-\frac{\hbar^2}{2m} \Delta + m\Phi(t,\vec x)\right) \Psi(t,\vec x). \]
Is it right? Yes, it is a conventional time-dependent Schrödinger equation for a single particle that includes the gravitational potential. When the gravitational potential matters, it's important to include it in the Hamiltonian as well. The gravitational potential energy is of course as good a part of the energy (the Hamiltonian) as the kinetic energy, given by the spatial Laplacian term, and it should be included in the equations.
In reality, we may of course neglect the gravitational potential in practice. When we study the motion of a few elementary particles, their mutual gravitational attraction is negligible. For two electrons, the gravitational force is more than \(10^{40}\) times weaker than the electrostatic force. Clearly, we can't measure the transitions in a Hydrogen atom with the relative precision of \(10^{-40}\). The "gravitational Bohr radius" of an atom that is only held gravitationally would be comparably large to the visible Universe because the particles are very weakly bound, indeed. Of course, it makes no practical sense to talk about energy eigenstates that occupy similarly huge regions because well before the first revolution (a time scale), something will hit the particles so that they will never be in the hypothetical "weakly bound state" for a whole period. But even if you consider the gravity between a microscopic particle (which must be there for our equation to be relevant) such as a proton and the whole Earth, it's pretty much negligible. For example, the protons are running around the LHC collider and the Earth's gravitational pull is dragging them down, with the usual acceleration of \(g=9.8\,\,{\rm m}/{\rm s}^2\). However, there are so many forces that accelerate the protons much more strongly in various directions that the gravitational pull exerted by the Earth can't be measured. But yes, it's true that the LHC magnets and electric fields are also preventing the protons from "falling down". The protons circulate for minutes if not hours and as skydivers know, one may fall pretty far down during such a time. An exceptional experiment in which the Earth's gravity has a detectable impact on the quantum behavior of particles are the neutron interference experiments, those that may be used to prove that gravity cannot be an entropic force. To describe similar experiments, one really has to study the neutron's Schrödinger equation together with the kinetic term and the gravitational potential created by the Earth. Needless to say, much of the behavior is obvious. If you shoot neutrons through a pair of slits, of course that they will accelerate towards the Earth much like everything else so the interference pattern may be found again; it's just shifted down by the expected distance. People have also studied neutrons that are jumping on a trampoline. There is an infinite potential energy beneath the trampoline which shoots the neutrons up. And there's also the Earth's gravity that attracts them down. Moreover, neutrons are described by quantum mechanics which makes their energy eigenstates quantized. It's an interesting experiment that makes one sure that quantum mechanics does apply in all situations, even if the Earth's gravity plays a role as well, and that's where the Schrödinger equation with the gravitational potential may be verified. I want to say that while the one-particle Schrödinger equation written above is the right description for situations similar to the neutron interference experiments, it already betrays some misconceptions by the "Schrödinger meets Newton" folks. The fact that they write a one-particle equation is suspicious. The corresponding right description of many particles wouldn't contain wave functions that depend on the spacetime, \(\Psi(t,\vec x)\). Instead, the multi-particle wave function has to depend on positions of all the particles, e.g. \(\Psi(t,\vec x_1,\vec x_2)\). 
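As a sanity check on the numbers quoted above, here is a quick sketch using standard values of the constants (added for illustration, not from the original post): the electrostatic repulsion between two electrons versus their gravitational attraction, and the "gravitational Bohr radius" of a hydrogen-like atom bound by gravity alone.

```python
hbar = 1.054571817e-34    # J s
G    = 6.67430e-11        # m^3 kg^-1 s^-2
k_e  = 8.9875517923e9     # N m^2 C^-2 (Coulomb constant)
e    = 1.602176634e-19    # C
m_e  = 9.1093837015e-31   # kg
m_p  = 1.67262192369e-27  # kg

# Ratio of Coulomb repulsion to gravitational attraction for two electrons
ratio = (k_e * e**2) / (G * m_e**2)
print(f"F_Coulomb / F_gravity for two electrons ~ {ratio:.1e}")   # ~ 4e42, i.e. > 10^40

# Gravitational analogue of the Bohr radius: replace k_e e^2 by G m_e m_p
a_grav = hbar**2 / (G * m_e**2 * m_p)
print(f"gravitational Bohr radius ~ {a_grav:.1e} m")   # ~ 1e29 m, larger than the visible Universe
```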
However, the Schrödinger equation above already suggests that the "Schrödinger meets Newton" folks want to treat the wave function as an object analogous to the gravitational potential, a classical field. This totally invalid interpretation of the objects becomes lethal in the second equation. Confusing observables with their expectation values, mixing up probability waves with classical fields The actual problem with the Schrödinger-Newton system of equations is the second equation, Poisson's equation for the gravitational potential,\[ \Delta \Phi(t,\vec x) = 4\pi G m \abs{\Psi(t,\vec x)}^2 \] Is this equation right under some circumstances? No, it is never right. It is a completely nonsensical equation which is nonlinear in the wave function \(\Psi\) – a fatal inconsistency – and which mixes apples with oranges. I will spend some time with explaining these points. First, let me start with the full quantum gravity. Quantum gravity contains some complicated enough quantum observables that may only be described by the full-fledged string/M-theory but in the low-energy approximation of an "effective field theory", it contains quantum fields including the metric tensor \(\hat g_{\mu\nu}\). I added a hat to emphasize that each component of the tensor field at each point is a linear operator (well, operator distribution) acting on the Hilbert space. I have already discussed the one-particle Schrödinger equation that dictates how the gravitational field influences the particles, at least in the non-relativistic, low-energy approximation. But we also want to know how the particles influence the gravitational field. That's given by Einstein's equations,\[ \hat{R}_{\mu\nu} - \frac{1}{2} \hat{R} \hat{g}_{\mu\nu} = 8\pi G \,\hat{T}_{\mu\nu} \] In the quantum version, Einstein's equations become a form of the Heisenberg equations in the Heisenberg picture (Schrödinger's picture looks very complicated for gravity or other field theories) and these equations simply add hats above the metric tensor, Ricci tensor, Ricci scalar, as well as the stress-energy tensor. All these objects have to be operators. For example, the stress-energy tensor is constructed out of other operators, including the operators for the intensity of electromagnetic and other fields and/or positions of particles, so it must be an operator. If an equation relates it to something else, this something else has to be an operator as well. Think about Schrödinger's cat – or any other macroscopic physical system, for that matter. To make the thought experiment more spectacular, attach the whole Earth to the cat so if the cat dies, the whole Earth explodes and its gravitational field changes. It's clear that the values of microscopic quantities such as the decay stage of a radioactive nucleus may imprint themselves to the gravitational field around the Earth – something that may influence the Moon etc. (We may subjectively feel that we have already perceived one particular answer but a more perfect physicist has to evolve us into linear superpositions as well, in order to allow our wave function to interfere with itself and to negate the result of our perceptions. This more perfect and larger physicist will rightfully deny that in a precise calculation, it's possible to treat the wave function as a "collapsed one" at the moment right after we "feel an outcome".) 
Because the radioactive nucleus may be found in a linear superposition of dictinct states and because this state is imprinted onto the cat and the Earth, it's obvious that even the gravitational field around the (former?) Earth is generally found in a probabilistic linear superposition of different states. Consequently, the values of the metric tensors at various points have to be operators whose values may only be predicted probabilistically, much like the values of any observable in any quantum theory. Let's now take the non-relativistic, weak-gravitational-field, low-energy limit of Einstein's equations written above. In this non-relativistic limit, \(\hat g_{00}\) is the only important component of the metric tensor (the gravitational redshift) and it gets translated to the gravitational potential \(\hat \Phi\) which is clearly an operator-valued field, too. We get\[ \Delta \hat\Phi(t,\vec x) = 4\pi G \hat\rho(t,\vec x). \] It looks like the Hossenfelder version of Poisson's equation except that the gravitational potential on the left hand side has a hat; and the source \(\hat\rho\), i.e. the mass density, has replaced her \(m \abs{\Psi(t,\vec x)}^2\). Fine. There are some differences. But can I make special choices that will produce her equation out of the correct equation above? What is the mass density operator \(\hat\rho\) equal to in the case of the electron? Well, it's easy to answer this question. The mass density coming from an electron blows up at the point where the electron is located; it's zero everywhere else. Clearly, the mass density is a three-dimensional delta-function:\[ \hat\rho(t,\vec x) = m \delta^{(3)}(\hat{\vec X} - \vec x) \] Just to be sure, the arguments of the field operators such as \(\hat\rho\) – the arguments that the fields depend on – are ordinary coordinates \(\vec x\) which have no hats because they're not operators. In quantum field theories, whether they're relativistic or not, they're as independent variables as the time \(t\); after all, \((t,x,y,z)\) are mixed with each other by the relativistic Lorentz transformations which are manifest symmetries in relativistic quantum field theories. However, the equation above says that the mass density at the point \(\vec x\) blows up iff the eigenvalue of the electron's position \(\hat X\), an eigenvalue of an observable, is equal to this \(\vec x\). The equation above is an operator equation. And yes, it's possible to compute functions (including the delta-function) out of operator-valued arguments. Semiclassical gravity isn't necessarily too self-consistent an approximation. It may resemble the equally named song by Savagery above. Clearly, the operator \(\delta^{(3)}(\hat X - \vec x)\) is something different than Hossenfelder's \(\abs{\Psi(t,\vec x)}^2\) – which isn't an operator at all – so her equation isn't right. Can we obtain the squared wave function in some way? Well, you could try to take the expectation value of the last displayed equation:\[ \bra\Psi \Delta \hat\Phi(t,\vec x)\ket\Psi = 4\pi G m \abs{\Psi(t,\vec x)}^2 \] Indeed, if you compute the expectation value of the operator \(\delta^{(3)}(\hat X - \vec x)\) in the state \(\ket\Psi\), you will obtain \(\abs{\Psi(t,\vec x)}^2\). However, note that the equation above still differs from the Hossenfelder-Poisson equation: our right equation properly sandwiches the gravitational potential, which is an operator-valued field, in between the two copies of the wave functions. 
Can't you just introduce a new symbol \(\Delta\Phi\), one without any hats, for the expectation value entering the left hand side of the last equation? You may but it's just an expectation value, a number that depends on the state. The proper Schrödinger equation with the gravitational potential that we started with contains the operator \(\hat\Phi(t,\vec x)\) that is manifestly independent of the wave function (either because it is an external classical field – if we want to treat it as a deterministically evolving background field – or because it is a particular operator acting on the Hilbert space). So they're different things. At any rate, the original pair of equations is wrong. Nonlinearity in the wave function is lethal Those deluded people are obsessed with expectation values because they don't want to accept quantum mechanics. The expectation value of an operator "looks like" a classical quantity and classical quantities are the only physical quantities they have really accepted – and 19th century classical physics is the newest framework for physics that they have swallowed – so they try to deform and distort everything so that it resembles classical physics. An arbitrarily silly caricature of the reality is always preferred by them over the right equations as long as it looks more classical. But Nature obeys quantum mechanics. The observables we can see – all of them – are indeed linear operators acting on the Hilbert space. If something may be measured and seen to be equal to something or something else (this includes Yes/No questions we may answer by an experiment), then "something" is always associated with a linear operator on the Hilbert space (Yes/No questions are associated with Hermitian projection operators). If you are using a set of concepts that violate this universal postulate, then you contradict basic rules of quantum mechanics and what you say is just demonstrably wrong. This basic rule doesn't depend on any dynamical details of your would-be quantum theory and it admits no loopholes. Two pieces of the wave function don't attract each other at all You could say that one may talk about the expectation values in some contexts because they may give a fair approximation to quantum mechanics. The behavior of some systems may be close to the classical one, anyway, so why wouldn't we talk about the expectation values only? However, this approximation is only meaningful if the variations of the physical observables (encoded in the spread of the wave function) are much smaller than their characteristic values such as the (mean) distances between the particles which we want to treat as classical numbers, e.g.\[ \abs{\Delta \vec x} \ll O(\abs{\vec x_1-\vec x_2}) \] However, the very motivation that makes those confused people study the Schrödinger-Newton system of equations is that this condition isn't satisfied at all. What they typically want to achieve is to "collapse" the wave function packets. They're composed of several distant enough pieces, otherwise they wouldn't feel the need to collapse them. In their system of equations, two distant portions of the wave function attract each other in the same way as two celestial bodies do – because \(m \abs{\Psi}^2\) enters as the classical mass density to Poisson's equation for the gravitational potential. They write many papers studying whether this self-attraction of "parts of the electron" or another object may be enough to "keep the wave function compact enough". Of course, it is not enough. 
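To see how feeble this hypothetical self-attraction would be, here is a rough order-of-magnitude estimate (an illustrative sketch that takes the reading of |Ψ|² as a mass density at face value, even though, as argued here, that reading is wrong; the 1 mm separation is hypothetical): treat the two lobes of an electron's wave function as two half-electron masses a millimetre apart and ask how long their mutual Newtonian pull would need to bring them together.

```python
import math

G   = 6.67430e-11        # m^3 kg^-1 s^-2
m_e = 9.1093837015e-31   # kg
d   = 1e-3               # hypothetical separation of the two lobes, 1 mm

a = G * (0.5 * m_e) / d**2   # Newtonian acceleration of one lobe toward the other
t = math.sqrt(2 * d / a)     # time to cross the gap from rest at this acceleration

print(f"acceleration ~ {a:.1e} m/s^2")
print(f"time to fall across 1 mm ~ {t:.1e} s, roughly {t/3.15e7:.0e} years")
```

The resulting pull is about 35 orders of magnitude smaller than the everyday acceleration g, which is the quantitative content of the statement that gravity cannot do the job these papers assign to it.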
The gravitational force is extremely weak and cannot play such an essential role in the experiments with elementary particles. In Ghirardi-Rimini-Weber: collapsed pseudoscience, I have described somewhat more sophisticated "collapse theories" that are trying to achieve a similar outcome: to misinterpret the wave function as a "classical object" and to prevent it from spreading. Of course, these theories cannot work, either. To keep these wave functions compact enough, they have to introduce kicks that are so large that we are sure that they don't exist. You simply cannot find any classical model that agrees with observations in which the wave function is a classical object – simply because the wave function isn't a classical object and this fact is really an experimentally proven one as you know if you think a little bit. But what the people studying the Schrödinger-Newton system of equations do is even much more stupid than what the GRW folks attempted. It is internally inconsistent already at the mathematical level. You don't have to think about some sophisticated experiments to verify whether these equations are viable. They can be safely ruled out by pure thought because they predict things that are manifestly wrong. I have already said that the Hossenfelder-Poisson equation for the gravitational potential treats the squared wave function as if it were a mass density. If your wave function is composed of two major pieces in two regions, they will behave as two clouds of interplanetary gas and these two clouds will attract because each of them influences the gravitational potential that influences the motion of the other cloud, too. However, this attraction between two "pieces" of a wave function definitely doesn't exist, in a sharp contrast with the immensely dumb opinion held by pretty much every "alternative" kibitzer about quantum mechanics i.e. everyone who has ever offered any musings that something is fundamentally wrong with the proper Copenhagen quantum mechanics. There would only be an attraction if the matter (electron) existed at both places because the attraction is proportional to \(M_1 M_2\). However, one may easily show that the counterpart of \(M_1M_2\) is zero: the matter is never at both places at the same time. Imagine that the wave function has the form\[ \ket\psi = 0.6\ket \phi+ 0.8 i \ket \chi \] where the states \(\ket\phi\) and \(\ket\chi\) are supported by very distant regions. As you know, this state vector implies that the particle has 36% odds to be in the "phi" region and 64% odds to be in the "chi" region. I chose probabilities that are nicely rational, exploiting the famous 3-4-5 Pythagorean triangle, but there's another reason why I didn't pick the odds to be 50% and 50%: there is absolutely nothing special about wave functions that predict exactly the same odds for two different outcomes. The number 50 is just a random number in between \(0\) and \(100\) and it only becomes special if there is an exact symmetry between \(p\) and \((1-p)\) which is usually not the case. Much of the self-delusion by the "many worlds" proponents is based on the misconception that predictions with equal odds for various outcomes are special or "canonical". They're not. Fine. So if we have the wave function \(\ket\psi\) above, do the two parts of the wave function attract each other? The answer is a resounding No. The basic fact about quantum mechanics that all these Schrödinger-Newton and many-worlds and other pseudoscientists misunderstand is the following point. 
The wave function above doesn't mean that there is 36% of an object here AND 64% of an object there. (WRONG.) Note that there is "AND" in the sentence above, indicating the existence of two objects. Instead, the right interpretation is that the particle is here (36% odds) OR there (64% odds). (RIGHT.) The correct word is "OR", not "AND"! However, unlike in classical physics, you're not allowed to assume that one of the possibilities is "objectively true" in the classical sense even if the position isn't measured. On the other hand, even in quantum mechanics, it's still possible to strictly prove that the particle isn't found at both places simultaneously; the state vector is an eigenstate of the "both places" projection operator (product of two projection operators) with the eigenvalue zero. (The same comments apply to two slits in a double-slit experiment.) The mutually orthogonal terms contributing to the wave function or density matrix aren't multiple objects that simultaneously exist, as the word "AND" would indicate. You would need (tensor) products of Hilbert spaces and/or wave functions, not sums, to describe multiple objects! Instead, they are mutually excluding alternatives for what may exist, alternative properties that one physical system (e.g. one electron) may have. And mutually excluding alternatives simply cannot interact with each other, gravitationally or otherwise. Imagine you throw dice. The result may be "1" or "2" or "3" or "4" or "5" or "6". But you know that only one answer is right. There can't be any interaction that would say that because both "1" and "6" may occur, they attract each other which is why you probably get "3" or "4" in the middle. It's nonsense because "1" and "6" are never objects that simultaneously exist. If they don't simultaneously exist, they can't attract each other, whatever the rules are. They can't interact with one another at all! While the expectation value of the electron's position may be "somewhere in between" the regions "phi" and "chi", we may use the wave function to prove with absolute certainty that the electron isn't in between. The proponents of the "many-worlds interpretation" often commit the same trivial mistake. They are imagining that two copies of you co-exist at the same moment – in some larger "multiverse". That's why they often talk about one copy's thinking how the other copy is feeling in another part of a multiverse. But the other copy can't be feeling anything at all because it doesn't exist if you do! You and your copy are mutually excluding. If you wanted to describe two people, you would need a larger Hilbert space (a tensor product of two copies of the space for one person) and if you produced two people out of one, the evolution of the wave function would be quadratic i.e. nonlinear which would conflict with quantum mechanics (and its no-xerox theorem), too. These many-worlds apologists, including Brian Greene, often like to say (see e.g. The Hidden Reality) that the proper Copenhagen interpretation doesn't allow us to treat macroscopic objects by the very same rules of quantum mechanics with which the microscopic objects are treated and that's why they promote the many worlds. This proposition is what I call chutzpah. In reality, the claim that right after the measurement by one person, there suddenly exist several people is in a striking contradiction with facts that may be easily extracted from quantum mechanics applied to a system of people. 
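A minimal numerical illustration of the "OR, not AND" point (a toy two-dimensional Hilbert space; the 0.6 and 0.8i amplitudes are the ones used in the text): the probabilities come out as 36% and 64%, and the expectation value of the "both places" projector, the product of the two orthogonal projectors, is exactly zero.

```python
import numpy as np

phi = np.array([1.0, 0.0], dtype=complex)   # "particle is in region phi"
chi = np.array([0.0, 1.0], dtype=complex)   # "particle is in region chi"
psi = 0.6 * phi + 0.8j * chi                # the state vector discussed above

P_phi = np.outer(phi, phi.conj())           # projector onto region phi
P_chi = np.outer(chi, chi.conj())           # projector onto region chi

print("P(phi)  =", np.vdot(psi, P_phi @ psi).real)            # 0.36
print("P(chi)  =", np.vdot(psi, P_chi @ psi).real)            # 0.64
print("P(both) =", np.vdot(psi, (P_phi @ P_chi) @ psi).real)  # 0.0 -- mutually exclusive
```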
The quantum mechanical laws – laws meticulously followed by the Copenhagen school, regardless of the size and context – still imply that the total mass is conserved, at least at a 1-kilogram precision, so it is simply impossible for one person to evolve into two. It's impossible because of the very same laws of quantum mechanics that, among many other things, protect Nature against the violation of charge conservation in nuclear processes. It's them, the many-worlds apologists, who are totally denying the validity of the laws of quantum mechanics for the macroscopic objects. In reality, quantum mechanics holds for all systems and for macroscopic objects, one may prove that classical physics is often a valid approximation, as the founding fathers of quantum mechanics knew and explicitly said. The validity of this approximation, as they also knew, is also a necessary condition for us to be able to make any "strict valid statements" of the classical type. The condition is hugely violated by interfering quantum microscopic (but, in principle, also large) objects before they are measured so one can't talk about the state of the system before the measurement in any classical language. In Nature, all observables (as well as the S-matrix and other evolution operators) are expressed by linear operators acting on the Hilbert space and Schrödinger's equation describing the evolution of any physical system has to be linear, too. Even if you use the density matrix, it evolves according to the "mixed Schrödinger equation" which is also linear:\[ i\hbar \ddfrac{}{t}\hat\rho = [\hat H(t),\hat \rho(t)]. \] It's extremely important that the density matrix \(\hat \rho\) enters linearly because \(\hat \rho\) is the quantum mechanical representation of the probability distribution, even the initial one. And the probabilities of final states are always linear combinations of the probabilities of the initial states. This claim follows from pure logic and will hold in any physical system, regardless of its laws. Why? Classically, the probabilities of final states \(P({\rm final}_j)\) are always given by\[ P({\rm final}_j) = \sum_{i=1}^N P({\rm initial}_i) P({\rm evolution}_{i\to j}) \] whose right hand side is linear in the probabilities of the initial states and the left hand side is linear in the probabilities of the final states. Regardless of the system, these dependences are simply linear. Quantum mechanics generalizes the probability distributions to the density matrices which admit states arising from superpositions (by having off-diagonal elements) and which are compatible with the non-zero commutators between generic observables. However, whenever your knowledge about a system may be described classically, the equation above strictly holds. It is pure maths; it is as questionable or unquestionable (make your guess) as \(2+2=4\). There isn't any "alternative probability calculus" in which the final probabilities would depend on the initial probabilities nonlinearly. If you carefully study the possible consistent algorithms to calculate the probabilities of various final outcomes or observations, you will find out that it is indeed the case that the quantum mechanical evolution still has to be linear in the density matrix. The Hossenfelder-Poisson equation fails to obey this condition so it violates totally basic rules of the probability calculus. 
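The linearity statement can be illustrated with a purely classical two-state example (a sketch added here, not from the post): whatever the dynamics, the map from initial to final probabilities is a fixed stochastic matrix, so evolving a mixture gives the same answer as mixing the evolved distributions.

```python
import numpy as np

# P(evolution i -> j): column i holds the transition probabilities out of initial state i.
T = np.array([[0.9, 0.2],
              [0.1, 0.8]])

p1 = np.array([1.0, 0.0])
p2 = np.array([0.0, 1.0])
p_mix = 0.3 * p1 + 0.7 * p2             # mixed initial probability distribution

print(T @ p_mix)                         # evolve the mixture...
print(0.3 * (T @ p1) + 0.7 * (T @ p2))   # ...equals the mixture of the evolved distributions
```

Any nonlinear dependence on the initial probabilities, which is exactly what treating |Ψ|² as a gravitational source introduces, would break this elementary identity.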
Just to connect the density matrix discussion with a more widespread formalism, let us mention that quantum mechanics allows you to decompose any density matrix into a sum of terms arising from pure states,\[ \hat\rho = \sum_{k=1}^M p_k \ket{\psi_k}\bra{\psi_k} \] and it may study the individual terms, pure states, independently of others. When we do so, and we often do, we find out that the evolution of \(\ket\psi\), the pure states, has to be linear as well. The linear maps \(\ket\psi\to U\ket\psi\) produce \(\hat\rho\to U\hat\rho U^\dagger\) for \(\hat\rho=\ket\psi\bra\psi\) which is still linear in the density matrix, as required.

If you had a more general, nonlinear evolution – or if you represented observables by non-linear operators etc. – then these nonlinear rules for the wave function would get translated to nonlinear rules for the density matrix as well. And nonlinear rules for the density matrix would contradict some completely basic "linear" rules for probabilities that are completely independent of any properties of the laws of physics, such as\[ P(A\text{ or }B) = P(A)+P(B) - P(A\text{ and }B). \] So the linearity of the evolution equations in the density matrix (and, consequently, also the linearity in the state vector, which is a precursor of the density matrix) is totally necessary for the internal consistency of a theory that predicts probabilities, whatever the internal rules that yield these probabilistic predictions are!

That's why two pieces of the wave function (or the density matrix) can never attract each other or otherwise interact with each other. As long as they're orthogonal, they're mutually exclusive possibilities of what may happen. They can never be interpreted as objects that simultaneously exist at the same moment. The product of their probabilities (and anything that depends on its being nontrivial) is zero because at least one of them equals zero. And the wave functions and density matrix cannot be interpreted as classical objects because it's been proven, by the most rudimentary experiments, that these objects are probability distributions or their precursors rather than observables. These statements depend on no open questions at the cutting edge of the modern physics research; they're parts of the elementary undergraduate material that has been understood by active physicists since the mid-1920s.

It now trivially follows that all the people who study Schrödinger-Newton equations are profoundly deluded, moronic crackpots. And that's the memo.

Single mom: totally off-topic

Totally off-topic. I had to click somewhere, not sure where (correction: e-mail tip from Tudor C.), and I was led to this "news article"; click to zoom in. Single mom Amy Livingston of Plzeň, 87, is making $14,000 a month. That's not bad. First of all, not every girl manages to become a mom at the age of 87. Second of all, it is impressive for a mom with such a name – who probably doesn't speak Czech at all – to survive in my hometown at all. Her having 12 times the average salary makes her achievements even more impressive. ;-)

snail feedback (3):

reader Ervin Goldfain said...

Your points are well taken: the Schrodinger-Newton equation is fundamentally flawed. Expanding on these issues, I'd like to know your views on the validity of: 1) the WKB approximation, 2) semiclassical gravity, 3) quantum chaos and quantization of classically chaotic dynamical systems?

reader Luboš Motl said...

Dear Ervin, thanks for listening.
All the entries in your list are obviously legitimate and interesting approximations (1, 2) or topics that may be studied (3). That doesn't mean that all people say correct things about them and use them properly, of course. ;-)

The WKB approximation is just the "leading correction coming from quantum mechanics" to classical physics. Various simplified Ansätze may be written down in various contexts.

Semiclassical gravity either refers to general relativity with the first (one-loop) quantum corrections; or it represents the co-existence of quantized matter fields with non-quantized gravitational fields. This is only legitimate if the gravitational fields aren't affected by the matter fields - if the spacetime geometry solves the classical Einstein equations with sources that don't depend on the microscopic details of the matter fields and particles which are studied in the quantum framework. The matter fields propagate on a fixed classical background in this approximation but they don't affect the background by their detailed microstates. Indeed, if the dependence of the gravitational fields on the properties of the matter fields is substantial or important, there's no way to use the semiclassical approximation. Some people would evolve the gravitational fields according to the expectation values of the stress-energy tensor but that's the same mistake as discussed in this article in the context of the Poisson-Hossenfelder equation.

Classical systems may be chaotic - showing unpredictable behavior that is very sensitive to initial conditions. Quantum chaos is the study of the complicated wave functions etc. in systems that are the quantum ("hatted") counterparts of classically chaotic systems.

reader Ervin Goldfain said...

Thanks Lubos. I also take classical approximations with a grain of salt. For instance, mixing classical gravity with quantum behavior is almost always questionable one way or another. Here is a follow-up question. What would you say if experiments on carefully prepared quantum systems could be carried out in highly accelerated frames of reference? Could this be a reliable way of falsifying predictions of semiclassical gravity, for example?
PIRSA:C09016 - Reconstructing Quantum Theory - 2009
Reconstructing Quantum Theory
Organizer(s): Philip Goyal, Lucien Hardy
Collection URL: http://pirsa.org/C09016

Quantum Mechanics as a Theory of Systems with Limited Information Content
Speaker(s): Caslav Brukner
Abstract: I will consider physical theories which describe systems with limited information content. This limit is not due to the observer's ignorance about some “hidden” properties of the system - the view that would have to be confronted with Bell's theorem - but is of fundamental nature. I wil...
Date: 09/08/2009 - 11:00 am

The quantum logical reconstruction from Rovelli's axioms and its limits
Speaker(s): Alexei Grinbaum
Abstract: What belongs to quantum theory is no more than what is needed for its derivation. Keeping to this maxim, we record a paradigmatic shift in the foundations of quantum mechanics, where the focus has recently moved from interpreting to reconstructing quantum theory. We present a quantum logical derivat...
Date: 09/08/2009 - 2:30 pm

Quantum Theory from Entropic Inference
Speaker(s): Ariel Caticha
Abstract: Non-relativistic quantum theory is derived from information codified into an appropriate statistical model. The basic assumption is that there is an irreducible uncertainty in the location of particles so that the configuration space is a statistical manifold with a natural information metric. The d...
Date: 09/08/2009 - 4:30 pm

Exact uncertainty, quantum mechanics and beyond
Speaker(s): Michael Hall
Abstract: The fact that quantum mechanics admits exact uncertainty relations is used to motivate an ‘exact uncertainty’ approach to obtaining the Schrödinger equation. In this approach it is assumed that an ensemble of classical particles is subject to momentum fluctuations, with ...
Date: 10/08/2009 - 9:00 am

Exact uncertainty, bosonic fields, and interacting classical-quantum systems
Speaker(s): Marcel Reginatto
Abstract: The quantum equations for bosonic fields may be derived using an 'exact uncertainty' approach [1]. This method of quantization can be applied to fields with Hamiltonian functionals that are quadratic in the momentum density, such as the electromagnetic and gravitational fields. The approach, when ap...
Date: 10/08/2009 - 11:00 am

The power of epistemic restrictions in reconstructing quantum theory
Speaker(s): Robert Spekkens
Abstract: A significant part of quantum theory can be obtained from a single innovation relative to classical theories, namely, that there is a fundamental restriction on the sorts of statistical distributions over classical states that can be prepared. (Such a restriction is termed “epistem...
Date: 10/08/2009 - 4:30 pm

Steps Towards a Unified Basis
Speaker(s): Inge Helland
Abstract: A new foundation of quantum mechanics for systems symmetric under a compact symmetry group is proposed. This is given by a link to classical statistics and coupled to the concept of a statistical parameter. A vector phi of parameters is called an inaccessible c-variable if experiments can be provide...
Date: 11/08/2009 - 9:00 am

Why Quantum Theory is Complex
Speaker(s): Philip Goyal
Abstract: Complex numbers are an intrinsic part of the mathematical formalism of quantum theory, and are perhaps its most mysterious feature.
We show that it is possible to derive the complex nature of the quantum formalism directly from the assumption that a pair of real numbers is associated to each sequenc...
Date: 11/08/2009 - 11:00 am

Quantum Mechanics as a Real-Vector-Space Theory with a Universal Auxiliary Rebit
Speaker(s): William Wootters
Abstract: In a 1960 paper, E. C. G. Stueckelberg showed how one can obtain the familiar complex-vector-space structure of quantum mechanics by starting with a real-vector-space theory and imposing a superselection rule. In this talk I interpret Stueckelberg’s construction in terms of a single auxili...
Date: 11/08/2009 - 2:30 pm

Reconstructing quantum theory from Brownian motion: an assessment
Speaker(s): Lee Smolin
Abstract: TBA
Date: 11/08/2009 - 4:30 pm
Viewpoint: Weyl electrons kiss
Leon Balents, Kavli Institute for Theoretical Physics, University of California, Santa Barbara, CA 93106, USA
Physics 4, 36

Figure 1: Schematic image of the structure of the Weyl semimetal in momentum space. Two diabolical points are shown in red, within the bulk three-dimensional Brillouin zone. The excitations near each diabolical point behave like Weyl fermions. Each point is a source or sink of the flux (i.e., a monopole in momentum space) of the U(1) Berry connection, defined from the Bloch wave functions, as indicated by the blue arrows. The grey plane indicates the surface Brillouin zone, which is a projection of the bulk one. Wan et al. show that an odd number of surface Fermi arcs terminate at the projection of each diabolical point, as drawn here in yellow. In the iridium pyrochlores studied in the paper, a nonschematic picture would be significantly more complex, as there are 24 diabolical points rather than 2.

Topology, the mathematical description of the robustness of form, appears throughout physics, and provides strong constraints on many physical systems. It has long been known that it plays a key role in understanding the exotic phenomena of the quantum Hall effect. Recently, it has been found to generate robust and interesting bulk and surface phenomena in “ordinary” band insulators described by the old Bloch theory of solids. Such “topological insulators,” insulating in the bulk and metallic on the surface, occur in the presence of strong spin-orbit coupling in certain crystals, with unbroken time-reversal symmetry [1]. It is usually believed that such topological physics is obliterated in materials where magnetic ordering breaks time-reversal symmetry. This is by far the most common fate for transition-metal compounds that manage to be insulators—so called “Mott insulators,” which owe their lack of conduction to the strong Coulomb repulsion between electrons.

In an article appearing in Physical Review B, Xiangang Wan from Nanjing University, China, and collaborators from the University of California and the Lawrence Berkeley National Laboratory, US, show that this is not necessarily the case, and describe a remarkable electronic structure with topological aspects that is unique to such (antiferro-)magnetic materials [2]. The state they describe is remarkable in possessing interesting low-energy electron states in the bulk and at the surface, linked by topology. In contrast, topological insulators, like quantum Hall states, possess low-energy electronic states only at the surface.

The theory of Wan et al., which uses the LDA+U numerical method, is a type of mean field theory. As such, the low-energy quasiparticle excitations are described simply by noninteracting electrons in a background electrostatic potential and, in the case of a magnetically ordered phase, by a spatially periodic exchange field. It is possible to follow the evolution of the electronic states as a function of the U parameter, which is used to model the strength of Coulomb correlations. They apply the technique to iridium pyrochlores, R2Ir2O7, where R is a rare earth element. These materials are known to exhibit metal-insulator transitions (see, e.g., Ref.
[3]), indicating substantial correlations, and are characterized by strong spin-orbit coupling due to the heavy element Ir (iridium). In the intermediate range of U, which they suggest is relevant for these compounds, Wan et al. find an antiferromagnetic ground state with the band structure of a “zero-gap semimetal,” in which the conduction and valence bands “kiss” at a discrete number (24!) of momenta. The dispersion of the bands approaching each touching point is linear, reminiscent of massless Dirac fermions such as those observed in graphene. This would be interesting in itself, but there are important differences from graphene. Because of the antiferromagnetism, time-reversal symmetry is broken, and as a consequence, despite the centrosymmetric nature of the crystals in question, the bands are nondegenerate. Thus two—and only two—states are degenerate at each touching point, unlike in graphene where there are four.

In fact, the kissing bands found by Wan et al. are an example of accidental degeneracy in quantum mechanics, a subject discussed in the early days of quantum theory by von Neumann and Wigner (1929), and applied to band theory by Herring (1937). The phenomenon of level repulsion in quantum mechanics tends to prevent such band crossings. To force two levels to be degenerate, one must consider the 2×2 Hamiltonian matrix projected into this subspace: not only must the two diagonal elements be made equal, the two off-diagonal elements must be made to vanish. This requires three real parameters to be tuned to achieve degeneracy. Thus, without additional symmetry constraints, such accidental degeneracies are vanishingly improbable in one and two dimensions, but can occur as isolated points in momentum space in three dimensions (the three components of the momentum serving as tuning parameters). An accidental touching of this type is called a diabolical point. The 2×2 matrix Schrödinger equation in the vicinity of this point is mathematically similar to a two-component Dirac-like one, known as the Weyl equation. Thus the low-energy electrons in this state behave like Weyl fermions. A property of such a diabolical point is that it cannot be removed by any small perturbation, but may only disappear by annihilation with another diabolical point.

Actually such diabolical points were suggested previously in a very similar context by Murakami [4], who argued that such a semimetallic state would naturally arise as an intermediate between a topological insulator and a normal band insulator. That theory does not directly apply here, since Murakami assumed time-reversal symmetry, which is broken in Wan et al.’s calculations. However, inversion symmetry plays a similar role, and indeed the latter authors find that the Weyl semimetal is intermediate between an “ordinary” Mott insulator and an “axion insulator,” somewhat analogous to the topological insulator. The axion insulator has a quantized magnetoelectric response identical to that of a (strong) topological insulator, but lacks the protected surface states of the topological insulator.

What is really new and striking about the recent paper are the implications for surface states. Remarkably, they find that certain surfaces (e.g., <110> and <111> faces) have bound states at the Fermi energy, and that these states do not form the usual closed Fermi surfaces found in 2d or 3d metals. Instead, the states at the Fermi energy form open “arcs,” terminating at the projection of diabolical points onto the surface Brillouin zone (see Fig. 1).
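To put numbers on the two-band argument above (a small sketch of my own, not from the Viewpoint; the velocity and perturbation values are arbitrary), one can diagonalize the 2×2 Weyl Hamiltonian H = v k·σ: the dispersion is linear, E = ±v|k|, and a generic small perturbation of the form b·σ does not open a gap but merely shifts the diabolical point to k = −b.

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def weyl_hamiltonian(k, v=1.0, b=(0.0, 0.0, 0.0)):
    """2x2 Hamiltonian H = v (k + b) . sigma near a diabolical point.
    A small perturbation b . sigma is absorbed into a shift of the momentum."""
    q = np.add(k, b)
    return v * (q[0] * sx + q[1] * sy + q[2] * sz)

# Linear dispersion: the eigenvalues are +/- v |k|.
for k in [(0.1, 0.0, 0.0), (0.0, 0.2, 0.0), (0.1, 0.1, 0.1)]:
    E = np.linalg.eigvalsh(weyl_hamiltonian(k))
    print(k, E, "expected +/-", np.linalg.norm(k))

# A generic perturbation b . sigma does not gap the node; it only moves it to k = -b.
b = (0.03, -0.02, 0.05)
E_node = np.linalg.eigvalsh(weyl_hamiltonian((-b[0], -b[1], -b[2]), b=b))
print(np.allclose(E_node, 0.0))   # True: the bands still touch
```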
Fermi arcs have appeared before in physics in experimental studies of high-temperature cuprate superconductors. However, in that context they are mysterious and puzzling, because they would seem to be prohibited by topology. The Fermi surface is by definition the boundary between occupied and unoccupied states, and if it terminates then one could go “around” the termination point and smoothly change from one to another, which is impossible at zero temperature. At a surface, this paradox is avoided because the surface states may unbind into the bulk when going around the end of a Fermi arc.

This theoretical proposal provides plenty of motivation for future experiments. Observation of the Fermi arcs would be striking, and provide a useful metric to gauge the empirical ones seen in the cuprates. The bulk Weyl fermions are also interesting and would be exciting to try to observe in transport. Simple phase-space considerations suggest that the low-energy electrons should be remarkably resistant to scattering by impurities. Most of this theory could apply to many other materials—the only necessary conditions are significant spin-orbit coupling and antiferromagnetic order preserving inversion symmetry (and the latter is only essential for some of the physics). More generally, the work widens the range of unusual states of matter that have been proposed to arise in the regime of strong spin-orbit coupling and intermediate correlation. Despite the uncommon aspects described above, Wan et al.'s work is a mean-field theory, and yet more exotic possibilities have been suggested that are not describable in this way (e.g., Ref. [5]). Hopefully this is just the beginning of the theoretical and experimental exploration of this fascinating regime.

2. X. Wan, A. M. Turner, A. Vishwanath, and S. Y. Savrasov, Phys. Rev. B 83, 205101 (2011)
3. K. Matsuhira et al., J. Phys. Soc. Jpn. 76, 043706 (2007)
4. S. Murakami, New J. Phys. 9, 356 (2007)
5. D. A. Pesin and L. Balents, Nature Phys. 6, 376 (2010)
Group Theory and its Application to Chemistry

Group theory is the mathematical application of symmetry to an object to obtain knowledge of its physical properties. What group theory brings to the table is how the symmetry of a molecule is related to its physical properties, and it provides a quick, simple method to determine the relevant physical information about the molecule. The symmetry of a molecule tells you what the energy levels of the orbitals will be, what the orbital symmetries are, what transitions can occur between energy levels, and even the bond order, to name a few, all without rigorous calculations. The fact that so many important physical aspects can be derived from symmetry is a very profound statement, and this is what makes group theory so powerful. To fully understand the math behind group theory, one needs to take a look at the theory portion of the Group Theory topic or refer to one of the reference texts listed at the bottom of the page. Nevertheless, as chemists, the object we are examining is usually a molecule. Though we live in the 21st century and much is known about the physical aspects that give rise to molecular and atomic properties, the number of high-level calculations that need to be performed can be both time consuming and tedious. To most experimentalists this task takes away time and is usually not the integral part of their work.

When one thinks of group theory applications, one doesn't necessarily associate it with everyday life or a simple toy like a Rubik's cube. A Rubik's cube is a cube that has a \(3 \times 3\) array of different colored tiles on each of its six surfaces, for a total of 54 tiles. Since the cube exists in 3D space, the three axes are \(x\), \(y\), \(z\). Since the Rubik's cube only allows rotations, which are called operations, there are three such operations around each of the \(x\), \(y\), \(z\) axes.

Figure: Rubik's cube. Used with permission from Wikipedia.

Of course, the ultimate challenge of a Rubik's cube is to place all six colors on each of the six faces. By performing a series of such operations on the Rubik's cube one can arrive at a solution (a link of a person solving a Rubik's cube1 in 10.4 s, with the operations performed noted; the operations performed will not translate to chemistry applications, but it is a good example of how symmetry operations arrive at a solution). The operations shown in the Rubik's cube case are inherent to the make-up of the cube, i.e., the only operations allowed are the rotations about the x, y, z axes. Therefore the Rubik's cube only has x, y, z rotation operations. Similarly, the operations that are specific to a molecule are dependent on its symmetry. These operations are given in the top row of the character table.

| \(C_{3v}\) | \(E\) | \(2C_3\) | \(3\sigma_v\) | rotations and translations | quadratic functions |
|---|---|---|---|---|---|
| \(A_1\) | +1 | +1 | +1 | \(z\) | \(x^2+y^2\), \(z^2\) |
| \(A_2\) | +1 | +1 | -1 | \(R_z\) | - |
| \(E\) | +2 | -1 | 0 | (\(x\), \(y\)), (\(R_x\), \(R_y\)) | (\(x^2-y^2\), \(xy\)), (\(xz\), \(yz\)) |

The character table contains a wealth of information; a more detailed discussion of the character table can be found in the Group Theory theoretical portion of the ChemWiki. All operations in the character table are contained in the first row of the character table, in this case \(E\), \(C_3\), & \(\sigma_v\); these are all of the operations that can be performed on the molecule that return the original structure.
The first column contains the three irreducible representations, from now on denoted as \(\Gamma_{ir}\); here they are \(A_1\), \(A_2\) & \(E\). The value of the \(\Gamma_{ir}\) denotes what the operation does: a value of 1 represents no change, -1 an opposite change, and 0 is a combination of 1 & -1 (0's are found in degenerate molecules). The final two columns, rotation and translation, are represented by \(R_x\), \(R_y\), \(R_z\) & \(x\), \(y\), \(z\) respectively, where the R's refer to rotation about an axis and the \(x\), \(y\), \(z\) refer to a translation along an axis; the \(\Gamma_{ir}\) of each \(R_x\), \(R_y\), \(R_z\) & \(x\), \(y\), \(z\) term is the irreducible symmetry of a rotation or translation operation. Likewise the final column, the orbital symmetries, relates the orbital wavefunction to an irreducible representation.

Direct Products

This is a quick rule to follow for calculating direct products of irreducible representations; such a calculation will be necessary for working through transition moment integrals. Following the basic rules given by the table below, one can easily work through symmetry calculations very quickly.

"Symmetric" \(\times\) "Symmetric" is "Symmetric"; "Symmetric" \(\times\) "AntiSymmetric" is "AntiSymmetric"; "AntiSymmetric" \(\times\) "Symmetric" is "AntiSymmetric"; "AntiSymmetric" \(\times\) "AntiSymmetric" is "Symmetric"
\(g \times g = g\), \(g \times u = u\), \(u \times g = u\), \(u \times u = g\)
\( ' \times ' = '\), \( ' \times '' = ''\), \( '' \times ' = ''\), \( '' \times '' = '\)
\(A \times A = A\), \(A \times B = B\), \(B \times A = B\), \(B \times B = A\)

All molecules vibrate. While these vibrations can originate from several events, which will be covered later, the most basic of these occurs when an electron is excited within the electronic state from one eigenstate to another. The Morse potential (electronic state) describes the energy of the eigenstate as a function of the interatomic distance. When an electron is excited from one eigenstate to another within the electronic state there is a change in interatomic distance; this results in a vibration occurring. Vibrational energies arise from the absorption of polarizing radiation. Each vibrational state is assigned a \(\Gamma_{ir}\). A vibration occurs when an electron remains within the electronic state but changes from one eigenstate to another (the vibrations for the moment are only IR active vibrations; there are also Raman vibrations, which will be discussed later in electronic spectroscopy). In the case of the Morse diagram above the eigenstates are denoted as \(\nu\). As you can see from the diagram the eigenstate is a function of energy versus interatomic distance.

To predict whether or not a vibrational transition, or for that matter a transition of any kind, will occur we use the transition moment integral. \[\int \Psi_i^*\mu \Psi_f d\tau=\langle \Psi_i | \mu| \Psi_f \rangle\] The transition moment integral is written here in standard integral format, but this is equivalent to the Bra & Ket format which is standard in most chemistry quantum mechanical texts (the \(\langle \Psi_i |\) is the Bra portion, \(| \Psi_f \rangle\) is the Ket portion). The transition moment operator \(\mu\) is the operator that couples the initial state \(\Psi_i\) to the final state \(\Psi_f\), which is derived from the time-independent Schrödinger equation. However, using group theory we can ignore the detailed mathematical methods.
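As a small, concrete illustration of this bookkeeping (a sketch of my own, not part of the original page; it only encodes the \(C_{3v}\) character table shown above), the characters of a direct product are simply the products of the characters, and the standard reduction formula tells you how many times the totally symmetric representation \(A_1\) appears:

```python
import numpy as np

# C3v character table from above; classes are E, 2C3, 3sigma_v (group order 6).
class_sizes = np.array([1, 2, 3])
order = class_sizes.sum()
chars = {
    "A1": np.array([1,  1,  1]),
    "A2": np.array([1,  1, -1]),
    "E":  np.array([2, -1,  0]),
}

def direct_product(*irreps):
    """Characters of a direct product: multiply the characters class by class."""
    chi = np.ones(3)
    for name in irreps:
        chi = chi * chars[name]
    return chi

def contains_A1(chi):
    """Reduction formula: number of times the totally symmetric irrep appears."""
    return int(round(np.sum(class_sizes * chi * chars["A1"]) / order))

# In C3v, z transforms as A1 and (x, y) transform as E (see the table above).
print(contains_A1(direct_product("A1", "A1", "E")))   # 0: a z-polarized A1 -> E transition is forbidden
print(contains_A1(direct_product("A1", "E",  "E")))   # 1: an (x, y)-polarized A1 -> E transition is allowed
```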
We can use the \(\Gamma_{ir}\) of the vibrational energy levels and the symmetry of the transition moment operator to find out if the transition is allowed by the selection rules. The selection rule for vibrations, or for any transition, is that for it to be allowed by group theory the answer must contain the totally symmetric \(\Gamma_{ir}\), which is always the first \(\Gamma_{ir}\) in the character table for the molecule in question.

Let's work through an example: ammonia (\(NH_3\)) with \(C_{3v}\) symmetry. Consequently, all of the properties contained in the \(C_{3v}\) character table above are pertinent to the ammonia molecule. The principal axis is the axis about which the highest-order rotation can be performed. In this case the z-axis passes through the lone pair (pink sphere in the figure), which contains a \(C_3\) axis. The \(\sigma\)'s are the mirror planes (\(\sigma_v\) parallel to the z-axis & \(\sigma_h\) perpendicular to the z-axis). In ammonia there is no \(\sigma_h\), only three \(\sigma_v\)'s. The combination of \(C_3\) & \(\sigma_v\) leads to the \(C_{3v}\) point group, which leads to the \(C_{3v}\) character table.

The number of vibrations is dictated by 3N-6 for non-linear molecules and 3N-5 for linear molecules, where N is the number of atoms. The 6 & the 5 derive from three translations along the x, y, z axes and three rotations about the x, y, z axes; a linear molecule only has two rotations, about the x & y axes, since the z-axis is an infinite-order rotation axis, which leads to only 5 degrees of freedom in the rotation and translation operations. In the case of ammonia there will be 3(4)-6 = 6 vibrational transitions. This can be confirmed by working through the vibrations of the molecule; this work is shown in the table below.

[Table: reduction of the \(C_{3v}\) vibrational representation of ammonia]

The vibrations that are yielded are 2A1 & 2E (where E is doubly degenerate, meaning two vibrational modes each), which totals 6 vibrations. This calculation was done by using the character table to find out the rotation and translation values and what atoms move during each operation. Using the character table we can characterize the A1 vibrations as IR active along the z-axis and Raman active as well. The E vibrations are IR active along both the x & y axes and are Raman active as well. From the character table the IR symmetries correspond to the x, y & z translations, while the Raman active vibrations correspond to the symmetries of the d-orbitals.

Vibrational Spectroscopy

Infrared Spectroscopy

Infrared spectroscopy (IR) measures the vibrations that occur within a single electronic state, such as the one shown above. Because the transition occurs within a single electronic state there is a variation in interatomic distance. The dipole moment is dictated by the equation \[ \vec{\mu} = \alpha\vec{E} \] where \( \vec{\mu} \) is the magnitude of the dipole moment, \( \alpha \) is the polarizability constant (actually a tensor) & \( E \) is the magnitude of the electric field, which can be described as the electronegativity.3 Therefore when a vibration occurs within a single electronic state there is a change in the dipole moment, which is the definition of an active IR transition: \[ \left ( \frac{\mathrm{d\mu} }{\mathrm{d} q} \right )_{eq} \neq 0 \] In terms of group theory a change in the dipole is a change from one vibrational state to another, as shown by the equation above. A picture of the vibrational states with respect to the rotational states and electronic states is given below.
In IR spectroscopy the transition occurs only from one vibrational state to another, all within the same electronic state, shown below as B. Where group theory comes into play is in determining whether or not this transition is allowed by symmetry. This can be determined by the transition moment integral described above. For example, consider a transition from the ground vibrational state, which is always totally symmetric, in this example A1g, to an excited vibrational state, B2u. The possible symmetries for the transition moment operator are A1g, B2u, B2g for the x, y, z transitions respectively (one obtains the transition moment operator from the character table, from the \(\Gamma_{ir}\) of the x, y, z translations):

\(\langle A_{1g} | M | B_{2u} \rangle\), with M = A1g, B2u or B2g.

From the direct product rules one can work through each of the transition moment operators and see if the answer contains the totally symmetric \(\Gamma_{ir}\). The first direct product gives A1g × A1g × B2u = B2u, so this transition is not x polarized. The y polarized transition moment operator gives A1g × B2u × B2u = A1g, so this transition is allowed by symmetry. The final polarization, z, gives A1g × B2g × B2u = A1u, so this transition is also not allowed by symmetry. So this IR transition is allowed by y polarized light in this molecule.

Electronic Transitions

When an electron is excited from one electronic state to another, this is what is called an electronic transition. A clear example of this is part C in the energy level diagram shown above. Just as in a vibrational transition, the selection rules for electronic transitions are dictated by the transition moment integral. However we now must consider both the electronic state symmetries and the vibrational state symmetries, since the electron will still be coupled between two vibrational states that belong to two electronic states. This gives us this modified transition moment integral: \[\langle \Psi_{i,e} \Psi_{i,v} | \mu | \Psi_{f,e} \Psi_{f,v} \rangle\] where you can see that the symmetries of the initial electronic state & vibrational state are in the Bra and the final electronic and vibrational states are in the Ket. Though this appears to be a modified version of the transition moment integral, the same equation holds true for a vibrational transition. The only difference would be that the electronic state would be the same in both the initial and final states, and the direct product of a representation with itself yields the totally symmetric representation, making the electronic state irrelevant for purely vibrational spectroscopy.

In resonance Raman spectroscopy the transition that occurs is the excitation from one electronic state to another, and the selection rules are dictated by the transition moment integral discussed in the electronic spectroscopy segment. Mechanically, Raman does produce a vibration like IR, but the selection rules for Raman state that there must be a change in the polarizability, that is, the volume occupied by the molecule must change. But as far as using group theory to determine whether or not a transition is allowed, one can use the transition moment integral presented in the electronic transition portion: one enters the starting electronic state symmetry and vibrational symmetry and the final electronic state symmetry and vibrational symmetry, and performs the direct product with the different M's, or polarizing operators. For more information about this topic please explore the Raman spectroscopy portion of the ChemWiki.
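The polarization bookkeeping in the worked example above can be automated with a tiny symbolic helper (a sketch of my own, not from the page; it only encodes the pairwise label rules from the direct-product table earlier, for Mulliken symbols of the form A/B + 1/2 + g/u, so it covers this example but is no substitute for a full character-table reduction):

```python
# Minimal symbolic direct products for labels like "A1g" or "B2u", using only the
# pairwise rules quoted earlier: A*A=A, B*B=A, A*B=B; 1*1=1, 2*2=1, 1*2=2; g*g=g, u*u=g, g*u=u.
def product(a, b):
    letter = "A" if a[0] == b[0] else "B"
    number = "1" if a[1] == b[1] else "2"
    parity = "g" if a[2] == b[2] else "u"
    return letter + number + parity

# The worked example: ground state A1g, excited vibrational state B2u, candidate
# transition moment operators A1g, B2u, B2g (x, y, z polarizations respectively).
for polarization, M in [("x", "A1g"), ("y", "B2u"), ("z", "B2g")]:
    result = product(product("A1g", M), "B2u")
    verdict = "allowed" if result == "A1g" else "forbidden"
    print(f"{polarization}-polarized ({M}): A1g x {M} x B2u = {result} -> {verdict}")
# Matches the text: only the y-polarized (B2u) operator gives A1g, i.e. only that transition is allowed.
```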
For the purposes of group theory, Raman and fluorescence are indistinguishable. They can be treated as the same process, and in reality they are quantum mechanically; they differ only in how Raman photons scatter versus those of fluorescence. Phosphorescence is the same as fluorescence except that upon excitation to a singlet state there is an intersystem-crossing step that converts the initial singlet state to a triplet state upon relaxation. This process is longer than fluorescence and can last microseconds to several minutes. However, despite the singlet-to-triplet conversion, the transition moment integral still holds true and the symmetry product of the ground state and final state still needs to contain the totally symmetric representation.

Molecular Orbital Theory and Symmetry

Molecular orbitals also follow the symmetry rules and indeed have their own \(\Gamma_{ir}\). Below are the pi molecular orbitals for trans-butadiene and the corresponding symmetry of each molecular orbital. The \(\Gamma_{ir}\) of the molecular orbitals are found by simply performing the operations of that molecule's character table on that orbital. In the case of trans-butadiene the point group is C2h; the operations are: E, C2, i & \(\sigma_h\). Each operation will result in a change in phase (since we're dealing with p-orbitals) or it will result in no change. The first molecular orbital results in the totally symmetric representation; working through all four operations E, C2, i, \(\sigma_h\) will only result in 1's, meaning there is no change, giving the Ag symmetry state. These molecular orbitals also represent different electronic states and can be arranged energetically: put the orbital that has the lowest energy, the orbital with the fewest nodes, at the bottom of the energy diagram and likewise work up from lowest energy to highest energy. The highest energy orbital will have the most nodes. Once you've set up your MO diagram and placed the four pi electrons in the orbitals, you see that the two orbitals listed first (lowest energy) are occupied, the higher of them being the HOMO, and the last two (highest energy) are unoccupied, the lower of them being the LUMO. With this information, if you have a transition from a totally symmetric HOMO orbital to a totally symmetric LUMO orbital, the transition moment operator would need to have Ag symmetry (using the C2h character table) to give a result containing the totally symmetric representation. These four molecular orbitals represent four different electronic states, so transitions from one MO into another would be something that is measured typically with a UV-Vis spectrometer.

References and Further Reading

1. Daniel Harris & Michael Bertolucci, Symmetry and Spectroscopy, New York: Dover Publications, 1989. [Highly recommended; a great text for explaining group theory for molecules and applications of group theory in various spectroscopies]
2. Albert Cotton, Chemical Applications of Group Theory, 3rd ed., New York: Wiley-Interscience, 1990. [Mathematical approach to group theory in chemistry]
3. Donald A. McQuarrie, Quantum Chemistry, Sausalito: University Science Books, 1983. [Classic quantum chemistry text; very clear and thorough]
4. Douglas Skoog, James Holler & Stanley Crouch, Principles of Instrumental Analysis, 6th ed., Thomson Brooks Cole, 2007. [Covers the basics of all types of analytical chemistry methods and theory]

Problems

1. Follow the links above to Water, Ammonia and Benzene and work out the \(\Gamma_{ir}\) of the vibrations, using the method laid out by the character table. (Follow the example of ammonia for help.)
2. From problem 1, work out what the possible transition moment operators are for each vibration.
3. Work through the p-orbital molecular orbitals for cis-butadiene. (Note the conservation of "stuff": start by combining four p-orbitals and finish with four molecular orbitals.) What is the point group? What are the \(\Gamma_{ir}\) of each MO? Finally, how many vibrations are there for cis-butadiene and what are their \(\Gamma_{ir}\)?

• Jim Hughes (UCD)
Durham e-Theses

Quantum field theories with fermions in the Schrödinger representation

Nolland, David John (2000) Quantum field theories with fermions in the Schrödinger representation. Unspecified thesis, Durham University.

This thesis is concerned with the Schrödinger representation of quantum field theory. We describe techniques for solving the Schrödinger equation which supplement the standard techniques of field theory. Our aim is to develop these to the point where they can readily be used to address problems of current interest. To this end, we study realistic models such as gauge theories coupled to dynamical fermions. For maximal generality we consider particles of all physical spins, in various dimensions, and eventually, curved spacetimes. We begin by considering Gaussian fields, and proceed to a detailed study of the Schwinger model, which is, amongst other things, a useful model for (3+1) dimensional gauge theory. One of the most important developments of recent years is a conjecture by Maldacena which relates supergravity and string/M-theory on anti-de-Sitter spacetimes to conformal field theories on their boundaries. This correspondence has a natural interpretation in the Schrödinger representation, so we solve the Schrödinger equation for fields of arbitrary spin in anti-de-Sitter spacetimes, and use this to investigate the conjectured correspondence. Our main result is to calculate the Weyl anomalies arising from supergravity fields, which, summed over the supermultiplets of type IIB supergravity compactified on AdS_5 x S^5, correctly matches the anomaly calculated in the conjecturally dual N = 4 SU(N) super-Yang-Mills theory. This is one of the few existing pieces of evidence for Maldacena's conjecture beyond leading order in N.

Item Type: Thesis (Unspecified)
Thesis Date: 2000
Copyright: Copyright of this thesis is held by the author
Deposited On: 01 Aug 2012 11:49
The three-dimensional Laplacian can be defined as $$\nabla^2=\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}+\frac{\partial^2}{\partial z^2}.$$ Expressed in spherical coordinates, it does not have such a nice form. But I could define a different operator (let's call it a "Laspherian") which would simply be the following: $$\bigcirc^2=\frac{\partial^2}{\partial \rho^2}+\frac{\partial^2}{\partial \theta^2}+\frac{\partial^2}{\partial \phi^2}.$$ This looks nice in spherical coordinates, but if I tried to express the Laspherian in Cartesian coordinates, it would be messier. Mathematically, both operators seem perfectly valid to me. But there are so many equations in physics that use the Laplacian, yet none that use the Laspherian. So why does nature like Cartesian coordinates so much better? Or has my understanding of this gone totally wrong? • 69 $\begingroup$ your laspherian is not dimensionally consistent $\endgroup$ – wcc Apr 26, 2019 at 15:25 • 3 $\begingroup$ That's true but: the Laplacian wouldn't be dimensionally consistent either except we happen to have given x,y, and z all the same units. We could equally well give the same units to $\rho$, $\theta$, and $\phi$. I think @knzhou's answer of rotational symmetry justifies why, at least in our universe, we only do the former. I've never made that connection before, though! $\endgroup$ – Sam Jaques Apr 26, 2019 at 15:35 • 32 $\begingroup$ You can't give the same units to distance and angle. $\endgroup$ Apr 26, 2019 at 17:43 • 9 $\begingroup$ @SamJaques Your original question is good, but the above comment comes off as you being stubborn. You are asking what is more confusing about a convention where angles and distance have the same units than a system where they have different units? Come on, man. $\endgroup$ Apr 26, 2019 at 17:48 • 13 $\begingroup$ "Mathematically, both operators seem perfectly valid to me." Mathematically, it's perfectly valid for gravity to disappear every Tuesday or for the electric force to drop off linearly with distance. Most things that are mathematically valid are not the way the universe works. $\endgroup$ – Owen Apr 27, 2019 at 23:57 4 Answers 4 Nature appears to be rotationally symmetric, favoring no particular direction. The Laplacian is the only translationally-invariant second-order differential operator obeying this property. Your "Laspherian" instead depends on the choice of polar axis used to define the spherical coordinates, as well as the choice of origin. Now, at first glance the Laplacian seems to depend on the choice of $x$, $y$, and $z$ axes, but it actually doesn't. To see this, consider switching to a different set of axes, with associated coordinates $x'$, $y'$, and $z'$. 
If they are related by $$\mathbf{x} = R \mathbf{x}'$$ where $R$ is a rotation matrix, then the derivative with respect to $\mathbf{x}'$ is, by the chain rule, $$\frac{\partial}{\partial \mathbf{x}'} = \frac{\partial \mathbf{x}}{\partial \mathbf{x}'} \frac{\partial}{\partial \mathbf{x}} = R \frac{\partial}{\partial \mathbf{x}}.$$ The Laplacian in the primed coordinates is $$\nabla'^2 = \left( \frac{\partial}{\partial \mathbf{x}'} \right) \cdot \left( \frac{\partial}{\partial \mathbf{x}'} \right) = \left(R \frac{\partial}{\partial \mathbf{x}} \right) \cdot \left(R \frac{\partial}{\partial \mathbf{x}} \right) = \frac{\partial}{\partial \mathbf{x}} \cdot (R^T R) \frac{\partial}{\partial \mathbf{x}} = \left( \frac{\partial}{\partial \mathbf{x}} \right) \cdot \left( \frac{\partial}{\partial \mathbf{x}} \right)$$ since $R^T R = I$ for rotation matrices, and hence is equal to the Laplacian in the original Cartesian coordinates. To make the rotational symmetry more manifest, you could alternatively define the Laplacian of a function $f$ in terms of the deviation of that function $f$ from the average value of $f$ on a small sphere centered around each point. That is, the Laplacian measures concavity in a rotationally invariant way. This is derived in an elegant coordinate-free manner here. The Laplacian looks nice in Cartesian coordinates because the coordinate axes are straight and orthogonal, and hence measure volumes straightforwardly: the volume element is $dV = dx dy dz$ without any extra factors. This can be seen from the general expression for the Laplacian, $$\nabla^2 f = \frac{1}{\sqrt{g}} \partial_i\left(\sqrt{g}\, \partial^i f\right)$$ where $g$ is the determinant of the metric tensor. The Laplacian only takes the simple form $\partial_i \partial^i f$ when $g$ is constant. Given all this, you might still wonder why the Laplacian is so common. It's simply because there are so few ways to write down partial differential equations that are low-order in time derivatives (required by Newton's second law, or at a deeper level, because Lagrangian mechanics is otherwise pathological), low-order in spatial derivatives, linear, translationally invariant, time invariant, and rotationally symmetric. There are essentially only five possibilities: the heat/diffusion, wave, Laplace, Schrodinger, and Klein-Gordon equations, and all of them involve the Laplacian. The paucity of options leads one to imagine an "underlying unity" of nature, which Feynman explains in similar terms: Is it possible that this is the clue? That the thing which is common to all the phenomena is the space, the framework into which the physics is put? As long as things are reasonably smooth in space, then the important things that will be involved will be the rates of change of quantities with position in space. That is why we always get an equation with a gradient. The derivatives must appear in the form of a gradient or a divergence; because the laws of physics are independent of direction, they must be expressible in vector form. The equations of electrostatics are the simplest vector equations that one can get which involve only the spatial derivatives of quantities. Any other simple problem—or simplification of a complicated problem—must look like electrostatics. What is common to all our problems is that they involve space and that we have imitated what is actually a complicated phenomenon by a simple differential equation. 
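A quick numerical check of the rotation-invariance argument above (a sketch of my own with an arbitrary test function and an arbitrary orthogonal matrix, not part of the original answer): summing second directional derivatives over any orthonormal frame gives the same number, because that sum is the trace of the Hessian.

```python
import numpy as np

def f(p):
    # An arbitrary smooth test function of (x, y, z).
    x, y, z = p
    return np.exp(x) * np.sin(2 * y) + x * y * z**2

def second_directional_derivative(f, p, d, h=1e-3):
    return (f(p + h * d) - 2 * f(p) + f(p - h * d)) / h**2

def laplacian_in_frame(f, p, frame, h=1e-3):
    """Sum of second directional derivatives along the columns of an orthonormal frame."""
    return sum(second_directional_derivative(f, p, frame[:, i], h) for i in range(3))

p = np.array([0.3, -0.7, 1.1])
R, _ = np.linalg.qr(np.random.default_rng(1).normal(size=(3, 3)))   # a random orthogonal matrix

print(laplacian_in_frame(f, p, np.eye(3)))   # Laplacian computed from the standard axes
print(laplacian_in_frame(f, p, R))           # same value, up to discretization error, from rotated axes
```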
At a deeper level, the reason for the linearity and the low-order spatial derivatives is that in both cases, higher-order terms will generically become less important at long distances. This reasoning is radically generalized by the Wilsonian renormalization group, one of the most important tools in physics today. Using it, one can show that even rotational symmetry can emerge from a non-rotationally symmetric underlying space, such as a crystal lattice. One can even use it to argue the uniqueness of entire theories, as done by Feynman for electromagnetism.

• $\begingroup$ In other words, the Cartesian form of the Laplacian is nice because the Cartesian metric tensor is nice. $\endgroup$ Apr 26, 2019 at 15:42
• 1 $\begingroup$ I think it's also probably valid to talk about the structure of spacetime; it is Lorentzian and in local inertial frames it always looks like Minkowski space. So if we were to ignore the time coordinates and just consider the spatial components of spacetime then the structure always possesses Riemann geometry and appears Euclidean in a local inertial frame. Cartesian coordinates are then the most natural way to simply describe Euclidean geometry, which is why the Laplacian appears the way it does. Nature favours the Laplacian because space appears Euclidean in local inertial frames. $\endgroup$ – Ollie113 Apr 26, 2019 at 15:53
• 1 $\begingroup$ Are you drawing a distinction between the heat/diffusion and Schrödinger equations because the latter contains terms depending on the fields themselves, rather than just their derivatives? (And similarly for "wave" vs. "Klein-Gordon"?) Or is there another reason that you're differentiating between cases that have the same differential operators in them? $\endgroup$ Apr 28, 2019 at 19:24
• 2 $\begingroup$ The third block-set equation makes explicit use of the notion that an inner product is taken between a space and its dual, but the notation associated with that idea appears halfway through as if out of nowhere. It might be better to include the ${}^T$ in the first two dot products as well. $\endgroup$ Apr 28, 2019 at 19:46
• 1 $\begingroup$ yes, please explain where that $^T$ suddenly comes from. Just give us the general public some names we could search for. $\endgroup$ – Will Ness Apr 29, 2019 at 5:36

This is a question that haunted me for years, so I'll share with you my view about the Laplace equation, which is the most elemental equation you can write with the Laplacian. If you force the Laplacian of some quantity to 0, you are writing a differential equation that says "let's take the average value of the surroundings".
It's easier to see in cartesian coordinates: $$\nabla ^2 u = \frac{\partial^2 u}{\partial x ^2} + \frac{\partial^2 u}{\partial y ^2} $$ If you approximate the partial derivatives by $$ \frac{\partial f}{\partial x }(x) \approx \frac{f(x + \frac{\Delta x}{2}) - f(x-\frac{\Delta x}{2})}{\Delta x} $$ $$ \frac{\partial^2 f}{\partial x^2 }(x) \approx \frac{ \frac{\partial f}{\partial x } \left( x+ \frac{\Delta x}{2} \right) - \frac{\partial f}{\partial x } \left( x - \frac{\Delta x}{2} \right) } { \Delta x} = \frac{ f(x + \Delta x) - 2 \cdot f(x) + f(x - \Delta x) } { \Delta x ^2 } $$ for simplicity let's take $\Delta x = \Delta y = \delta$, then the Laplace equation $$\nabla ^2 u =0 $$ becomes: $$ \nabla ^2 u (x, y) \approx \frac{ u(x + \delta, y) - 2 u(x, y) + u(x - \delta, y) } { \delta ^2 } + \frac{ u(x, y+ \delta) - 2 u(x, y) + u(x, y - \delta) } { \delta ^2 } = 0 $$ $$ \frac{ u(x + \delta, y) - 2 u(x, y) + u(x - \delta, y) + u(x, y+ \delta) - 2 u(x, y) + u(x, y - \delta) } { \delta ^2 } = 0 $$ from which you can solve for $u(x, y)$ to obtain $$ u(x, y) = \frac{ u(x + \delta, y) + u(x - \delta, y) + u(x, y+ \delta)+ u(x, y - \delta) } { 4 } $$ That can be read as: "The function/field/force/etc. at a point takes the average value of the function/field/force/etc. evaluated at either side of that point along each coordinate axis." Laplace equation function Of course this only works for very small $\delta$ for the relevant sizes of the problem at hand, but I think it does a good intuition job. I think what this tell us about nature is that at first sight and at a local scale, everything is an average. But this may also tell us about how we humans model nature, being our first model always: "take the average value", and maybe later dwelling into more intricate or detailed models. • 1 $\begingroup$ Out of curiosity, is that (very nice) figure a scan of a hand sketch, or do you have a software tool that supports such nice work? $\endgroup$ Apr 28, 2019 at 19:43 • 2 $\begingroup$ Your nice idea that the potential u(x,y) is the average of it's surroundings is exactly the way a spreadsheet (like Excel) is used to solve the Poisson equation for electrostatic problems that are 2-dimensional like a long metal pipe perpendicular to the spreadsheet. Each cell is programmed equal to the average of it's surrounding 4 cells. Fixed numbers (=voltage) are then put into any boundary or interior cells that are held at fixed potentials. The spreadsheet is then iterated until the numbers stop changing at the accuracy you are interested in. $\endgroup$ Apr 28, 2019 at 20:59 • 1 $\begingroup$ @dmckee thank you for the compliment! I wish it was a software tool but it's my hand. Graphing software draws very nice rendered 3d graphics but I have yet to find one that draws in a more organic way. If you know some that does please recommend! $\endgroup$ Apr 29, 2019 at 1:14 • $\begingroup$ I've been experimenting with a Wacom tablet from time to time. But I cheaped out and bought the USD200 one instead of the USD1000 one that is also a high-resolution display. And the result is that I'm having to do a lot of art-school style exercises again to learn to draw on one surface while looking at another and in the mean time I'm just not able to do some of the more sophisticated things I would like to do. But the pressure sensitivity is very nice. If you have the funds the pro version might be a better investment. 
$\endgroup$ Apr 29, 2019 at 3:45 • $\begingroup$ The numerical technique @GaryGodfrey mentioned is an example of a relaxation method. You can learn more about it from Per Brinch Hansen's report on "Numerical Solution of Laplace's Equation" (surface.syr.edu/eecs_techreports/168), and from many other places too. $\endgroup$ – Vectornaut Apr 29, 2019 at 15:36 For me as a mathematician, the reason why Laplacians (yes, there is a plethora of notions of Laplacians) are ubiquitous in physics is not any symmetry of space. Laplacians also appear naturally when we discuss physical field theories on geometries other than Euclidean space. I would say, the importance of Laplacians is due to the following reasons: (i) the potential energy of many physical systems can be modeled (up to errors of third order) by the Dirichlet energy $E(u)$ of a function $u$ that describes the state of the system. (ii) critical points of $E$, that is functions $u$ with $DE(u) = 0$, correspond to static solutions and (iii) the Laplacian is essentially the $L^2$-gradient of the Dirichlet energy. To make the last statement precise, let $(M,g)$ be a compact Riemannian manifold with volume density $\mathrm{vol}$. As an example, you may think of $M \subset \mathbb{R}^3$ being a bounded domain (with sufficiently smooth boundary) and of $\mathrm{vol}$ as the standard Euclidean way of integration. Important: The domain is allowed to be nonsymmetric. Then the Dirichlet energy of a (sufficiently differentiable) function $u \colon M \to \mathbb{R}$ is given by $$E(u) = \frac{1}{2}\int_M \langle \mathrm{grad} (u), \mathrm{grad} (u)\rangle \, \mathrm{vol}.$$ Let $v \colon M \to \mathbb{R}$ be a further (sufficiently differentiable) function. Then the derivative of $E$ in direction of $v$ is given by $$DE(u)\,v = \int_M \langle \mathrm{grad}(u), \mathrm{grad}(v) \rangle \, \mathrm{vol}.$$ Integration by parts leads to $$\begin{aligned}DE(u)\,v &= \int_{\partial M} \langle \mathrm{grad}(u), N\rangle \, v \, \mathrm{vol}_{\partial M}- \int_M \langle \mathrm{div} (\mathrm{grad}(u)), v \rangle \, \mathrm{vol} \\ &= \int_{\partial M} \langle \mathrm{grad}(u), N \rangle \, v \, \mathrm{vol}_{\partial M}- \int_M g( \Delta u, v ) \, \mathrm{vol}, \end{aligned}$$ where $N$ denotes the unit outward normal of $M$. Usually one has to take certain boundary conditions on $u$ into account. The so-called Dirichlet boundary conditions are easiest to discuss. Suppose we want to minimize $E(u)$ subject to $u|_{\partial M} = u_0$. Then any allowed variation (a so-called infinitesimal displacement) $v$ of $u$ has to satisfy $v_{\partial M} = 0$. That means if $u$ is a minimizer of our optimization problem, then it has to satisfy $$ 0 = DE(u) \, v = - \int_M g( \Delta u, v ) \, \mathrm{vol} \quad \text{for all smooth $v \colon M \to \mathbb{R}$ with $v_{\partial M} = 0$.}$$ By the fundamental lemma of calculus of variations, this leads to the Poisson equation $$ \left\{\begin{array}{rcll} - \Delta u &= &0, &\text{in the interior of $M$,}\\ u_{\partial M} &= &u_0. \end{array}\right.$$ Notice that this did not require the choice of any coordinates, making these entities and computations covariant in the Einsteinian sense. This argumentation can also be generalized to more general (vector-valued, tensor-valued, spinor-valued, or whatever-you-like-valued) fields $u$. 
Actually, this can also be generalized to Lorentzian manifolds $(M,g)$ (where the metric $g$ has signature $(\pm , \mp,\dotsc, \mp)$); then $E(u)$ coincides with the action of the system, critical points of $E$ correspond to dynamic solutions, and the resulting Laplacian of $g$ coincides with the wave operator (or d'Alembert operator) $\square$.

• 1 $\begingroup$ Bit late but I think this has knzhou's answer hidden in it: How is the inner product of gradients defined? You're taking the usual inner product on $\mathbb{R}^3$, right? So I can be pedantic and ask: why not take a different inner product? Rotation and translation invariance still seems to be the answer. $\endgroup$ – Sam Jaques May 4, 2020 at 11:11
• $\begingroup$ Well, if you weaken "rotation-invariance" to "isotropy" (rotation-invariance per tangent space) and abandon the translation invariance, I am with you. My point is that a general (pseudo-)Riemannian manifold need not have any global isometries. But the Laplacians/d'Alembert operator is still well-defined. $\endgroup$ May 4, 2020 at 11:56

The expression you've given for the Laplacian, $$ \nabla^2=\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}+\frac{\partial^2}{\partial z^2}, $$ is a valid way to express it, but it is not a particularly useful definition for that object. Instead, a much more useful way to see the Laplacian is to define it as $$ \nabla^2 f = \nabla \cdot(\nabla f), $$ i.e., as the divergence of the gradient, where:

• The gradient of a scalar function $f$ is the vector $\nabla f$ which points in the direction of fastest ascent, and whose magnitude is the rate of growth of $f$ in that direction; this vector can be cleanly characterized by requiring that if $\boldsymbol{\gamma}:\mathbb R \to E^3$ is a curve in Euclidean space $E^3$, the rate of change of $f$ along $\boldsymbol\gamma$ be given by $$ \frac{\mathrm d}{\mathrm dt}f(\boldsymbol{\gamma}(t)) = \frac{\mathrm d\boldsymbol{\gamma}}{\mathrm dt} \cdot \nabla f(\boldsymbol{\gamma}(t)). $$
• The divergence of a vector field $\mathbf A$ is the scalar $\nabla \cdot \mathbf A$ which characterizes how much $\mathbf A$ 'flows out of' an infinitesimal volume around the point in question. More explicitly, the divergence at a point $\mathbf r$ is defined as the normalized flux out of a ball $B_\epsilon(\mathbf r)$ of radius $\epsilon$ centered at $\mathbf r$, in the limit where $\epsilon \to 0^+$, i.e. as $$ \nabla \cdot \mathbf A(\mathbf r) = \lim_{\epsilon\to0^+} \frac{1}{\mathrm{vol}(B_\epsilon(\mathbf r))} \iint_{\partial B_\epsilon(\mathbf r)} \mathbf A \cdot \mathrm d \mathbf S. $$

Note that both of these definitions are completely independent of the coordinate system in use, which also means that they are invariant under translations and under rotations. It just so happens that $\nabla^2$ happens to coincide with $\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}+\frac{\partial^2}{\partial z^2},$ but that is a happy coincidence: the Laplacian occurs naturally in multiple places because of its translational and rotational invariance, and that then implies that the form $\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}+\frac{\partial^2}{\partial z^2}$ happens frequently. But that's just hanging on from the properties of the initial definition.

• $\begingroup$ It makes sense to me why a gradient defined in that way would be simpler for Cartesian coordinates, since they also form a basis in the strict sense of a vector space, which spherical coordinates don't.
In the definition you gave, normalization/units are sneaking in, I think: the dot product implies the units must be the same to be added together, which is weird because the left-hand side, $\frac{d}{dt}f(\gamma(t))$, doesn't seem to use units at all. But the derivative of $f$ can't be defined without a metric on $E^3$, and the metric sneaks in the necessary normalization. – Sam Jaques Apr 29, 2019 at 7:37

• An attempted summary of your answer: the Laplacian looks nice with Cartesian coordinates because they play nice with the $L^2$ norm, and we want that because real-life distance uses the $L^2$ norm. – Sam Jaques Apr 29, 2019 at 7:39
In theoretical physics, quantum field theory (QFT) is a theoretical framework that combines classical field theory, special relativity and quantum mechanics,[1]:xi but not general relativity's description of gravity. QFT is used in particle physics to construct physical models of subatomic particles and in condensed matter physics to construct models of quasiparticles. QFT treats particles as excited states (also called quanta) of their underlying quantum fields, which are more fundamental than the particles. Interactions between particles are described by interaction terms in the Lagrangian involving their corresponding quantum fields. Each interaction can be visually represented by Feynman diagrams according to perturbation theory in quantum mechanics. History Main article: History of quantum field theory As a successful theoretical framework today, quantum field theory emerged from the work of generations of theoretical physicists spanning much of the 20th century. Its development began in the 1920s with the description of interactions between light and electrons, culminating in the first quantum field theory—quantum electrodynamics. A major theoretical obstacle soon followed with the appearance and persistence of various infinities in perturbative calculations, a problem only resolved in the 1950s with the invention of the renormalization procedure. A second major barrier came with QFT's apparent inability to describe the weak and strong interactions, to the point where some theorists called for the abandonment of the field theoretic approach. The development of gauge theory and the completion of the Standard Model in the 1970s led to a renaissance of quantum field theory. Theoretical background [Figure: Magnetic field lines visualized using iron filings. When a piece of paper is sprinkled with iron filings and placed above a bar magnet, the filings align according to the direction of the magnetic field, forming arcs.] Quantum field theory is the result of the combination of classical field theory, quantum mechanics, and special relativity.[1]:xi A brief overview of these theoretical precursors is in order. The earliest successful classical field theory is one that emerged from Newton's law of universal gravitation, despite the complete absence of the concept of fields from his 1687 treatise Philosophiæ Naturalis Principia Mathematica. The force of gravity as described by Newton is an "action at a distance"—its effects on faraway objects are instantaneous, no matter the distance. In an exchange of letters with Richard Bentley, however, Newton stated that "it is inconceivable that inanimate brute matter should, without the mediation of something else which is not material, operate upon and affect other matter without mutual contact."[2]:4 It was not until the 18th century that mathematical physicists discovered a convenient description of gravity based on fields—a numerical quantity (a vector) assigned to every point in space indicating the action of gravity on any particle at that point. However, this was considered merely a mathematical trick.[3]:18 Fields began to take on an existence of their own with the development of electromagnetism in the 19th century. Michael Faraday coined the English term "field" in 1845. He introduced fields as properties of space (even when it is devoid of matter) having physical effects. He argued against "action at a distance", and proposed that interactions between objects occur via space-filling "lines of force".
This description of fields remains to this day.[2][4]:301[5]:2 The theory of classical electromagnetism was completed in 1864 with Maxwell's equations, which described the relationship between the electric field, the magnetic field, electric current, and electric charge. Maxwell's equations implied the existence of electromagnetic waves, a phenomenon whereby electric and magnetic fields propagate from one spatial point to another at a finite speed, which turns out to be the speed of light. Action-at-a-distance was thus conclusively refuted.[2]:19 Despite the enormous success of classical electromagnetism, it was unable to account for the discrete lines in atomic spectra or for the distribution of blackbody radiation in different wavelengths.[6] Max Planck's study of blackbody radiation marked the beginning of quantum mechanics. He treated atoms, which absorb and emit electromagnetic radiation, as tiny oscillators with the crucial property that their energies can only take on a series of discrete, rather than continuous, values. These are known as quantum harmonic oscillators. This process of restricting energies to discrete values is called quantization.[7]:Ch.2 Building on this idea, Albert Einstein proposed in 1905 an explanation for the photoelectric effect, that light is composed of individual packets of energy called photons (the quanta of light). This implied that electromagnetic radiation, while being waves in the classical electromagnetic field, also exists in the form of particles.[6] In 1913, Niels Bohr introduced the Bohr model of atomic structure, wherein electrons within atoms can only take on a series of discrete, rather than continuous, energies. This is another example of quantization. The Bohr model successfully explained the discrete nature of atomic spectral lines. In 1924, Louis de Broglie proposed the hypothesis of wave-particle duality, that microscopic particles exhibit both wave-like and particle-like properties under different circumstances.[6] Uniting these scattered ideas, a coherent discipline, quantum mechanics, was formulated between 1925 and 1926, with important contributions from Max Planck, de Broglie, Werner Heisenberg, Max Born, Erwin Schrödinger, Paul Dirac, and Wolfgang Pauli.[3]:22-23 In the same year as his paper on the photoelectric effect, Einstein published his theory of special relativity, built on Maxwell's electromagnetism. New rules, called Lorentz transformations, were given for the way time and space coordinates of an event change under changes in the observer's velocity, and the distinction between time and space was blurred.[3]:19 It was proposed that all physical laws must be the same for observers at different velocities, i.e. that physical laws be invariant under Lorentz transformations. Two difficulties remained. Observationally, the Schrödinger equation underlying quantum mechanics could explain the stimulated emission of radiation from atoms, where an electron emits a new photon under the action of an external electromagnetic field, but it was unable to explain spontaneous emission, where an electron spontaneously decreases in energy and emits a photon even without the action of an external electromagnetic field.
Theoretically, the Schrödinger equation could not describe photons and was inconsistent with the principles of special relativity—it treats time as an ordinary number while promoting spatial coordinates to linear operators.[6] Quantum electrodynamics Quantum field theory naturally began with the study of electromagnetic interactions, as the electromagnetic field was the only known classical field as of the 1920s.[8]:1 Through the works of Born, Heisenberg, and Pascual Jordan in 1925–1926, a quantum theory of the free electromagnetic field (one with no interactions with matter) was developed via canonical quantization by treating the electromagnetic field as a set of quantum harmonic oscillators.[8]:1 With the exclusion of interactions, however, such a theory was yet incapable of making quantitative predictions about the real world.[3]:22 In his seminal 1927 paper The quantum theory of the emission and absorption of radiation, Dirac coined the term quantum electrodynamics (QED), a theory that adds upon the terms describing the free electromagnetic field an additional interaction term between electric current density and the electromagnetic vector potential. Using first-order perturbation theory, he successfully explained the phenomenon of spontaneous emission. According to the uncertainty principle in quantum mechanics, quantum harmonic oscillators cannot remain stationary, but they have a non-zero minimum energy and must always be oscillating, even in the lowest energy state (the ground state). Therefore, even in a perfect vacuum, there remains an oscillating electromagnetic field having zero-point energy. It is this quantum fluctuation of electromagnetic fields in the vacuum that "stimulates" the spontaneous emission of radiation by electrons in atoms. Dirac's theory was hugely successful in explaining both the emission and absorption of radiation by atoms; by applying second-order perturbation theory, it was able to account for the scattering of photons, resonance fluorescence, as well as non-relativistic Compton scattering. Nonetheless, the application of higher-order perturbation theory was plagued with problematic infinities in calculations.[6]:71 In 1928, Dirac wrote down a wave equation that described relativistic electrons—the Dirac equation. It had the following important consequences: the spin of an electron is 1/2; the electron g-factor is 2; it led to the correct Sommerfeld formula for the fine structure of the hydrogen atom; and it could be used to derive the Klein–Nishina formula for relativistic Compton scattering. Although the results were fruitful, the theory also apparently implied the existence of negative energy states, which would cause atoms to be unstable, since they could always decay to lower energy states by the emission of radiation.[6]:71–72 The prevailing view at the time was that the world was composed of two very different ingredients: material particles (such as electrons) and quantum fields (such as photons). Material particles were considered to be eternal, with their physical state described by the probabilities of finding each particle in any given region of space or range of velocities. On the other hand, photons were considered merely the excited states of the underlying quantized electromagnetic field, and could be freely created or destroyed. It was between 1928 and 1930 that Jordan, Eugene Wigner, Heisenberg, Pauli, and Enrico Fermi discovered that material particles could also be seen as excited states of quantum fields. 
Just as photons are excited states of the quantized electromagnetic field, so each type of particle had its corresponding quantum field: an electron field, a proton field, etc. Given enough energy, it would now be possible to create material particles. Building on this idea, Fermi proposed in 1932 an explanation for beta decay known as Fermi's interaction. Atomic nuclei do not contain electrons per se, but in the process of decay, an electron is created out of the surrounding electron field, analogous to the photon created from the surrounding electromagnetic field in the radiative decay of an excited atom.[3]:22-23 It was realized in 1929 by Dirac and others that negative energy states implied by the Dirac equation could be removed by assuming the existence of particles with the same mass as electrons but opposite electric charge. This not only ensured the stability of atoms, but it was also the first proposal of the existence of antimatter. Indeed, the evidence for positrons was discovered in 1932 by Carl David Anderson in cosmic rays. With enough energy, such as by absorbing a photon, an electron-positron pair could be created, a process called pair production; the reverse process, annihilation, could also occur with the emission of a photon. This showed that particle numbers need not be fixed during an interaction. Historically, however, positrons were at first thought of as "holes" in an infinite electron sea, rather than a new kind of particle, and this theory was referred to as the Dirac hole theory.[6]:72[3]:23 QFT naturally incorporated antiparticles in its formalism.[3]:24 Infinities and renormalization Robert Oppenheimer showed in 1930 that higher-order perturbative calculations in QED always resulted in infinite quantities, such as the electron self-energy and the vacuum zero-point energy of the electron and photon fields,[6] suggesting that the computational methods at the time could not properly deal with interactions involving photons with extremely high momenta.[3]:25 It was not until 20 years later that a systematic approach to remove such infinities was developed. A series of papers was published between 1934 and 1938 by Ernst Stueckelberg that established a relativistically invariant formulation of QFT. In 1947, Stueckelberg also independently developed a complete renormalization procedure. Unfortunately, such achievements were not understood and recognized by the theoretical community.[6] Faced with these infinities, John Archibald Wheeler and Heisenberg proposed, in 1937 and 1943 respectively, to supplant the problematic QFT with the so-called S-matrix theory. Since the specific details of microscopic interactions are inaccessible to observations, the theory should only attempt to describe the relationships between a small number of observables (e.g. the energy of an atom) in an interaction, rather than be concerned with the microscopic minutiae of the interaction. In 1945, Richard Feynman and Wheeler daringly suggested abandoning QFT altogether and proposed action-at-a-distance as the mechanism of particle interactions.[3]:26 In 1947, Willis Lamb and Robert Retherford measured the minute difference in the 2S1/2 and 2P1/2 energy levels of the hydrogen atom, also called the Lamb shift. 
By ignoring the contribution of photons whose energy exceeds the electron mass, Hans Bethe successfully estimated the numerical value of the Lamb shift.[6][3]:28 Subsequently, Norman Myles Kroll, Lamb, James Bruce French, and Victor Weisskopf again confirmed this value using an approach in which infinities cancelled other infinities to result in finite quantities. However, this method was clumsy and unreliable and could not be generalized to other calculations.[6] The breakthrough eventually came around 1950 when a more robust method for eliminating infinities was developed by Julian Schwinger, Feynman, Freeman Dyson, and Shinichiro Tomonaga. The main idea is to replace the initial, so-called "bare" parameters (mass, electric charge, etc.), which have no physical meaning, by their finite measured values. To cancel the apparently infinite parameters, one has to introduce additional, infinite, "counterterms" into the Lagrangian. This systematic computational procedure is known as renormalization and can be applied to arbitrary order in perturbation theory.[6] By applying the renormalization procedure, calculations were finally made to explain the electron's anomalous magnetic moment (the deviation of the electron g-factor from 2) and vacuum polarisation. These results agreed with experimental measurements to a remarkable degree, thus marking the end of a "war against infinities".[6] At the same time, Feynman introduced the path integral formulation of quantum mechanics and Feynman diagrams.[8]:2 The latter can be used to visually and intuitively organise and to help compute terms in the perturbative expansion. Each diagram can be interpreted as paths of particles in an interaction, with each vertex and line having a corresponding mathematical expression, and the product of these expressions gives the scattering amplitude of the interaction represented by the diagram.[1]:5 It was with the invention of the renormalization procedure and Feynman diagrams that QFT finally arose as a complete theoretical framework.[8]:2 Given the tremendous success of QED, many theorists believed, in the few years after 1949, that QFT could soon provide an understanding of all microscopic phenomena, not only the interactions between photons, electrons, and positrons. Contrary to this optimism, QFT entered yet another period of depression that lasted for almost two decades.[3]:30 The first obstacle was the limited applicability of the renormalization procedure. In perturbative calculations in QED, all infinite quantities could be eliminated by redefining a small (finite) number of physical quantities (namely the mass and charge of the electron). Dyson proved in 1949 that this is only possible for a small class of theories called "renormalizable theories", of which QED is an example. However, most theories, including the Fermi theory of the weak interaction, are "non-renormalizable". Any perturbative calculation in these theories beyond the first order would result in infinities that could not be removed by redefining a finite number of physical quantities.[3]:30 The second major problem stemmed from the limited validity of the Feynman diagram method, which is based on a series expansion in perturbation theory. In order for the series to converge and low-order calculations to be a good approximation, the coupling constant, in which the series is expanded, must be a sufficiently small number. 
The coupling constant in QED is the fine-structure constant α ≈ 1/137, which is small enough that only the simplest, lowest order, Feynman diagrams need to be considered in realistic calculations. In contrast, the coupling constant in the strong interaction is roughly of the order of one, making complicated, higher order, Feynman diagrams just as important as simple ones. There was thus no way of deriving reliable quantitative predictions for the strong interaction using perturbative QFT methods.[3]:31 With these difficulties looming, many theorists began to turn away from QFT. Some focused on symmetry principles and conservation laws, while others picked up the old S-matrix theory of Wheeler and Heisenberg. QFT was used heuristically as guiding principles, but not as a basis for quantitative calculations.[3]:31 Standard Model Elementary particles of the Standard Model: six types of quarks, six types of leptons, four types of gauge bosons that carry fundamental interactions, as well as the Higgs boson, which endow elementary particles with mass. In 1954, Yang Chen-Ning and Robert Mills generalised the local symmetry of QED, leading to non-Abelian gauge theories (also known as Yang–Mills theories), which are based on more complicated local symmetry groups.[9]:5 In QED, (electrically) charged particles interact via the exchange of photons, while in non-Abelian gauge theory, particles carrying a new type of "charge" interact via the exchange of massless gauge bosons. Unlike photons, these gauge bosons themselves carry charge.[3]:32[10] Sheldon Glashow developed a non-Abelian gauge theory that unified the electromagnetic and weak interactions in 1960. In 1964, Abdus Salam and John Clive Ward arrived at the same theory through a different path. This theory, nevertheless, was non-renormalizable.[11] Peter Higgs, Robert Brout, François Englert, Gerald Guralnik, Carl Hagen, and Tom Kibble proposed in their famous Physical Review Letters papers that the gauge symmetry in Yang–Mills theories could be broken by a mechanism called spontaneous symmetry breaking, through which originally massless gauge bosons could acquire mass.[9]:5-6 By combining the earlier theory of Glashow, Salam, and Ward with the idea of spontaneous symmetry breaking, Steven Weinberg wrote down in 1967 a theory describing electroweak interactions between all leptons and the effects of the Higgs boson. His theory was at first mostly ignored,[11][9]:6 until it was brought back to light in 1971 by Gerard 't Hooft's proof that non-Abelian gauge theories are renormalizable. The electroweak theory of Weinberg and Salam was extended from leptons to quarks in 1970 by Glashow, John Iliopoulos, and Luciano Maiani, marking its completion.[11] Harald Fritzsch, Murray Gell-Mann, and Heinrich Leutwyler discovered in 1971 that certain phenomena involving the strong interaction could also be explained by non-Abelian gauge theory. Quantum chromodynamics (QCD) was born. In 1973, David Gross, Frank Wilczek, and Hugh David Politzer showed that non-Abelian gauge theories are "asymptotically free", meaning that under renormalization, the coupling constant of the strong interaction decreases as the interaction energy increases. (Similar discoveries had been made numerous times previously, but they had been largely ignored.) 
[9]:11 Therefore, at least in high-energy interactions, the coupling constant in QCD becomes sufficiently small to warrant a perturbative series expansion, making quantitative predictions for the strong interaction possible.[3]:32 These theoretical breakthroughs brought about a renaissance in QFT. The full theory, which includes the electroweak theory and chromodynamics, is referred to today as the Standard Model of elementary particles.[12] The Standard Model successfully describes all fundamental interactions except gravity, and its many predictions have been met with remarkable experimental confirmation in subsequent decades.[8]:3 The Higgs boson, central to the mechanism of spontaneous symmetry breaking, was finally detected in 2012 at CERN, marking the complete verification of the existence of all constituents of the Standard Model.[13] Other developments The 1970s saw the development of non-perturbative methods in non-Abelian gauge theories. The 't Hooft–Polyakov monopole was discovered by 't Hooft and Alexander Polyakov, flux tubes by Holger Bech Nielsen and Poul Olesen, and instantons by Polyakov and coauthors. These objects are inaccessible through perturbation theory.[8]:4 Supersymmetry also appeared in the same period. The first supersymmetric QFT in four dimensions was built by Yuri Golfand and Evgeny Likhtman in 1970, but their result failed to garner widespread interest due to the Iron Curtain. Supersymmetry only took off in the theoretical community after the work of Julius Wess and Bruno Zumino in 1973.[8]:7 Among the four fundamental interactions, gravity remains the only one that lacks a consistent QFT description. Various attempts at a theory of quantum gravity led to the development of string theory,[8]:6 itself a type of two-dimensional QFT with conformal symmetry.[14] Joël Scherk and John Schwarz first proposed in 1974 that string theory could be the quantum theory of gravity.[15] Condensed matter physics Although quantum field theory arose from the study of interactions between elementary particles, it has been successfully applied to other physical systems, particularly to many-body systems in condensed matter physics. Historically, the Higgs mechanism of spontaneous symmetry breaking was a result of Yoichiro Nambu's application of superconductor theory to elementary particles, while the concept of renormalization came out of the study of second-order phase transitions in matter.[16] Soon after the introduction of photons, Einstein performed the quantization procedure on vibrations in a crystal, leading to the first quasiparticle—phonons. Lev Landau claimed that low-energy excitations in many condensed matter systems could be described in terms of interactions between a set of quasiparticles. The Feynman diagram method of QFT was naturally well suited to the analysis of various phenomena in condensed matter systems.[17] Gauge theory is used to describe the quantization of magnetic flux in superconductors, the resistivity in the quantum Hall effect, as well as the relation between frequency and voltage in the AC Josephson effect.[17] For simplicity, natural units are used in the following sections, in which the reduced Planck constant ħ and the speed of light c are both set to one. Classical fields See also: Classical field theory A classical field is a function of spatial and time coordinates.[18] Examples include the gravitational field in Newtonian gravity g(x, t) and the electric field E(x, t) and magnetic field B(x, t) in classical electromagnetism. 
A classical field can be thought of as a numerical quantity assigned to every point in space that changes in time. Hence, it has infinitely many degrees of freedom.[18][19] Many phenomena exhibiting quantum mechanical properties cannot be explained by classical fields alone. Phenomena such as the photoelectric effect are best explained by discrete particles (photons), rather than a spatially continuous field. The goal of quantum field theory is to describe various quantum mechanical phenomena using a modified concept of fields. Canonical quantisation and path integrals are two common formulations of QFT.[20]:61 To motivate the fundamentals of QFT, an overview of classical field theory is in order. The simplest classical field is a real scalar field — a real number at every point in space that changes in time. It is denoted as ϕ(x, t), where x is the position vector, and t is the time. Suppose the Lagrangian of the field, L, is \( {\displaystyle L=\int d^{3}x\,{\mathcal {L}}=\int d^{3}x\,\left[{\frac {1}{2}}{\dot {\phi }}^{2}-{\frac {1}{2}}(\nabla \phi )^{2}-{\frac {1}{2}}m^{2}\phi ^{2}\right],} \) where \( {\mathcal {L}} \) is the Lagrangian density, \( {\dot {\phi }} \) is the time-derivative of the field, ∇ is the gradient operator, and m is a real parameter (the "mass" of the field). Applying the Euler–Lagrange equation on the Lagrangian:[1]:16 \( {\displaystyle {\frac {\partial }{\partial t}}\left[{\frac {\partial {\mathcal {L}}}{\partial (\partial \phi /\partial t)}}\right]+\sum _{i=1}^{3}{\frac {\partial }{\partial x^{i}}}\left[{\frac {\partial {\mathcal {L}}}{\partial (\partial \phi /\partial x^{i})}}\right]-{\frac {\partial {\mathcal {L}}}{\partial \phi }}=0,} \) we obtain the equations of motion for the field, which describe the way it varies in time and space: \( {\displaystyle \left({\frac {\partial ^{2}}{\partial t^{2}}}-\nabla ^{2}+m^{2}\right)\phi =0.} \) This is known as the Klein–Gordon equation.[1]:17 The Klein–Gordon equation is a wave equation, so its solutions can be expressed as a sum of normal modes (obtained via Fourier transform) as follows: \( {\displaystyle \phi (\mathbf {x} ,t)=\int {\frac {d^{3}p}{(2\pi )^{3}}}{\frac {1}{\sqrt {2\omega _{\mathbf {p} }}}}\left(a_{\mathbf {p} }e^{-i\omega _{\mathbf {p} }t+i\mathbf {p} \cdot \mathbf {x} }+a_{\mathbf {p} }^{*}e^{i\omega _{\mathbf {p} }t-i\mathbf {p} \cdot \mathbf {x} }\right),} \) where a is a complex number (normalised by convention), * denotes complex conjugation, and ωp is the frequency of the normal mode: \( {\displaystyle \omega _{\mathbf {p} }={\sqrt {|\mathbf {p} |^{2}+m^{2}}}.} \) Thus each normal mode corresponding to a single p can be seen as a classical harmonic oscillator with frequency ωp.[1]:21,26 Canonical quantisation Main article: Canonical quantisation The quantisation procedure for the above classical field to a quantum operator field is analogous to the promotion of a classical harmonic oscillator to a quantum harmonic oscillator. The displacement of a classical harmonic oscillator is described by \( {\displaystyle x(t)={\frac {1}{\sqrt {2\omega }}}ae^{-i\omega t}+{\frac {1}{\sqrt {2\omega }}}a^{*}e^{i\omega t},} \) where a is a complex number (normalised by convention), and ω is the oscillator's frequency. Note that x is the displacement of a particle in simple harmonic motion from the equilibrium position, not to be confused with the spatial label x of a quantum field.
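As a small symbolic check of the statements above (each normal mode solves the Klein–Gordon equation with ω_p = √(|p|² + m²), and the quoted oscillator trajectory solves ẍ + ω²x = 0), one can simply differentiate; SymPy is used here purely for illustration, in one spatial dimension for brevity:

```python
import sympy as sp

x, t, p = sp.symbols('x t p', real=True)
m, w = sp.symbols('m omega', positive=True)
a = sp.symbols('a')                       # complex mode amplitude

# A single normal mode of the real scalar field in one spatial dimension.
w_p = sp.sqrt(p**2 + m**2)
phi = (a*sp.exp(-sp.I*w_p*t + sp.I*p*x)
       + sp.conjugate(a)*sp.exp(sp.I*w_p*t - sp.I*p*x)) / sp.sqrt(2*w_p)
print(sp.simplify(sp.diff(phi, t, 2) - sp.diff(phi, x, 2) + m**2*phi))   # 0: Klein-Gordon holds

# The classical harmonic-oscillator displacement quoted above.
xt = (a*sp.exp(-sp.I*w*t) + sp.conjugate(a)*sp.exp(sp.I*w*t)) / sp.sqrt(2*w)
print(sp.simplify(sp.diff(xt, t, 2) + w**2*xt))                          # 0: simple harmonic motion
```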
For a quantum harmonic oscillator, x(t) is promoted to a linear operator \( {\hat {x}}(t) \): \( {\displaystyle {\hat {x}}(t)={\frac {1}{\sqrt {2\omega }}}{\hat {a}}e^{-i\omega t}+{\frac {1}{\sqrt {2\omega }}}{\hat {a}}^{\dagger }e^{i\omega t}.} \) Complex numbers a and a* are replaced by the annihilation operator \( {\hat {a}} \) and the creation operator \( {\hat a}^{\dagger } \), respectively, where † denotes Hermitian conjugation. The commutation relation between the two is \( {\displaystyle [{\hat {a}},{\hat {a}}^{\dagger }]=1.} \) The vacuum state \( |0\rangle \), which is the lowest energy state, is defined by \( {\displaystyle {\hat {a}}|0\rangle =0.} \) Any quantum state of a single harmonic oscillator can be obtained from \( |0\rangle \) by successively applying the creation operator \( {\hat a}^{\dagger } \):[1]:20 \( {\displaystyle |n\rangle =({\hat {a}}^{\dagger })^{n}|0\rangle .} \) By the same token, the aforementioned real scalar field ϕ, which corresponds to x in the single harmonic oscillator, is also promoted to a quantum field operator \( {\hat \phi }, \) while the annihilation operator \( {\displaystyle {\hat {a}}_{\mathbf {p} }} \), the creation operator \( {\displaystyle {\hat {a}}_{\mathbf {p} }^{\dagger }} \) and the angular frequency \( {\displaystyle \omega _{\mathbf {p} }} \) are now for a particular p: \( {\displaystyle {\hat {\phi }}(\mathbf {x} ,t)=\int {\frac {d^{3}p}{(2\pi )^{3}}}{\frac {1}{\sqrt {2\omega _{\mathbf {p} }}}}\left({\hat {a}}_{\mathbf {p} }e^{-i\omega _{\mathbf {p} }t+i\mathbf {p} \cdot \mathbf {x} }+{\hat {a}}_{\mathbf {p} }^{\dagger }e^{i\omega _{\mathbf {p} }t-i\mathbf {p} \cdot \mathbf {x} }\right).} \) Their commutation relations are:[1]:21 \( {\displaystyle [{\hat {a}}_{\mathbf {p} },{\hat {a}}_{\mathbf {q} }^{\dagger }]=(2\pi )^{3}\delta (\mathbf {p} -\mathbf {q} ),\quad [{\hat {a}}_{\mathbf {p} },{\hat {a}}_{\mathbf {q} }]=[{\hat {a}}_{\mathbf {p} }^{\dagger },{\hat {a}}_{\mathbf {q} }^{\dagger }]=0,} \) where δ is the Dirac delta function. The vacuum state \( |0\rangle \) is defined by \( {\displaystyle {\hat {a}}_{\mathbf {p} }|0\rangle =0,\quad {\text{for all }}\mathbf {p} .} \) Any quantum state of the field can be obtained from \( |0\rangle \) by successively applying creation operators \( {\displaystyle {\hat {a}}_{\mathbf {p} }^{\dagger }}, \) e.g.[1]:22 \( {\displaystyle ({\hat {a}}_{\mathbf {p} _{3}}^{\dagger })^{3}{\hat {a}}_{\mathbf {p} _{2}}^{\dagger }({\hat {a}}_{\mathbf {p} _{1}}^{\dagger })^{2}|0\rangle .} \) Although the quantum field appearing in the Lagrangian is spatially continuous, the quantum states of the field are discrete. While the state space of a single quantum harmonic oscillator contains all the discrete energy states of one oscillating particle, the state space of a quantum field contains the discrete energy levels of an arbitrary number of particles. The latter space is known as a Fock space, which can account for the fact that particle numbers are not fixed in relativistic quantum systems.[21] The process of quantising an arbitrary number of particles instead of a single particle is often also called second quantisation.[1]:19 The foregoing procedure is a direct application of non-relativistic quantum mechanics and can be used to quantise (complex) scalar fields, Dirac fields,[1]:52 vector fields (e.g.
the electromagnetic field), and even strings.[22] However, creation and annihilation operators are only well defined in the simplest theories that contain no interactions (so-called free theory). In the case of the real scalar field, the existence of these operators was a consequence of the decomposition of solutions of the classical equations of motion into a sum of normal modes. To perform calculations on any realistic interacting theory, perturbation theory would be necessary. The Lagrangian of any quantum field in nature would contain interaction terms in addition to the free theory terms. For example, a quartic interaction term could be introduced to the Lagrangian of the real scalar field:[1]:77 \( {\displaystyle {\mathcal {L}}={\frac {1}{2}}(\partial _{\mu }\phi )(\partial ^{\mu }\phi )-{\frac {1}{2}}m^{2}\phi ^{2}-{\frac {\lambda }{4!}}\phi ^{4},} \) where μ is a spacetime index, \( {\displaystyle \partial _{0}=\partial /\partial t,\ \partial _{1}=\partial /\partial x^{1}} \), etc. The summation over the index μ has been omitted following the Einstein notation. If the parameter λ is sufficiently small, then the interacting theory described by the above Lagrangian can be considered as a small perturbation from the free theory. Path integrals Main article: Path integral formulation The path integral formulation of QFT is concerned with the direct computation of the scattering amplitude of a certain interaction process, rather than the establishment of operators and state spaces. To calculate the probability amplitude for a system to evolve from some initial state \( {\displaystyle |\phi _{I}\rangle } \) at time t = 0 to some final state \( {\displaystyle |\phi _{F}\rangle } \) at t = T, the total time T is divided into N small intervals. The overall amplitude is the product of the amplitude of evolution within each interval, integrated over all intermediate states. Let H be the Hamiltonian (i.e. generator of time evolution), then[20]:10 \( {\displaystyle \langle \phi _{F}|e^{-iHT}|\phi _{I}\rangle =\int d\phi _{1}\int d\phi _{2}\cdots \int d\phi _{N-1}\,\langle \phi _{F}|e^{-iHT/N}|\phi _{N-1}\rangle \cdots \langle \phi _{2}|e^{-iHT/N}|\phi _{1}\rangle \langle \phi _{1}|e^{-iHT/N}|\phi _{I}\rangle .} \) Taking the limit N → ∞, the above product of integrals becomes the Feynman path integral:[1]:282[20]:12 \( {\displaystyle \langle \phi _{F}|e^{-iHT}|\phi _{I}\rangle =\int {\mathcal {D}}\phi (t)\,\exp \left\{i\int _{0}^{T}dt\,L\right\},} \) where L is the Lagrangian involving ϕ and its derivatives with respect to spatial and time coordinates, obtained from the Hamiltonian H via Legendre transformation. The initial and final conditions of the path integral are respectively \( {\displaystyle \phi (0)=\phi _{I},\quad \phi (T)=\phi _{F}.} \) In other words, the overall amplitude is the sum over the amplitude of every possible path between the initial and final states, where the amplitude of a path is given by the exponential in the integrand. Two-point correlation function Main article: Correlation function (quantum field theory) Now we assume that the theory contains interactions whose Lagrangian terms are a small perturbation from the free theory.
In calculations, one often encounters such expressions: \( {\displaystyle \langle \Omega |T\{\phi (x)\phi (y)\}|\Omega \rangle ,} \) where x and y are position four-vectors, T is the time ordering operator (namely, it orders x and y according to their time-component, later time on the left and earlier time on the right), and \( {\displaystyle |\Omega \rangle } \) is the ground state (vacuum state) of the interacting theory. This expression, known as the two-point correlation function or the two-point Green's function, represents the probability amplitude for the field to propagate from y to x.[1]:82 In canonical quantisation, the two-point correlation function can be written as:[1]:87 \( {\displaystyle \langle \Omega |T\{\phi (x)\phi (y)\}|\Omega \rangle =\lim _{T\to \infty (1-i\epsilon )}{\frac {\langle 0|T\left\{\phi _{I}(x)\phi _{I}(y)\exp \left[-i\int _{-T}^{T}dt\,H_{I}(t)\right]\right\}|0\rangle }{\langle 0|T\left\{\exp \left[-i\int _{-T}^{T}dt\,H_{I}(t)\right]\right\}|0\rangle }},} \) where ε is an infinitesimal number, ϕI is the field operator under the free theory, and HI is the interaction Hamiltonian term. For the ϕ4 theory, it is[1]:84 \( {\displaystyle H_{I}(t)=\int d^{3}x\,{\frac {\lambda }{4!}}\phi _{I}(x)^{4}.} \) Since λ is a small parameter, the exponential function exp can be expanded into a Taylor series in λ and computed term by term. This equation is useful in that it expresses the field operator and ground state in the interacting theory, which are difficult to define, in terms of their counterparts in the free theory, which are well defined. In the path integral formulation, the two-point correlation function can be written as:[1]:284 \( {\displaystyle \langle \Omega |T\{\phi (x)\phi (y)\}|\Omega \rangle =\lim _{T\to \infty (1-i\epsilon )}{\frac {\int {\mathcal {D}}\phi \,\phi (x)\phi (y)\exp \left[i\int _{-T}^{T}d^{4}z\,{\mathcal {L}}\right]}{\int {\mathcal {D}}\phi \,\exp \left[i\int _{-T}^{T}d^{4}z\,{\mathcal {L}}\right]}},} \) where \( {\mathcal {L}} \)is the Lagrangian density. As in the previous paragraph, the exponential factor involving the interaction term can also be expanded as a series in λ. According to Wick's theorem, any n-point correlation function in the free theory can be written as a sum of products of two-point correlation functions. For example, \( {\displaystyle {\begin{aligned}\langle 0|T\{\phi (x_{1})\phi (x_{2})\phi (x_{3})\phi (x_{4})\}|0\rangle =&\langle 0|T\{\phi (x_{1})\phi (x_{2})\}|0\rangle \langle 0|T\{\phi (x_{3})\phi (x_{4})\}|0\rangle \\&+\langle 0|T\{\phi (x_{1})\phi (x_{3})\}|0\rangle \langle 0|T\{\phi (x_{2})\phi (x_{4})\}|0\rangle \\&+\langle 0|T\{\phi (x_{1})\phi (x_{4})\}|0\rangle \langle 0|T\{\phi (x_{2})\phi (x_{3})\}|0\rangle .\end{aligned}}} \) Since correlation functions in the interacting theory can be expressed in terms of those in the free theory, only the latter need to be evaluated in order to calculate all physical quantities in the (perturbative) interacting theory.[1]:90 Either through canonical quantisation or path integrals, one can obtain: \( {\displaystyle D_{F}(x-y)\equiv \langle 0|T\{\phi (x)\phi (y)\}|0\rangle =\lim _{\epsilon \to 0}\int {\frac {d^{4}p}{(2\pi )^{4}}}{\frac {i}{p_{\mu }p^{\mu }-m^{2}+i\epsilon }}e^{-ip_{\mu }(x^{\mu }-y^{\mu })}.} \) This is known as the Feynman propagator for the real scalar field.[1]:31,288[20]:23 Feynman diagram Main article: Feynman diagram Correlation functions in the interacting theory can be written as a perturbation series. 
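As an aside, the Gaussian-moment identity that underlies Wick's theorem (of which the four-point formula above is an instance) can be checked numerically for an ordinary zero-mean Gaussian vector, with the covariance matrix playing the role of the two-point function; the numbers below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
C = A @ A.T                                    # covariance matrix, the "two-point function"
x1, x2, x3, x4 = rng.multivariate_normal(np.zeros(4), C, size=1_000_000).T

lhs = np.mean(x1 * x2 * x3 * x4)                               # Monte Carlo four-point function
rhs = C[0, 1]*C[2, 3] + C[0, 2]*C[1, 3] + C[0, 3]*C[1, 2]      # sum over the three pairings
print(lhs, rhs)                                                # agree up to sampling error
```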
Each term in the series is a product of Feynman propagators in the free theory and can be represented visually by a Feynman diagram. For example, the λ1 term in the two-point correlation function in the ϕ4 theory is \( {\displaystyle {\frac {-i\lambda }{4!}}\langle 0|T\{\phi (x)\phi (y)\int d^{4}z\,\phi (z)\phi (z)\phi (z)\phi (z)\}|0\rangle .} \) After applying Wick's theorem, one of the terms is \( {\displaystyle 12\cdot {\frac {-i\lambda }{4!}}\int d^{4}z\,D_{F}(x-z)D_{F}(y-z)D_{F}(z-z),} \) whose corresponding Feynman diagram is the one-loop diagram [figure: Phi-4 one-loop.svg]. Every point corresponds to a single ϕ field factor. Points labelled with x and y are called external points, while those in the interior are called internal points or vertices (there is one in this diagram). The value of the corresponding term can be obtained from the diagram by following "Feynman rules": assign \( {\displaystyle -i\lambda \int d^{4}z} \) to every vertex and the Feynman propagator \( {\displaystyle D_{F}(x_{1}-x_{2})} \) to every line with end points x1 and x2. The product of factors corresponding to every element in the diagram, divided by the "symmetry factor" (2 for this diagram), gives the expression for the term in the perturbation series.[1]:91-94 In order to compute the n-point correlation function to the k-th order, list all valid Feynman diagrams with n external points and k or fewer vertices, and then use Feynman rules to obtain the expression for each term. To be precise, \( {\displaystyle \langle \Omega |T\{\phi (x_{1})\cdots \phi (x_{n})\}|\Omega \rangle } \) is equal to the sum of (expressions corresponding to) all connected diagrams with n external points. (Connected diagrams are those in which every vertex is connected to an external point through lines. Components that are totally disconnected from external lines are sometimes called "vacuum bubbles".) In the ϕ4 interaction theory discussed above, every vertex must have four legs.[1]:98 In realistic applications, the scattering amplitude of a certain interaction or the decay rate of a particle can be computed from the S-matrix, which itself can be found using the Feynman diagram method.[1]:102-115 Feynman diagrams devoid of "loops" are called tree-level diagrams, which describe the lowest-order interaction processes; those containing n loops are referred to as n-loop diagrams, which describe higher-order contributions, or radiative corrections, to the interaction.[20]:44 Lines whose end points are vertices can be thought of as the propagation of virtual particles.[1]:31 Renormalisation Main article: Renormalisation Feynman rules can be used to directly evaluate tree-level diagrams. However, naïve computation of loop diagrams such as the one shown above will result in divergent momentum integrals, which seems to imply that almost all terms in the perturbative expansion are infinite. The renormalisation procedure is a systematic process for removing such infinities. Parameters appearing in the Lagrangian, such as the mass m and the coupling constant λ, have no physical meaning — m, λ, and the field strength ϕ are not experimentally measurable quantities and are referred to here as the bare mass, bare coupling constant, and bare field, respectively. The physical mass and coupling constant are measured in some interaction process and are generally different from the bare quantities.
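To make the divergence concrete before discussing how it is handled, here is a minimal sketch of the standard one-loop Euclidean momentum integral ∫ d⁴k/(2π)⁴ 1/(k² + m²), evaluated with a momentum cut-off; the values of m and Λ below are illustrative only:

```python
import numpy as np
from scipy.integrate import quad

m = 1.0

def regulated(cutoff):
    # 4D angular integration contributes a factor 2π²; the radial integral is cut at Λ
    radial, _ = quad(lambda k: k**3 / (k**2 + m**2), 0, cutoff)
    return 2 * np.pi**2 / (2 * np.pi)**4 * radial

for cutoff in (10.0, 100.0, 1000.0):
    print(cutoff, regulated(cutoff))   # grows roughly like Λ²/(16π²): a quadratic divergence
```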
While computing physical quantities from this interaction process, one may limit the domain of divergent momentum integrals to be below some momentum cut-off Λ, obtain expressions for the physical quantities, and then take the limit Λ → ∞. This is an example of regularisation, a class of methods to treat divergences in QFT, with Λ being the regulator. The approach illustrated above is called bare perturbation theory, as calculations involve only the bare quantities such as mass and coupling constant. A different approach, called renormalised perturbation theory, is to use physically meaningful quantities from the very beginning. In the case of ϕ4 theory, the field strength is first redefined: \( {\displaystyle \phi =Z^{1/2}\phi _{r},} \) where ϕ is the bare field, ϕr is the renormalised field, and Z is a constant to be determined. The Lagrangian density becomes: \( {\displaystyle {\mathcal {L}}={\frac {1}{2}}(\partial _{\mu }\phi _{r})(\partial ^{\mu }\phi _{r})-{\frac {1}{2}}m_{r}^{2}\phi _{r}^{2}-{\frac {\lambda _{r}}{4!}}\phi _{r}^{4}+{\frac {1}{2}}\delta _{Z}(\partial _{\mu }\phi _{r})(\partial ^{\mu }\phi _{r})-{\frac {1}{2}}\delta _{m}\phi _{r}^{2}-{\frac {\delta _{\lambda }}{4!}}\phi _{r}^{4},} \) where mr and λr are the experimentally measurable, renormalised, mass and coupling constant, respectively, and \( {\displaystyle \delta _{Z}=Z-1,\quad \delta _{m}=m^{2}Z-m_{r}^{2},\quad \delta _{\lambda }=\lambda Z^{2}-\lambda _{r}} \) are constants to be determined. The first three terms are the ϕ4 Lagrangian density written in terms of the renormalised quantities, while the latter three terms are referred to as "counterterms". As the Lagrangian now contains more terms, so the Feynman diagrams should include additional elements, each with their own Feynman rules. The procedure is outlined as follows. First select a regularisation scheme (such as the cut-off regularisation introduced above or dimensional regularization); call the regulator Λ. Compute Feynman diagrams, in which divergent terms will depend on Λ. Then, define δZ, δm, and δλ such that Feynman diagrams for the counterterms will exactly cancel the divergent terms in the normal Feynman diagrams when the limit Λ → ∞ is taken. In this way, meaningful finite quantities are obtained.[1]:323-326 It is only possible to eliminate all infinities to obtain a finite result in renormalisable theories, whereas in non-renormalisable theories infinities cannot be removed by the redefinition of a small number of parameters. The Standard Model of elementary particles is a renormalisable QFT,[1]:719–727 while quantum gravity is non-renormalisable.[1]:798[20]:421 Renormalisation group Main article: Renormalization group The renormalisation group, developed by Kenneth Wilson, is a mathematical apparatus used to study the changes in physical parameters (coefficients in the Lagrangian) as the system is viewed at different scales.[1]:393 The way in which each parameter changes with scale is described by its β function.[1]:417 Correlation functions, which underlie quantitative physical predictions, change with scale according to the Callan–Symanzik equation.[1]:410-411 As an example, the coupling constant in QED, namely the elementary charge e, has the following β function: \( {\displaystyle \beta (e)\equiv {\frac {1}{\Lambda }}{\frac {de}{d\Lambda }}={\frac {e^{3}}{12\pi ^{2}}}+O(e^{5}),} \) where Λ is the energy scale under which the measurement of e is performed. 
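A minimal numerical sketch, integrating this one-loop β function with the measured low-energy value of α as an illustrative boundary condition, shows the slow growth of the coupling with the logarithm of the scale:

```python
import numpy as np
from scipy.integrate import solve_ivp

def beta(log_scale, e):
    return e**3 / (12 * np.pi**2)          # one-loop QED beta function, de/d(ln Λ)

alpha0 = 1 / 137.036
e0 = np.sqrt(4 * np.pi * alpha0)           # coupling at the reference scale (natural units)
sol = solve_ivp(beta, [0.0, 20.0], [e0], dense_output=True)

for log_scale in (0.0, 10.0, 20.0):
    e = sol.sol(log_scale)[0]
    print(log_scale, e**2 / (4 * np.pi))   # α increases slowly as ln Λ grows
```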
This differential equation implies that the observed elementary charge increases as the scale increases.[23] The renormalized coupling constant, which changes with the energy scale, is also called the running coupling constant.[1]:420 The coupling constant g in quantum chromodynamics, a non-Abelian gauge theory based on the symmetry group SU(3), has the following β function: \( {\displaystyle \beta (g)\equiv {\frac {1}{\Lambda }}{\frac {dg}{d\Lambda }}={\frac {g^{3}}{16\pi ^{2}}}\left(-11+{\frac {2}{3}}N_{f}\right)+O(g^{5}),} \) where Nf is the number of quark flavours. In the case where Nf ≤ 16 (the Standard Model has Nf = 6), the coupling constant g decreases as the energy scale increases. Hence, while the strong interaction is strong at low energies, it becomes very weak in high-energy interactions, a phenomenon known as asymptotic freedom.[1]:531 Conformal field theories (CFTs) are special QFTs that admit conformal symmetry. They are insensitive to changes in the scale, as all their coupling constants have vanishing β function. (The converse is not true, however — the vanishing of all β functions does not imply conformal symmetry of the theory.)[24] Examples include string theory[14] and N = 4 supersymmetric Yang–Mills theory.[25] According to Wilson's picture, every QFT is fundamentally accompanied by its energy cut-off Λ, i.e. the theory is no longer valid at energies higher than Λ, and all degrees of freedom above the scale Λ are to be omitted. For example, the cut-off could be the inverse of the atomic spacing in a condensed matter system, and in elementary particle physics it could be associated with the fundamental "graininess" of spacetime caused by quantum fluctuations in gravity. The cut-off scale of theories of particle interactions lies far beyond current experiments. Even if the theory were very complicated at that scale, as long as its couplings are sufficiently weak, it must be described at low energies by a renormalisable effective field theory.[1]:402-403 The difference between renormalisable and non-renormalisable theories is that the former are insensitive to details at high energies, whereas the latter do depend on them.[8]:2 According to this view, non-renormalisable theories are to be seen as low-energy effective theories of a more fundamental theory. The failure to remove the cut-off Λ from calculations in such a theory merely indicates that new physical phenomena appear at scales above Λ, where a new theory is necessary.[20]:156 Other theories The quantisation and renormalisation procedures outlined in the preceding sections are performed for the free theory and ϕ4 theory of the real scalar field. A similar process can be done for other types of fields, including the complex scalar field, the vector field, and the Dirac field, as well as other types of interaction terms, including the electromagnetic interaction and the Yukawa interaction. As an example, quantum electrodynamics contains a Dirac field ψ representing the electron field and a vector field Aμ representing the electromagnetic field (photon field). (Despite its name, the quantum electromagnetic "field" actually corresponds to the classical electromagnetic four-potential, rather than the classical electric and magnetic fields.)
The full QED Lagrangian density is: \( {\displaystyle {\mathcal {L}}={\bar {\psi }}(i\gamma ^{\mu }\partial _{\mu }-m)\psi -{\frac {1}{4}}F_{\mu \nu }F^{\mu \nu }-e{\bar {\psi }}\gamma ^{\mu }\psi A_{\mu },} \) where γμ are Dirac matrices, \( {\displaystyle {\bar {\psi }}=\psi ^{\dagger }\gamma ^{0}} \), and \( {\displaystyle F_{\mu \nu }=\partial _{\mu }A_{\nu }-\partial _{\nu }A_{\mu }} \) is the electromagnetic field strength. The parameters in this theory are the (bare) electron mass m and the (bare) elementary charge e. The first and second terms in the Lagrangian density correspond to the free Dirac field and free vector fields, respectively. The last term describes the interaction between the electron and photon fields, which is treated as a perturbation from the free theories.[1]:78 A simple example of a tree-level Feynman diagram in QED describes an electron and a positron annihilating, creating an off-shell photon, which then decays into a new electron-positron pair. Time runs from left to right. Arrows pointing forward in time represent the propagation of electrons, while those pointing backward in time represent the propagation of positrons. A wavy line represents the propagation of a photon. Each vertex in QED Feynman diagrams must have an incoming and an outgoing fermion (positron/electron) leg as well as a photon leg. Gauge symmetry Main article: Gauge theory If the following transformation to the fields is performed at every spacetime point x (a local transformation), then the QED Lagrangian remains unchanged, or invariant: \( {\displaystyle \psi (x)\to e^{i\alpha (x)}\psi (x),\quad A_{\mu }(x)\to A_{\mu }(x)+ie^{-1}e^{-i\alpha (x)}\partial _{\mu }e^{i\alpha (x)},} \) where α(x) is any function of spacetime coordinates. If a theory's Lagrangian (or more precisely the action) is invariant under a certain local transformation, then the transformation is referred to as a gauge symmetry of the theory.[1]:482–483 Gauge symmetries form a group at every spacetime point. In the case of QED, the successive application of two different local symmetry transformations \( {\displaystyle e^{i\alpha (x)}} \) and \( {\displaystyle e^{i\alpha '(x)}} \) is yet another symmetry transformation \( {\displaystyle e^{i[\alpha (x)+\alpha '(x)]}} \). For any α(x), \( {\displaystyle e^{i\alpha (x)}} \) is an element of the U(1) group, thus QED is said to have U(1) gauge symmetry.[1]:496 The photon field Aμ may be referred to as the U(1) gauge boson. U(1) is an Abelian group, meaning that the result is the same regardless of the order in which its elements are applied. QFTs can also be built on non-Abelian groups, giving rise to non-Abelian gauge theories (also known as Yang–Mills theories).[1]:489 Quantum chromodynamics, which describes the strong interaction, is a non-Abelian gauge theory with an SU(3) gauge symmetry.
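The non-Abelian character of SU(3) can be made concrete by computing its structure constants f^abc, which appear in the gluon field strength below, from the Gell-Mann matrices; a minimal sketch:

```python
import numpy as np

# The eight Gell-Mann matrices λ^a; the SU(3) generators are t^a = λ^a/2.
lam = np.zeros((8, 3, 3), dtype=complex)
lam[0][0, 1] = lam[0][1, 0] = 1
lam[1][0, 1], lam[1][1, 0] = -1j, 1j
lam[2][0, 0], lam[2][1, 1] = 1, -1
lam[3][0, 2] = lam[3][2, 0] = 1
lam[4][0, 2], lam[4][2, 0] = -1j, 1j
lam[5][1, 2] = lam[5][2, 1] = 1
lam[6][1, 2], lam[6][2, 1] = -1j, 1j
lam[7] = np.diag([1, 1, -2]) / np.sqrt(3)
t = lam / 2

def f(a, b, c):
    # From [t^a, t^b] = i f^{abc} t^c and the normalisation Tr(t^a t^b) = δ^{ab}/2
    comm = t[a] @ t[b] - t[b] @ t[a]
    return (-2j * np.trace(comm @ t[c])).real

print(f(0, 1, 2))   # f^{123} = 1
print(f(3, 4, 7))   # f^{458} = √3/2 ≈ 0.866
```

The fact that these constants are non-zero is precisely what distinguishes a non-Abelian gauge group from U(1).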
QCD contains three Dirac fields ψi, i = 1,2,3 representing quark fields as well as eight vector fields Aa,μ, a = 1,...,8 representing gluon fields, which are the SU(3) gauge bosons.[1]:547 The QCD Lagrangian density is:[1]:490-491 \( {\displaystyle {\mathcal {L}}=i{\bar {\psi }}^{i}\gamma ^{\mu }(D_{\mu })^{ij}\psi ^{j}-{\frac {1}{4}}F_{\mu \nu }^{a}F^{a,\mu \nu }-m{\bar {\psi }}^{i}\psi ^{i},} \) where Dμ is the gauge covariant derivative: \( {\displaystyle D_{\mu }=\partial _{\mu }-igA_{\mu }^{a}t^{a},} \) where g is the coupling constant, ta are the eight generators of SU(3) in the fundamental representation (3×3 matrices), \( {\displaystyle F_{\mu \nu }^{a}=\partial _{\mu }A_{\nu }^{a}-\partial _{\nu }A_{\mu }^{a}+gf^{abc}A_{\mu }^{b}A_{\nu }^{c},} \) and fabc are the structure constants of SU(3). Repeated indices i,j,a are implicitly summed over following Einstein notation. This Lagrangian is invariant under the transformation: \( {\displaystyle \psi ^{i}(x)\to U^{ij}(x)\psi ^{j}(x),\quad A_{\mu }^{a}(x)t^{a}\to U(x)\left[A_{\mu }^{a}(x)t^{a}+ig^{-1}\partial _{\mu }\right]U^{\dagger }(x),} \) where U(x) is an element of SU(3) at every spacetime point x: \( {\displaystyle U(x)=e^{i\alpha ^{a}(x)t^{a}}.} \) The preceding discussion of symmetries is on the level of the Lagrangian. In other words, these are "classical" symmetries. After quantisation, some theories will no longer exhibit their classical symmetries, a phenomenon called anomaly. For instance, in the path integral formulation, despite the invariance of the Lagrangian density \( {\displaystyle {\mathcal {L}}[\phi ,\partial _{\mu }\phi ]} \) under a certain local transformation of the fields, the measure \( {\displaystyle \int {\mathcal {D}}\phi } \) of the path integral may change.[20]:243 For a theory describing nature to be consistent, it must not contain any anomaly in its gauge symmetry. The Standard Model of elementary particles is a gauge theory based on the group SU(3) × SU(2) × U(1), in which all anomalies exactly cancel.[1]:705-707 The theoretical foundation of general relativity, the equivalence principle, can also be understood as a form of gauge symmetry, making general relativity a gauge theory based on the Lorentz group.[26] Noether's theorem states that every continuous symmetry, i.e. the parameter in the symmetry transformation being continuous rather than discrete, leads to a corresponding conservation law.[1]:17-18[20]:73 For example, the U(1) symmetry of QED implies charge conservation.[27] Gauge transformations do not relate distinct quantum states. Rather, they relate two equivalent mathematical descriptions of the same quantum state. As an example, the photon field Aμ, being a four-vector, has four apparent degrees of freedom, but the actual state of a photon is described by its two degrees of freedom corresponding to the polarisation. The remaining two degrees of freedom are said to be "redundant" — apparently different ways of writing Aμ can be related to each other by a gauge transformation and in fact describe the same state of the photon field. In this sense, gauge invariance is not a "real" symmetry, but a reflection of the "redundancy" of the chosen mathematical description.[20]:168 To account for the gauge redundancy in the path integral formulation, one must perform the so-called Faddeev–Popov gauge fixing procedure. In non-Abelian gauge theories, such a procedure introduces new fields called "ghosts".
Particles corresponding to the ghost fields are called ghost particles, which cannot be detected externally.[1]:512-515 A more rigorous generalisation of the Faddeev–Popov procedure is given by BRST quantization.[1]:517 Spontaneous symmetry breaking Main article: Spontaneous symmetry breaking Spontaneous symmetry breaking is a mechanism whereby the symmetry of the Lagrangian is violated by the system described by it.[1]:347 To illustrate the mechanism, consider a linear sigma model containing N real scalar fields, described by the Lagrangian density: \( {\displaystyle {\mathcal {L}}={\frac {1}{2}}(\partial _{\mu }\phi ^{i})(\partial ^{\mu }\phi ^{i})+{\frac {1}{2}}\mu ^{2}\phi ^{i}\phi ^{i}-{\frac {\lambda }{4}}(\phi ^{i}\phi ^{i})^{2},} \) where μ and λ are real parameters. The theory admits an O(N) global symmetry: \( {\displaystyle \phi ^{i}\to R^{ij}\phi ^{j},\quad R\in \mathrm {O} (N).} \) The lowest energy state (ground state or vacuum state) of the classical theory is any uniform field ϕ0 satisfying \( {\displaystyle \phi _{0}^{i}\phi _{0}^{i}={\frac {\mu ^{2}}{\lambda }}.} \) Without loss of generality, let the ground state be in the N-th direction: \( {\displaystyle \phi _{0}^{i}=\left(0,\cdots ,0,{\frac {\mu }{\sqrt {\lambda }}}\right).} \) The original N fields can be rewritten as: \( {\displaystyle \phi ^{i}(x)=\left(\pi ^{1}(x),\cdots ,\pi ^{N-1}(x),{\frac {\mu }{\sqrt {\lambda }}}+\sigma (x)\right),} \) and the original Lagrangian density as: \( {\displaystyle {\mathcal {L}}={\frac {1}{2}}(\partial _{\mu }\pi ^{k})(\partial ^{\mu }\pi ^{k})+{\frac {1}{2}}(\partial _{\mu }\sigma )(\partial ^{\mu }\sigma )-{\frac {1}{2}}(2\mu ^{2})\sigma ^{2}-{\sqrt {\lambda }}\mu \sigma ^{3}-{\sqrt {\lambda }}\mu \pi ^{k}\pi ^{k}\sigma -{\frac {\lambda }{2}}\pi ^{k}\pi ^{k}\sigma ^{2}-{\frac {\lambda }{4}}(\pi ^{k}\pi ^{k})^{2},} \) where k = 1,...,N-1. The original O(N) global symmetry is no longer manifest, leaving only the subgroup O(N-1). The larger symmetry before spontaneous symmetry breaking is said to be "hidden" or spontaneously broken.[1]:349-350 Goldstone's theorem states that under spontaneous symmetry breaking, every broken continuous global symmetry leads to a massless field called the Goldstone boson. In the above example, O(N) has N(N-1)/2 continuous symmetries (the dimension of its Lie algebra), while O(N-1) has (N-1)(N-2)/2. The number of broken symmetries is their difference, N-1, which corresponds to the N-1 massless fields πk.[1]:351 On the other hand, when a gauge (as opposed to global) symmetry is spontaneously broken, the resulting Goldstone boson is "eaten" by the corresponding gauge boson by becoming an additional degree of freedom for the gauge boson. The Goldstone boson equivalence theorem states that at high energy, the amplitude for emission or absorption of a longitudinally polarised massive gauge boson becomes equal to the amplitude for emission or absorption of the Goldstone boson that was eaten by the gauge boson.[1]:743-744 In the QFT of ferromagnetism, spontaneous symmetry breaking can explain the alignment of magnetic dipoles at low temperatures.[20]:199 In the Standard Model of elementary particles, the W and Z bosons, which would otherwise be massless as a result of gauge symmetry, acquire mass through spontaneous symmetry breaking of the Higgs boson, a process called the Higgs mechanism.[1]:690 Main article: Supersymmetry All experimentally known symmetries in nature relate bosons to bosons and fermions to fermions. 
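Returning to the linear sigma model above, the Goldstone counting (N-1 massless π modes plus one σ mode of squared mass 2μ²) can be read off numerically from the Hessian of the potential at one of its minima; the parameter values below are illustrative only:

```python
import numpy as np

N, mu, lam = 4, 1.0, 0.5

def hessian(phi):
    # Second derivatives of V(φ) = -μ²/2 φ·φ + λ/4 (φ·φ)² with respect to the fields
    phi2 = phi @ phi
    return (-mu**2 + lam * phi2) * np.eye(N) + 2 * lam * np.outer(phi, phi)

phi0 = np.zeros(N)
phi0[-1] = mu / np.sqrt(lam)                 # a ground state chosen along the N-th direction
print(np.linalg.eigvalsh(hessian(phi0)))     # [0, 0, 0, 2μ²]: N-1 Goldstone modes and one σ
```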
Supersymmetry

Main article: Supersymmetry

All experimentally known symmetries in nature relate bosons to bosons and fermions to fermions. Theorists have hypothesised the existence of a type of symmetry, called supersymmetry, that relates bosons and fermions.[1]:795[20]:443

The Standard Model obeys Poincaré symmetry, whose generators are the spacetime translations Pμ and the Lorentz transformations Jμν.[28]:58–60 In addition to these generators, supersymmetry in (3+1) dimensions includes additional generators Qα, called supercharges, which themselves transform as Weyl fermions.[1]:795[20]:444 The symmetry group generated by all these generators is known as the super-Poincaré group. In general there can be more than one set of supersymmetry generators, QαI, I = 1, ..., N, which generate the corresponding N = 1 supersymmetry, N = 2 supersymmetry, and so on.[1]:795[20]:450 Supersymmetry can also be constructed in other dimensions,[29] most notably in (1+1) dimensions for its application in superstring theory.[30]

The Lagrangian of a supersymmetric theory must be invariant under the action of the super-Poincaré group.[20]:448 Examples of such theories include the Minimal Supersymmetric Standard Model (MSSM), N = 4 supersymmetric Yang–Mills theory,[20]:450 and superstring theory. In a supersymmetric theory, every fermion has a bosonic superpartner and vice versa.[20]:444 If supersymmetry is promoted to a local symmetry, then the resultant gauge theory is an extension of general relativity called supergravity.[31]

Supersymmetry is a potential solution to many current problems in physics. For example, the hierarchy problem of the Standard Model—why the mass of the Higgs boson is not radiatively corrected (under renormalisation) to a very high scale such as the grand unified scale or the Planck scale—can be resolved by relating the Higgs field and its superpartner, the Higgsino. Radiative corrections due to Higgs boson loops in Feynman diagrams are cancelled by corresponding Higgsino loops. Supersymmetry also offers answers to the grand unification of all gauge coupling constants in the Standard Model as well as the nature of dark matter.[1]:796-797[32]

Nevertheless, as of 2018, experiments have yet to provide evidence for the existence of supersymmetric particles. If supersymmetry is a true symmetry of nature, then it must be a broken symmetry, and the energy scale of symmetry breaking must be higher than that achievable by present-day experiments.[1]:797[20]:443

Other spacetimes

The ϕ4 theory, QED, QCD, as well as the whole Standard Model all assume a (3+1)-dimensional Minkowski space (3 spatial and 1 time dimensions) as the background on which the quantum fields are defined. However, QFT a priori imposes no restriction on the number of spacetime dimensions or on the geometry of spacetime.

In condensed matter physics, QFT is used to describe (2+1)-dimensional electron gases.[33] In high-energy physics, string theory is a type of (1+1)-dimensional QFT,[20]:452[14] while Kaluza–Klein theory uses gravity in extra dimensions to produce gauge theories in lower dimensions.[20]:428-429

In Minkowski space, the flat metric ημν is used to raise and lower spacetime indices in the Lagrangian, e.g.

\( {\displaystyle A_{\mu }A^{\mu }=\eta _{\mu \nu }A^{\mu }A^{\nu },\quad \partial _{\mu }\phi \partial ^{\mu }\phi =\eta ^{\mu \nu }\partial _{\mu }\phi \partial _{\nu }\phi ,} \)

where the metric with upper indices, \( {\displaystyle \eta ^{\mu \nu }} \), is the inverse of \( {\displaystyle \eta _{\mu \nu }} \), satisfying \( {\displaystyle \eta ^{\mu \rho }\eta _{\rho \nu }=\delta _{\nu }^{\mu }} \).
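For concreteness, a common choice consistent with the signs of the kinetic and mass terms in the Lagrangians above is the "mostly minus" flat metric (the specific signature convention is an assumption here, since the text does not state one):

\( {\displaystyle \eta _{\mu \nu }=\operatorname {diag} (+1,-1,-1,-1),\qquad A^{\mu }=\eta ^{\mu \nu }A_{\nu },\qquad A_{\mu }A^{\mu }=(A^{0})^{2}-\mathbf {A} \cdot \mathbf {A} .} \)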
For QFTs in curved spacetime, on the other hand, a general metric (such as the Schwarzschild metric describing a black hole) is used:

\( {\displaystyle A_{\mu }A^{\mu }=g_{\mu \nu }A^{\mu }A^{\nu },\quad \partial _{\mu }\phi \partial ^{\mu }\phi =g^{\mu \nu }\partial _{\mu }\phi \partial _{\nu }\phi ,} \)

where \( {\displaystyle g^{\mu \nu }} \) is the inverse of \( {\displaystyle g_{\mu \nu }} \). For a real scalar field, the Lagrangian density in a general spacetime background is

\( {\displaystyle {\mathcal {L}}={\sqrt {|g|}}\left({\frac {1}{2}}g^{\mu \nu }\nabla _{\mu }\phi \nabla _{\nu }\phi -{\frac {1}{2}}m^{2}\phi ^{2}\right),} \)

where g = det(gμν), and ∇μ denotes the covariant derivative.[34] The Lagrangian of a QFT, and hence its calculational results and physical predictions, depend on the geometry of the spacetime background.

Topological quantum field theory

Main article: Topological quantum field theory

The correlation functions and physical predictions of a QFT depend on the spacetime metric gμν. For a special class of QFTs called topological quantum field theories (TQFTs), all correlation functions are independent of continuous changes in the spacetime metric.[35]:36 QFTs in curved spacetime generally change according to the geometry (local structure) of the spacetime background, while TQFTs are invariant under spacetime diffeomorphisms but are sensitive to the topology (global structure) of spacetime. This means that all calculational results of TQFTs are topological invariants of the underlying spacetime. Chern–Simons theory is an example of a TQFT and has been used to construct models of quantum gravity.[36] Applications of TQFT include the fractional quantum Hall effect and topological quantum computers.[37]:1–5 The world line trajectory of fractionalised particles (known as anyons) can form a link configuration in spacetime,[38] which relates the braiding statistics of anyons in physics to the link invariants in mathematics. TQFTs relevant to frontier research on topological quantum matter include the Chern–Simons–Witten gauge theories in 2+1 spacetime dimensions and other new, exotic TQFTs in 3+1 spacetime dimensions and beyond.[39]

Perturbative and non-perturbative methods

Using perturbation theory, the total effect of a small interaction term can be approximated order by order by a series expansion in the number of virtual particles participating in the interaction. Every term in the expansion may be understood as one possible way for (physical) particles to interact with each other via virtual particles, expressed visually using a Feynman diagram. The electromagnetic force between two electrons in QED is represented (to first order in perturbation theory) by the propagation of a virtual photon. In a similar manner, the W and Z bosons carry the weak interaction, while gluons carry the strong interaction. The interpretation of an interaction as a sum of intermediate states involving the exchange of various virtual particles only makes sense in the framework of perturbation theory.
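Schematically, "order by order" means that an amplitude is expanded as a power series in the coupling (a generic sketch; the detailed form of the coefficients depends on the theory in question):

\( {\displaystyle {\mathcal {M}}=g\,{\mathcal {M}}^{(1)}+g^{2}\,{\mathcal {M}}^{(2)}+g^{3}\,{\mathcal {M}}^{(3)}+\cdots ,} \)

where each coefficient \( {\displaystyle {\mathcal {M}}^{(n)}} \) is obtained by summing the Feynman diagrams with n interaction vertices. In QED the expansion parameter is the electron charge e (equivalently the fine-structure constant α ≈ 1/137), which is why the lowest-order diagram, single virtual-photon exchange, already captures the electromagnetic force between two electrons to good accuracy.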
In contrast, non-perturbative methods in QFT treat the interacting Lagrangian as a whole without any series expansion. Instead of particles that carry interactions, these methods have spawned such concepts as the 't Hooft–Polyakov monopole, the domain wall, the flux tube, and the instanton.[8] Examples of QFTs that are completely solvable non-perturbatively include minimal models of conformal field theory[40] and the Thirring model.[41]

Mathematical rigour

In spite of its overwhelming success in particle physics and condensed matter physics, QFT itself lacks a formal mathematical foundation. For example, according to Haag's theorem, there does not exist a well-defined interaction picture for QFT, which implies that perturbation theory of QFT, which underlies the entire Feynman diagram method, is fundamentally ill-defined.[42]

However, perturbative quantum field theory, which only requires that quantities be computable as a formal power series without any convergence requirements, can be given a rigorous mathematical treatment. In particular, Kevin Costello's monograph Renormalization and Effective Field Theory[43] provides a rigorous formulation of perturbative renormalization that combines the effective-field-theory approaches of Kadanoff, Wilson, and Polchinski with the Batalin–Vilkovisky approach to quantizing gauge theories. Furthermore, perturbative path-integral methods, typically understood as formal computational methods inspired by finite-dimensional integration theory,[44] can be given a sound mathematical interpretation from their finite-dimensional analogues.[45]

Since the 1950s,[46] theoretical physicists and mathematicians have attempted to organise all QFTs into a set of axioms, in order to establish the existence of concrete models of relativistic QFT in a mathematically rigorous way and to study their properties. This line of study is called constructive quantum field theory, a subfield of mathematical physics,[47]:2 which has led to such results as the CPT theorem, the spin–statistics theorem, and Goldstone's theorem.[46] Compared to ordinary QFT, topological quantum field theory and conformal field theory are better supported mathematically — both can be classified in the framework of representations of cobordisms.[48]

Algebraic quantum field theory is another approach to the axiomatisation of QFT, in which the fundamental objects are local operators and the algebraic relations between them. Axiomatic systems following this approach include the Wightman axioms and the Haag–Kastler axioms.[47]:2-3 One way to construct theories satisfying the Wightman axioms is to use the Osterwalder–Schrader axioms, which give the necessary and sufficient conditions for a real-time theory to be obtained from an imaginary-time theory by analytic continuation (Wick rotation).[47]:10

Yang–Mills existence and mass gap, one of the Millennium Prize Problems, concerns the well-defined existence of Yang–Mills theories as set out by the above axioms. The full problem statement is as follows.[49]

Prove that for any compact simple gauge group G, a non-trivial quantum Yang–Mills theory exists on \( \mathbb {R} ^{4} \) and has a mass gap Δ > 0. Existence includes establishing axiomatic properties at least as strong as those cited in Streater & Wightman (1964), Osterwalder & Schrader (1973) and Osterwalder & Schrader (1975).
See also

Abraham–Lorentz force
AdS/CFT correspondence
Axiomatic quantum field theory
Introduction to quantum mechanics
Common integrals in quantum field theory
Conformal field theory
Constructive quantum field theory
Einstein–Maxwell–Dirac equations
Form factor (quantum field theory)
Green–Kubo relations
Green's function (many-body theory)
Group field theory
Lattice field theory
List of quantum field theories
Local quantum field theory
Noncommutative quantum field theory
Quantization of a field
Quantum electrodynamics
Quantum field theory in curved spacetime
Quantum chromodynamics
Quantum flavordynamics
Quantum hadrodynamics
Quantum hydrodynamics
Quantum triviality
Relation between Schrödinger's equation and the path integral formulation of quantum mechanics
Relationship between string theory and quantum field theory
Schwinger–Dyson equation
Static forces and virtual-particle exchange
Symmetry in quantum mechanics
Theoretical and experimental justification for the Schrödinger equation
Topological quantum field theory
Ward–Takahashi identity
Wheeler–Feynman absorber theory
Wigner's classification
Wigner's theorem

References

Hobson, Art (2013). "There are no particles, there are only fields". American Journal of Physics. 81 (211): 211–223. arXiv:1204.4616. Bibcode:2013AmJPh..81..211H. doi:10.1119/1.4789885.
John L. Heilbron (14 February 2003). The Oxford Companion to the History of Modern Science. Oxford University Press. ISBN 978-0-19-974376-6.
Joseph John Thomson (1893). Notes on Recent Researches in Electricity and Magnetism: Intended as a Sequel to Professor Clerk-Maxwell's 'Treatise on Electricity and Magnetism'. Dawsons.
Weisskopf, Victor (November 1981). "The development of field theory in the last 50 years". Physics Today. 34 (11): 69–85. Bibcode:1981PhT....34k..69W. doi:10.1063/1.2914365.
Werner Heisenberg (1999). Physics and Philosophy: The Revolution in Modern Science. Prometheus Books. ISBN 978-1-57392-694-2.
Shifman, M. (2012). Advanced Topics in Quantum Field Theory. Cambridge University Press. ISBN 978-0-521-19084-8.
't Hooft, Gerard (2015-03-17). "The Evolution of Quantum Field Theory". The Standard Theory of Particle Physics. Advanced Series on Directions in High Energy Physics. 26. pp. 1–27. arXiv:1503.05007. Bibcode:2016stpp.conf....1T. doi:10.1142/9789814733519_0001. ISBN 978-981-4733-50-2.
Yang, C. N.; Mills, R. L. (1954-10-01). "Conservation of Isotopic Spin and Isotopic Gauge Invariance". Physical Review. 96 (1): 191–195. Bibcode:1954PhRv...96..191Y. doi:10.1103/PhysRev.96.191.
Sutton, Christine. "Standard model". britannica.com. Encyclopædia Britannica. Retrieved 2018-08-14.
Kibble, Tom W. B. (2014-12-12). "The Standard Model of Particle Physics". arXiv:1412.4094 [physics.hist-ph].
Polchinski, Joseph (2005). String Theory. 1. Cambridge University Press. ISBN 978-0-521-67227-6.
Schwarz, John H. (2012-01-04). "The Early History of String Theory and Supersymmetry". arXiv:1201.0981 [physics.hist-ph].
"Common Problems in Condensed Matter and High Energy Physics" (PDF). science.energy.gov. Office of Science, U.S. Department of Energy. 2015-02-02. Retrieved 2018-07-18.
Wilczek, Frank (2016-04-19). "Particle Physics and Condensed Matter: The Saga Continues". Physica Scripta. 2016 (T168): 014003. arXiv:1604.05669. Bibcode:2016PhST..168a4003W. doi:10.1088/0031-8949/T168/1/014003.
Tong 2015, Chapter 1.
In fact, its number of degrees of freedom is uncountable, because the vector space dimension of the space of continuous (differentiable, real analytic) functions on even a finite dimensional Euclidean space is uncountable. On the other hand, subspaces (of these function spaces) that one typically considers, such as Hilbert spaces (e.g. the space of square integrable real valued functions) or separable Banach spaces (e.g. the space of continuous real-valued functions on a compact interval, with the uniform convergence norm), have denumerable (i.e. countably infinite) dimension in the category of Banach spaces (though still their Euclidean vector space dimension is uncountable), so in these restricted contexts, the number of degrees of freedom (interpreted now as the vector space dimension of a dense subspace rather than the vector space dimension of the function space of interest itself) is denumerable.
Zee, A. (2010). Quantum Field Theory in a Nutshell. Princeton University Press. ISBN 978-0-691-01019-9.
Fock, V. (1932-03-10). "Konfigurationsraum und zweite Quantelung". Zeitschrift für Physik (in German). 75 (9–10): 622–647. Bibcode:1932ZPhy...75..622F. doi:10.1007/BF01344458.
Becker, Katrin; Becker, Melanie; Schwarz, John H. (2007). String Theory and M-Theory. Cambridge University Press. p. 36. ISBN 978-0-521-86069-7.
Fujita, Takehisa (2008-02-01). "Physics of Renormalization Group Equation in QED". arXiv:hep-th/0606101.
Aharony, Ofer; Gur-Ari, Guy; Klinghoffer, Nizan (2015-05-19). "The Holographic Dictionary for Beta Functions of Multi-trace Coupling Constants". Journal of High Energy Physics. 2015 (5): 31. arXiv:1501.06664. Bibcode:2015JHEP...05..031A. doi:10.1007/JHEP05(2015)031.
Kovacs, Stefano (1999-08-26). "N = 4 supersymmetric Yang–Mills theory and the AdS/SCFT correspondence". arXiv:hep-th/9908171.
Veltman, M. J. G. (1976). Methods in Field Theory, Proceedings of the Les Houches Summer School, Les Houches, France, 1975.
Brading, Katherine A. (March 2002). "Which symmetry? Noether, Weyl, and conservation of electric charge". Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics. 33 (1): 3–22. Bibcode:2002SHPMP..33....3B. doi:10.1016/S1355-2198(01)00033-8.
Weinberg, Steven (1995). The Quantum Theory of Fields. Cambridge University Press. ISBN 978-0-521-55001-7.
de Wit, Bernard; Louis, Jan (1998-02-18). "Supersymmetry and Dualities in various dimensions". arXiv:hep-th/9801132.
Polchinski, Joseph (2005). String Theory. 2. Cambridge University Press. ISBN 978-0-521-67228-3.
Nath, P.; Arnowitt, R. (1975). "Generalized Super-Gauge Symmetry as a New Framework for Unified Gauge Theories". Physics Letters B. 56 (2): 177. Bibcode:1975PhLB...56..177N. doi:10.1016/0370-2693(75)90297-x.
Munoz, Carlos (2017-01-18). "Models of Supersymmetry for Dark Matter". EPJ Web of Conferences. 136: 01002. arXiv:1701.05259. Bibcode:2017EPJWC.13601002M. doi:10.1051/epjconf/201713601002.
Morandi, G.; Sodano, P.; Tagliacozzo, A.; Tognetti, V. (2000). Field Theories for Low-Dimensional Condensed Matter Systems. Springer. ISBN 978-3-662-04273-1.
Parker, Leonard E.; Toms, David J. (2009). Quantum Field Theory in Curved Spacetime. Cambridge University Press. p. 43. ISBN 978-0-521-87787-9.
Ivancevic, Vladimir G.; Ivancevic, Tijana T. (2008-12-11). "Undergraduate Lecture Notes in Topological Quantum Field Theory". arXiv:0810.0344v5 [math-ph].
Carlip, Steven (1998). Quantum Gravity in 2+1 Dimensions. Cambridge University Press. pp. 27–29. doi:10.1017/CBO9780511564192. ISBN 9780511564192.
Carqueville, Nils; Runkel, Ingo (2017-05-16). "Introductory lectures on topological quantum field theory". arXiv:1705.05734 [math.QA].
Witten, Edward (1989). "Quantum Field Theory and the Jones Polynomial". Communications in Mathematical Physics. 121 (3): 351–399. Bibcode:1989CMaPh.121..351W. doi:10.1007/BF01217730. MR 0990772.
Putrov, Pavel; Wang, Juven; Yau, Shing-Tung (2017). "Braiding Statistics and Link Invariants of Bosonic/Fermionic Topological Quantum Matter in 2+1 and 3+1 dimensions". Annals of Physics. 384 (C): 254–287. arXiv:1612.09298. doi:10.1016/j.aop.2017.06.019.
Di Francesco, Philippe; Mathieu, Pierre; Sénéchal, David (1997). Conformal Field Theory. Springer. ISBN 978-1-4612-7475-9.
Thirring, W. (1958). "A Soluble Relativistic Field Theory?". Annals of Physics. 3 (1): 91–112. Bibcode:1958AnPhy...3...91T. doi:10.1016/0003-4916(58)90015-0.
Haag, Rudolf (1955). "On Quantum Field Theories" (PDF). Dan Mat Fys Medd. 29 (12).
Kevin Costello, Renormalization and Effective Field Theory, Mathematical Surveys and Monographs Volume 170, American Mathematical Society, 2011, ISBN 978-0-8218-5288-0.
Gerald B. Folland, Quantum Field Theory: A Tourist Guide for Mathematicians, Mathematical Surveys and Monographs Volume 149, American Mathematical Society, 2008, ISBN 0821847058.
Nguyen, Timothy (2016). "The perturbative approach to path integrals: A succinct mathematical treatment". J. Math. Phys. 57. arXiv:1505.04809. doi:10.1063/1.4962800.
Buchholz, Detlev (2000). "Current Trends in Axiomatic Quantum Field Theory". Quantum Field Theory. Lecture Notes in Physics. 558: 43–64. arXiv:hep-th/9811233. Bibcode:2000LNP...558...43B. doi:10.1007/3-540-44482-3_4. ISBN 978-3-540-67972-1.
Summers, Stephen J. (2016-03-31). "A Perspective on Constructive Quantum Field Theory". arXiv:1203.3991v2 [math-ph].
Sati, Hisham; Schreiber, Urs (2012-01-06). "Survey of mathematical foundations of QFT and perturbative string theory". arXiv:1109.0955v2 [math-ph].
Jaffe, Arthur; Witten, Edward. "Quantum Yang–Mills Theory" (PDF). Clay Mathematics Institute. Retrieved 2018-07-18.

Further reading

General readers

Pais, A. (1994) [1986]. Inward Bound: Of Matter and Forces in the Physical World (reprint ed.). Oxford, New York, Toronto: Oxford University Press. ISBN 978-0198519973.
Schweber, S. S. (1994). QED and the Men Who Made It: Dyson, Feynman, Schwinger, and Tomonaga. Princeton University Press. ISBN 9780691033273.
Feynman, R.P. (2001) [1964]. The Character of Physical Law. MIT Press. ISBN 978-0-262-56003-0.
Feynman, R.P. (2006) [1985]. QED: The Strange Theory of Light and Matter. Princeton University Press. ISBN 978-0-691-12575-6.
Gribbin, J. (1998). Q is for Quantum: Particle Physics from A to Z. Weidenfeld & Nicolson. ISBN 978-0-297-81752-9.

Introductory texts

McMahon, D. (2008). Quantum Field Theory. McGraw-Hill. ISBN 978-0-07-154382-8.
Bogolyubov, N.; Shirkov, D. (1982). Quantum Fields. Benjamin Cummings. ISBN 978-0-8053-0983-6.
Frampton, P.H. (2000). Gauge Field Theories. Frontiers in Physics (2nd ed.). Wiley.
Greiner, W.; Müller, B. (2000). Gauge Theory of Weak Interactions. Springer. ISBN 978-3-540-67672-0.
Itzykson, C.; Zuber, J.-B. (1980). Quantum Field Theory. McGraw-Hill. ISBN 978-0-07-032071-0.
Kane, G.L. (1987). Modern Elementary Particle Physics. Perseus Group. ISBN 978-0-201-11749-3.
Kleinert, H.; Schulte-Frohlinde, Verena (2001). Critical Properties of φ4-Theories. World Scientific. ISBN 978-981-02-4658-7.
Kleinert, H. (2008). Multivalued Fields in Condensed Matter, Electrodynamics, and Gravitation (PDF). World Scientific. ISBN 978-981-279-170-2.
Loudon, R (1983). The Quantum Theory of Light. Oxford University Press. ISBN 978-0-19-851155-7.
Mandl, F.; Shaw, G. (1993). Quantum Field Theory. John Wiley & Sons. ISBN 978-0-471-94186-6.
Ryder, L.H. (1985). Quantum Field Theory. Cambridge University Press. ISBN 978-0-521-33859-2.
Schwartz, M.D. (2014). Quantum Field Theory and the Standard Model. Cambridge University Press. ISBN 978-1107034730. Archived from the original on 2018-03-22. Retrieved 2020-05-13.
Ynduráin, F.J. (1996). Relativistic Quantum Mechanics and Introduction to Field Theory (1st ed.). Springer. Bibcode:1996rqmi.book.....Y. doi:10.1007/978-3-642-61057-8. ISBN 978-3-540-60453-2.
Greiner, W.; Reinhardt, J. (1996). Field Quantization. Springer. ISBN 978-3-540-59179-5.
Scharf, Günter (2014) [1989]. Finite Quantum Electrodynamics: The Causal Approach (third ed.). Dover Publications. ISBN 978-0486492735.
Srednicki, M. (2007). Quantum Field Theory. Cambridge University Press. ISBN 978-0-521-86449-7.
Tong, David (2015). "Lectures on Quantum Field Theory". Retrieved 2016-02-09.

Advanced texts

Brown, Lowell S. (1994). Quantum Field Theory. Cambridge University Press. ISBN 978-0-521-46946-3.
Bogoliubov, N.; Logunov, A.A.; Oksak, A.I.; Todorov, I.T. (1990). General Principles of Quantum Field Theory. Kluwer Academic Publishers. ISBN 978-0-7923-0540-8.
Weinberg, S. (1995). The Quantum Theory of Fields. 1. Cambridge University Press. ISBN 978-0521550017.

External links

One-dimensional quantum field theory on Wikiversity
"Quantum field theory", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Stanford Encyclopedia of Philosophy: "Quantum Field Theory", by Meinard Kuhlmann.
Siegel, Warren, 2005. Fields. arXiv:hep-th/9912205.
Quantum Field Theory by P. J. Mulders
Richard P. Feynman
The Development of the Space-Time View of Quantum Electrodynamics
Nobel Lecture, December 11, 1965

We have a habit in writing articles published in scientific journals to make the work as finished as possible, to cover all the tracks, to not worry about the blind alleys or to describe how you had the wrong idea first, and so on. So there isn't any place to publish, in a dignified manner, what you actually did in order to get to do the work, although, there has been in these days, some interest in this kind of thing. Since winning the prize is a personal thing, I thought I could be excused in this particular situation, if I were to talk personally about my relationship to quantum electrodynamics, rather than to discuss the subject itself in a refined and finished fashion. Furthermore, since there are three people who have won the prize in physics, if they are all going to be talking about quantum electrodynamics itself, one might become bored with the subject. So, what I would like to tell you about today are the sequence of events, really the sequence of ideas, which occurred, and by which I finally came out the other end with an unsolved problem for which I ultimately received a prize. I realize that a truly scientific paper would be of greater value, but such a paper I could publish in regular journals. So, I shall use this Nobel Lecture as an opportunity to do something of less value, but which I cannot do elsewhere. I ask your indulgence in another manner. I shall include details of anecdotes which are of no value either scientifically, nor for understanding the development of ideas. They are included only to make the lecture more entertaining.

I worked on this problem about eight years until the final publication in 1947. The beginning of the thing was at the Massachusetts Institute of Technology, when I was an undergraduate student reading about the known physics, learning slowly about all these things that people were worrying about, and realizing ultimately that the fundamental problem of the day was that the quantum theory of electricity and magnetism was not completely satisfactory. This I gathered from books like those of Heitler and Dirac. I was inspired by the remarks in these books; not by the parts in which everything was proved and demonstrated carefully and calculated, because I couldn't understand those very well. At the young age what I could understand were the remarks about the fact that this doesn't make any sense, and the last sentence of the book of Dirac I can still remember, "It seems that some essentially new physical ideas are here needed." So, I had this as a challenge and an inspiration. I also had a personal feeling, that since they didn't get a satisfactory answer to the problem I wanted to solve, I don't have to pay a lot of attention to what they did do.

I did gather from my readings, however, that two things were the source of the difficulties with the quantum electrodynamical theories. The first was an infinite energy of interaction of the electron with itself. And this difficulty existed even in the classical theory.
The other difficulty came from some infinities which had to do with the infinite numbers of degrees of freedom in the field. As I understood it at the time (as nearly as I can remember), this was simply the difficulty that if you quantized the harmonic oscillators of the field (say in a box), each oscillator has a ground state energy of ½ℏω and there is an infinite number of modes in a box of ever increasing frequency ω, and therefore there is an infinite energy in the box. I now realize that that wasn't a completely correct statement of the central problem; it can be removed simply by changing the zero from which energy is measured. At any rate, I believed that the difficulty arose somehow from a combination of the electron acting on itself and the infinite number of degrees of freedom of the field.

Well, it seemed to me quite evident that the idea that a particle acts on itself, that the electrical force acts on the same particle that generates it, is not a necessary one - it is a sort of a silly one, as a matter of fact. And, so I suggested to myself, that electrons cannot act on themselves, they can only act on other electrons. That means there is no field at all. You see, if all charges contribute to making a single common field, and if that common field acts back on all the charges, then each charge must act back on itself. Well, that was where the mistake was, there was no field. It was just that when you shook one charge, another would shake later. There was a direct interaction between charges, albeit with a delay. The law of force connecting the motion of one charge with another would just involve a delay. Shake this one, that one shakes later. The sun atom shakes; my eye electron shakes eight minutes later, because of a direct interaction across.

Now, this has the attractive feature that it solves both problems at once. First, I can say immediately, I don't let the electron act on itself, I just let this act on that, hence, no self-energy! Secondly, there is not an infinite number of degrees of freedom in the field. There is no field at all; or if you insist on thinking in terms of ideas like that of a field, this field is always completely determined by the action of the particles which produce it. You shake this particle, it shakes that one, but if you want to think in a field way, the field, if it's there, would be entirely determined by the matter which generates it, and therefore, the field does not have any independent degrees of freedom and the infinities from the degrees of freedom would then be removed. As a matter of fact, when we look out anywhere and see light, we can always "see" some matter as the source of the light. We don't just see light (except recently some radio reception has been found with no apparent material source).

You see then that my general plan was to first solve the classical problem, to get rid of the infinite self-energies in the classical theory, and to hope that when I made a quantum theory of it, everything would just be fine. That was the beginning, and the idea seemed so obvious to me and so elegant that I fell deeply in love with it. And, like falling in love with a woman, it is only possible if you do not know much about her, so you cannot see her faults. The faults will become apparent later, but after the love is strong enough to hold you to her. So, I was held to this theory, in spite of all difficulties, by my youthful enthusiasm.
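To make the earlier zero-point-energy count concrete (a modern-notation aside; the explicit mode sum over a box is standard textbook material rather than something spelled out in the lecture):

\( {\displaystyle E_{0}=\sum _{\mathbf {k} }{\tfrac {1}{2}}\hbar \omega _{\mathbf {k} }\;\longrightarrow \;\infty ,} \)

since a box supports infinitely many modes of ever increasing frequency. Because only energy differences are observable, this constant can be subtracted away, which is what "changing the zero from which energy is measured" amounts to.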
Then I went to graduate school and somewhere along the line I learned what was wrong with the idea that an electron does not act on itself. When you accelerate an electron it radiates energy and you have to do extra work to account for that energy. The extra force against which this work is done is called the force of radiation resistance. The origin of this extra force was identified in those days, following Lorentz, as the action of the electron itself. The first term of this action, of the electron on itself, gave a kind of inertia (not quite relativistically satisfactory). But that inertia-like term was infinite for a point-charge. Yet the next term in the sequence gave an energy loss rate, which for a point-charge agrees exactly with the rate you get by calculating how much energy is radiated. So, the force of radiation resistance, which is absolutely necessary for the conservation of energy, would disappear if I said that a charge could not act on itself. So, I learned in the interim when I went to graduate school the glaringly obvious fault of my own theory. But, I was still in love with the original theory, and was still thinking that with it lay the solution to the difficulties of quantum electrodynamics. So, I continued to try on and off to save it somehow. I must have some action develop on a given electron when I accelerate it to account for radiation resistance. But, if I let electrons only act on other electrons the only possible source for this action is another electron in the world.

So, one day, when I was working for Professor Wheeler and could no longer solve the problem that he had given me, I thought about this again and I calculated the following. Suppose I have two charges - I shake the first charge, which I think of as a source, and this makes the second one shake, but the second one shaking produces an effect back on the source. And so, I calculated how much that effect back on the first charge was, hoping it might add up to the force of radiation resistance. It didn't come out right, of course, but I went to Professor Wheeler and told him my ideas. He said, - yes, but the answer you get for the problem with the two charges that you just mentioned will, unfortunately, depend upon the charge and the mass of the second charge and will vary inversely as the square of the distance R between the charges, while the force of radiation resistance depends on none of these things. I thought, surely, he had computed it himself, but now having become a professor, I know that one can be wise enough to see immediately what some graduate student takes several weeks to develop. He also pointed out something that also bothered me, that if we had a situation with many charges all around the original source at roughly uniform density and if we added the effect of all the surrounding charges, the inverse R squared would be compensated by the R² in the volume element and we would get a result proportional to the thickness of the layer, which would go to infinity. That is, one would have an infinite total effect back at the source. And, finally he said to me, and you forgot something else, when you accelerate the first charge, the second acts later, and then the reaction back here at the source would be still later. In other words, the action occurs at the wrong time. I suddenly realized what a stupid fellow I am, for what I had described and calculated was just ordinary reflected light, not radiation reaction. But, as I was stupid, so was Professor Wheeler that much more clever.
For he then went on to give a lecture as though he had worked this all out before and was completely prepared, but he had not, he worked it out as he went along. First, he said, let us suppose that the return action by the charges in the absorber reaches the source by advanced waves as well as by the ordinary retarded waves of reflected light; so that the law of interaction acts backward in time, as well as forward in time. I was enough of a physicist at that time not to say, "Oh, no, how could that be?" For today all physicists know from studying Einstein and Bohr, that sometimes an idea which looks completely paradoxical at first, if analyzed to completion in all detail and in experimental situations, may, in fact, not be paradoxical. So, it did not bother me any more than it bothered Professor Wheeler to use advanced waves for the back reaction - a solution of Maxwell's equations, which previously had not been physically used.

Professor Wheeler used advanced waves to get the reaction back at the right time and then he suggested this: If there were lots of electrons in the absorber, there would be an index of refraction n, so, the retarded waves coming from the source would have their wave lengths slightly modified in going through the absorber. Now, if we shall assume that the advanced waves come back from the absorber without an index - why? I don't know, let's assume they come back without an index - then, there will be a gradual shifting in phase between the return and the original signal so that we would only have to figure that the contributions act as if they come from only a finite thickness, that of the first wave zone. (More specifically, up to that depth where the phase in the medium is shifted appreciably from what it would be in vacuum, a thickness proportional to 1/(n-1).) Now, the less the number of electrons in here, the less each contributes, but the thicker will be the layer that effectively contributes, because with less electrons, the index differs less from 1. The higher the charges of these electrons, the more each contributes, but the thinner the effective layer, because the index would be higher. And when we estimated it (calculated without being careful to keep the correct numerical factor), sure enough, it came out that the action back at the source was completely independent of the properties of the charges that were in the surrounding absorber. Further, it was of just the right character to represent radiation resistance, but we were unable to see if it was just exactly the right size. He sent me home with orders to figure out exactly how much advanced and how much retarded wave we need to get the thing to come out numerically right, and after that, figure out what happens to the advanced effects that you would expect if you put a test charge here close to the source? For if all charges generate advanced, as well as retarded effects, why would that test charge not be affected by the advanced waves from the source?

I found that you get the right answer if you use half-advanced and half-retarded as the field generated by each charge. That is, one is to use the solution of Maxwell's equation which is symmetrical in time, and the reason we got no advanced effects at a point close to the source, in spite of the fact that the source was producing an advanced field, is this. Suppose the source is surrounded by a spherical absorbing wall ten light seconds away, and that the test charge is one second to the right of the source.
Then the source is as much as eleven seconds away from some parts of the wall and only nine seconds away from other parts. The source acting at time t=0 induces motions in the wall at time +10. Advanced effects from this can act on the test charge as early as eleven seconds earlier, or at t= -1. This is just at the time that the direct advanced waves from the source should reach the test charge, and it turns out the two effects are exactly equal and opposite and cancel out! At the later time +1, effects on the test charge from the source and from the walls are again equal, but this time are of the same sign and add to convert the half-retarded wave of the source to full retarded strength.

Thus, it became clear that there was the possibility that if we assume all actions are via half-advanced and half-retarded solutions of Maxwell's equations and assume that all sources are surrounded by material absorbing all the light which is emitted, then we could account for radiation resistance as a direct action of the charges of the absorber acting back by advanced waves on the source. Many months were devoted to checking all these points. I worked to show that everything is independent of the shape of the container, and so on, that the laws are exactly right, and that the advanced effects really cancel in every case. We always tried to increase the efficiency of our demonstrations, and to see with more and more clarity why it works. I won't bore you by going through the details of this. Because of our using advanced waves, we also had many apparent paradoxes, which we gradually reduced one by one, and saw that there was in fact no logical difficulty with the theory. It was perfectly satisfactory.

We also found that we could reformulate this thing in another way, and that is by a principle of least action. Since my original plan was to describe everything directly in terms of particle motions, it was my desire to represent this new theory without saying anything about fields. It turned out that we found a form for an action directly involving the motions of the charges only, which upon variation would give the equations of motion of these charges. The expression for this action A is shown in the sketch following this paragraph; here X^i_μ(α_i) is the four-vector position of the i-th particle as a function of some parameter α_i. The first term is the integral of proper time, the ordinary action of relativistic mechanics of free particles of mass m_i. (We sum in the usual way on the repeated index μ.) The second term represents the electrical interaction of the charges. It is summed over each pair of charges (the factor ½ is to count each pair once, the term i=j is omitted to avoid self-action). The interaction is a double integral over a delta function of the square of the space-time interval, I²_ij, between two points on the paths. Thus, interaction occurs only when this interval vanishes, that is, along light cones. The fact that the interaction is exactly one-half advanced and half-retarded meant that we could write such a principle of least action, whereas interaction via retarded waves alone cannot be written in such a way. So, all of classical electrodynamics was contained in this very simple form. It looked good, and therefore, it was undoubtedly true, at least to the beginner. It automatically gave half-advanced and half-retarded effects and it was without fields. By omitting the term in the sum when i=j, I omit self-interaction and no longer have any infinite self-energy.
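The displayed formula for the action did not survive this copy; a reconstruction consistent with the description just given (proper-time terms plus a double integral of a delta function of the squared interval over each pair of charges, with the factor ½ and the i = j term omitted) would read roughly:

\( {\displaystyle A=\sum _{i}m_{i}\int {\big (}{\dot {X}}_{\mu }^{i}\,{\dot {X}}_{\mu }^{i}{\big )}^{1/2}\,d\alpha _{i}\;+\;{\tfrac {1}{2}}\sum _{i\neq j}e_{i}e_{j}\iint \delta {\big (}I_{ij}^{2}{\big )}\,{\dot {X}}_{\mu }^{i}(\alpha _{i})\,{\dot {X}}_{\mu }^{j}(\alpha _{j})\,d\alpha _{i}\,d\alpha _{j},\qquad (1)} \)

with \( {\displaystyle I_{ij}^{2}={\big [}X_{\mu }^{i}(\alpha _{i})-X_{\mu }^{j}(\alpha _{j}){\big ]}{\big [}X_{\mu }^{i}(\alpha _{i})-X_{\mu }^{j}(\alpha _{j}){\big ]},} \)

where the dots denote derivatives with respect to the parameters α; the charge factors e_i e_j and the overall signs are assumptions of this reconstruction rather than something recoverable from the surviving text.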
This then was the hoped-for solution to the problem of ridding classical electrodynamics of the infinities. It turns out, of course, that you can reinstate fields if you wish to, but you have to keep track of the field produced by each particle separately. This is because to find the right field to act on a given particle, you must exclude the field that it creates itself. A single universal field to which all contribute will not do. This idea had been suggested earlier by Frenkel and so we called these Frenkel fields. This theory which allowed only particles to act on each other was equivalent to Frenkel's fields using half-advanced and half-retarded solutions.

There were several suggestions for interesting modifications of electrodynamics. We discussed lots of them, but I shall report on only one. It was to replace this delta function in the interaction by another function, say, f(I²_ij), which is not infinitely sharp. Instead of having the action occur only when the interval between the two charges is exactly zero, we would replace the delta function of I² by a narrow peaked thing. Let's say that f(Z) is large only near Z=0, with a width of order a². Interactions will now occur when T²-R² is of order a², roughly, where T is the time difference and R is the separation of the charges. This might look like it disagrees with experience, but if a is some small distance, like 10⁻¹³ cm, it says that the time delay T in the action is roughly √(R²±a²), or approximately, if R is much larger than a, T = R ± a²/2R. This means that the deviation of time T from the ideal theoretical time R of Maxwell gets smaller and smaller, the further the pieces are apart. Therefore, all theories involved in analyzing generators, motors, etc., in fact, all of the tests of electrodynamics that were available in Maxwell's time, would be adequately satisfied if a were 10⁻¹³ cm. If R is of the order of a centimeter this deviation in T is only one part in 10²⁶. So, it was possible, also, to change the theory in a simple manner and to still agree with all observations of classical electrodynamics. You have no clue of precisely what function to put in for f, but it was an interesting possibility to keep in mind when developing quantum electrodynamics.

It also occurred to us that if we did that (replace δ by f) we could not reinstate the term i=j in the sum, because this would now represent in a relativistically invariant fashion a finite action of a charge on itself. In fact, it was possible to prove that if we did do such a thing, the main effect of the self-action (for not too rapid accelerations) would be to produce a modification of the mass. In fact, there need be no mass m_i term; all the mechanical mass could be electromagnetic self-action. So, if you would like, we could also have another theory with a still simpler expression for the action A. In expression (1) only the second term is kept, the sum extended over all i and j, and some function replaces δ. Such a simple form could represent all of classical electrodynamics, which aside from gravitation is essentially all of classical physics.

Although it may sound confusing, I am describing several different alternative theories at once. The important thing to note is that at this time we had all these in mind as different possibilities. There were several possible solutions of the difficulty of classical electrodynamics, any one of which might serve as a good starting point to the solution of the difficulties of quantum electrodynamics.
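A quick worked check of that last numerical claim, using the figures quoted in the text (the expansion step itself is elementary and supplied here rather than quoted):

\( {\displaystyle T={\sqrt {R^{2}\pm a^{2}}}\approx R\pm {\frac {a^{2}}{2R}},\qquad {\frac {|T-R|}{R}}\approx {\frac {a^{2}}{2R^{2}}}={\frac {(10^{-13}\ {\text{cm}})^{2}}{2\,(1\ {\text{cm}})^{2}}}=5\times 10^{-27}\sim 10^{-26},} \)

so at laboratory separations the modified interaction is experimentally indistinguishable from Maxwell's retarded interaction.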
I would also like to emphasize that by this time I was becoming used to a physical point of view different from the more customary point of view. In the customary view, things are discussed as a function of time in very great detail. For example, you have the field at this moment, a differential equation gives you the field at the next moment and so on; a method, which I shall call the Hamilton method, the time differential method. We have, instead (in (1) say) a thing that describes the character of the path throughout all of space and time. The behavior of nature is determined by saying her whole spacetime path has a certain character. For an action like (1) the equations obtained by variation (of X^i_μ(α_i)) are no longer at all easy to get back into Hamiltonian form. If you wish to use as variables only the coordinates of particles, then you can talk about the property of the paths - but the path of one particle at a given time is affected by the path of another at a different time. If you try to describe, therefore, things differentially, telling what the present conditions of the particles are, and how these present conditions will affect the future, you see, it is impossible with particles alone, because something the particle did in the past is going to affect the future. Therefore, you need a lot of bookkeeping variables to keep track of what the particle did in the past. These are called field variables. You will, also, have to tell what the field is at this present moment, if you are to be able to see later what is going to happen. From the overall space-time view of the least action principle, the field disappears as nothing but bookkeeping variables insisted on by the Hamiltonian method.

As a by-product of this same view, I received a telephone call one day at the graduate college at Princeton from Professor (John Archibald) Wheeler, in which he said, "Feynman, I know why all electrons have the same charge and the same mass" "Why?" "Because, they are all the same electron!" And, then he explained on the telephone, "suppose that the world lines which we were ordinarily considering before in time and space - instead of only going up in time were a tremendous knot, and then, when we cut through the knot, by the plane corresponding to a fixed time, we would see many, many world lines and that would represent many electrons, except for one thing. If in one section this is an ordinary electron world line, in the section in which it reversed itself and is coming back from the future we have the wrong sign to the proper time - to the proper four velocities - and that's equivalent to changing the sign of the charge, and, therefore, that part of a path would act like a positron." "But, Professor", I said, "there aren't as many positrons as electrons." "Well, maybe they are hidden in the protons or something", he said. I did not take the idea that all the electrons were the same one from him as seriously as I took the observation that positrons could simply be represented as electrons going from the future to the past in a back section of their world lines. That, I stole!

To summarize, when I was done with this, as a physicist I had gained two things. One, I knew many different ways of formulating classical electrodynamics, with many different mathematical forms.
I got to know how to express the subject every which way. Second, I had a point of view - the overall space-time point of view - and a disrespect for the Hamiltonian method of describing physics.

I would like to interrupt here to make a remark. The fact that electrodynamics can be written in so many ways - the differential equations of Maxwell, various minimum principles with fields, minimum principles without fields, all different kinds of ways - was something I knew, but I have never understood. It always seems odd to me that the fundamental laws of physics, when discovered, can appear in so many different forms that are not apparently identical at first, but, with a little mathematical fiddling you can show the relationship. An example of that is the Schrödinger equation and the Heisenberg formulation of quantum mechanics. I don't know why this is - it remains a mystery, but it was something I learned from experience. There is always another way to say the same thing that doesn't look at all like the way you said it before. I don't know what the reason for this is. I think it is somehow a representation of the simplicity of nature. A thing like the inverse square law is just right to be represented by the solution of Poisson's equation, which, therefore, is a very different way to say the same thing that doesn't look at all like the way you said it before. I don't know what it means, that nature chooses these curious forms, but maybe that is a way of defining simplicity. Perhaps a thing is simple if you can describe it fully in several different ways without immediately knowing that you are describing the same thing.

I was now convinced that since we had solved the problem of classical electrodynamics (and completely in accordance with my program from M.I.T., only direct interaction between particles, in a way that made fields unnecessary) that everything was definitely going to be all right. I was convinced that all I had to do was make a quantum theory analogous to the classical one and everything would be solved. So, the problem is only to make a quantum theory, which has as its classical analog, this expression (1). Now, there is no unique way to make a quantum theory from classical mechanics, although all the textbooks make believe there is. What they would tell you to do was find the momentum variables and replace them by (ℏ/i)(∂/∂x), but I couldn't find a momentum variable, as there wasn't any. The character of quantum mechanics of the day was to write things in the famous Hamiltonian way - in the form of a differential equation, which described how the wave function changes from instant to instant, and in terms of an operator, H. If the classical physics could be reduced to a Hamiltonian form, everything was all right. Now, least action does not imply a Hamiltonian form if the action is a function of anything more than positions and velocities at the same moment. If the action is of the form of the integral of a function (usually called the Lagrangian) of the velocities and positions at the same time, then you can start with the Lagrangian and then create a Hamiltonian and work out the quantum mechanics, more or less uniquely. But this thing (1) involves the key variables, positions, at two different times and therefore, it was not obvious what to do to make the quantum-mechanical analogue. I tried - I would struggle in various ways.
One of them was this; if I had harmonic oscillators interacting with a delay in time, I could work out what the normal modes were and guess that the quantum theory of the normal modes was the same as for simple oscillators and kind of work my way back in terms of the original variables. I succeeded in doing that, but I hoped then to generalize to other than a harmonic oscillator, but I learned to my regret something, which many people have learned. The harmonic oscillator is too simple; very often you can work out what it should do in quantum theory without getting much of a clue as to how to generalize your results to other systems. So that didn't help me very much, but when I was struggling with this problem, I went to a beer party in the Nassau Tavern in Princeton. There was a gentleman, newly arrived from Europe (Herbert Jehle) who came and sat next to me. Europeans are much more serious than we are in America because they think that a good place to discuss intellectual matters is a beer party. So, he sat by me and asked, "what are you doing" and so on, and I said, "I'm drinking beer." Then I realized that he wanted to know what work I was doing and I told him I was struggling with this problem, and I simply turned to him and said, "listen, do you know any way of doing quantum mechanics, starting with action - where the action integral comes into the quantum mechanics?" "No", he said, "but Dirac has a paper in which the Lagrangian, at least, comes into quantum mechanics. I will show it to you tomorrow."

Next day we went to the Princeton Library, they have little rooms on the side to discuss things, and he showed me this paper. What Dirac said was the following: There is in quantum mechanics a very important quantity which carries the wave function from one time to another, besides the differential equation but equivalent to it, a kind of a kernel, which we might call K(x', x), which carries the wave function ψ(x) known at time t to the wave function ψ(x') at time t+ε. Dirac points out that this function K was analogous to the quantity in classical mechanics that you would calculate if you took the exponential of iε multiplied by the Lagrangian L(ẋ, x), imagining that these two positions x, x' corresponded to t and t+ε. In other words, Professor Jehle showed me this, I read it, he explained it to me, and I said, "what does he mean, they are analogous; what does that mean, analogous? What is the use of that?" He said, "you Americans! You always want to find a use for everything!" I said that I thought that Dirac must mean that they were equal. "No", he explained, "he doesn't mean they are equal." "Well", I said, "let's see what happens if we make them equal."

So I simply put them equal, taking the simplest example where the Lagrangian is ½Mẋ² - V(x), but soon found I had to put a constant of proportionality A in, suitably adjusted. When I substituted this exponential, with the constant A, for K in the relation between the wave functions at t and t+ε (written out in the sketch after this paragraph) and just calculated things out by Taylor series expansion, out came the Schrödinger equation. So, I turned to Professor Jehle, not really understanding, and said, "well, you see Professor Dirac meant that they were proportional." Professor Jehle's eyes were bugging out - he had taken out a little notebook and was rapidly copying it down from the blackboard, and said, "no, no, this is an important discovery. You Americans are always trying to find out how something can be used. That's a good way to discover things!"
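Written out in standard notation, the substitution described above reads as follows (a reconstruction; the lecture's own displayed equation was not preserved in this copy, and the factor of ℏ, dropped in the garbled text, is restored here):

\( {\displaystyle \psi (x',t+\varepsilon )=\int A\,\exp \!\left[{\frac {i\varepsilon }{\hbar }}\,L\!\left({\frac {x'-x}{\varepsilon }},x\right)\right]\psi (x,t)\,dx,\qquad L({\dot {x}},x)={\tfrac {1}{2}}M{\dot {x}}^{2}-V(x).} \)

Expanding both sides to first order in ε, with the constant A suitably adjusted, yields the Schrödinger equation \( {\displaystyle i\hbar \,\partial \psi /\partial t=-({\hbar ^{2}}/{2M})\,\partial ^{2}\psi /\partial x^{2}+V(x)\,\psi } \).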
So, I thought I was finding out what Dirac meant, but, as a matter of fact, had made the discovery that what Dirac thought was analogous, was, in fact, equal. I had then, at least, the connection between the Lagrangian and quantum mechanics, but still with wave functions and infinitesimal times. It must have been a day or so later when I was lying in bed thinking about these things, that I imagined what would happen if I wanted to calculate the wave function at a finite interval later. I would put one of these factors e^{iεL/ℏ} in here, and that would give me the wave function at the next moment, t+ε, and then I could substitute that back into (3) to get another factor of e^{iεL/ℏ} and give me the wave function at the next moment, t+2ε, and so on and so on. In that way I found myself thinking of a large number of integrals, one after the other in sequence. In the integrand was the product of the exponentials, which, of course, was the exponential of the sum of terms like εL. Now, L is the Lagrangian and ε is like the time interval dt, so that if you took a sum of such terms, that's exactly like an integral. That's like Riemann's formula for the integral ∫L dt; you just take the value at each point and add them together. We are to take the limit as ε → 0, of course. Therefore, the connection between the wave function of one instant and the wave function of another instant a finite time later could be obtained by an infinite number of integrals (because ε goes to zero, of course) of the exponential e^{iS/ℏ}, where S is the action expression (2). At last, I had succeeded in representing quantum mechanics directly in terms of the action S. This led later on to the idea of the amplitude for a path: that for each possible way that the particle can go from one point to another in space-time, there's an amplitude. That amplitude is e to the (i/ℏ) times the action for the path. Amplitudes from various paths superpose by addition. This then is another, a third way, of describing quantum mechanics, which looks quite different from that of Schrödinger or Heisenberg, but which is equivalent to them. Now, immediately after making a few checks on this thing, what I wanted to do, of course, was to substitute the action (1) for the other (2). The first trouble was that I could not get the thing to work with the relativistic case of spin one-half. However, although I could deal with the matter only nonrelativistically, I could deal with the light or the photon interactions perfectly well by just putting the interaction terms of (1) into any action, replacing the mass terms by the non-relativistic ∫(Mẋ²/2)dt. When the action has a delay, as it now had, and involved more than one time, I had to lose the idea of a wave function. That is, I could no longer describe the program as: given the amplitude for all positions at a certain time, to compute the amplitude at another time. However, that didn't cause very much trouble. It just meant developing a new idea. Instead of wave functions we could talk about this: that if a source of a certain kind emits a particle, and a detector is there to receive it, we can give the amplitude that the source will emit and the detector receive. We do this without specifying the exact instant that the source emits or the exact instant that any detector receives, without trying to specify the state of anything at any particular time in between, but by just finding the amplitude for the complete experiment.
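Chained together, those infinitesimal factors give the finite-time relation in its standard discretized form (again a reconstruction in modern notation, not the notation of the talk):

\psi(x_N,\,t_N) \;=\; \lim_{\varepsilon\to 0}\;\frac{1}{A^{N}}\int\!\cdots\!\int\exp\!\left[\frac{i}{\hbar}\sum_{k=0}^{N-1}\varepsilon\,L\!\left(\frac{x_{k+1}-x_k}{\varepsilon},\,x_k\right)\right]\psi(x_0,\,t_0)\;dx_0\cdots dx_{N-1},

and since the sum \sum_k \varepsilon L goes over into \int L\,dt = S, the weight attached to each discretized path is e^{iS/\hbar}.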
And, then we could discuss how that amplitude would change if you had a scattering sample in between, as you rotated and changed angles, and so on, without really having any wave functions. It was also possible to discover what the old concepts of energy and momentum would mean with this generalized action. And, so I believed that I had a quantum theory of classical electrodynamics - or rather of this new classical electrodynamics described by action (1). I made a number of checks. If I took the Frenkel field point of view, which you remember was more differential, I could convert it directly to quantum mechanics in a more conventional way. The only problem was how to specify in quantum mechanics the classical boundary conditions to use only half-advanced and half-retarded solutions. By some ingenuity in defining what that meant, I found that the quantum mechanics with Frenkel fields, plus a special boundary condition, gave me back this action (1) in the new form of quantum mechanics with a delay. So, various things indicated that there wasn't any doubt I had everything straightened out. It was also easy to guess how to modify the electrodynamics, if anybody ever wanted to modify it. I just changed the delta to an f, just as I would for the classical case. So, it was very easy, a simple thing. To describe the old retarded theory without explicit mention of fields I would have to write probabilities, not just amplitudes. I would have to square my amplitudes and that would involve double path integrals in which there are two S's and so forth. Yet, as I worked out many of these things and studied different forms and different boundary conditions, I got a kind of funny feeling that things weren't exactly right. I could not clearly identify the difficulty and in one of the short periods during which I imagined I had laid it to rest, I published a thesis and received my Ph.D. During the war, I didn't have time to work on these things very extensively, but wandered about on buses and so forth, with little pieces of paper, and struggled to work on it and discovered indeed that there was something wrong, something terribly wrong. I found that if one generalized the action from the nice Lagrangian forms (2) to these forms (1) then the quantities which I defined as energy, and so on, would be complex. The energy values of stationary states wouldn't be real and probabilities of events wouldn't add up to 100%. That is, if you took the probability that this would happen and that would happen - everything you could think of would happen - it would not add up to one. Another problem on which I struggled very hard was to represent relativistic electrons with this new quantum mechanics. I wanted to do it in a unique and different way - and not just by copying the operators of Dirac into some kind of an expression and using some kind of Dirac algebra instead of ordinary complex numbers. I was very much encouraged by the fact that in one space dimension, I did find a way of giving an amplitude to every path by limiting myself to paths which only went back and forth at the speed of light. The amplitude was simple: (iε) to a power equal to the number of velocity reversals, where I have divided the time into steps of length ε and I am allowed to reverse velocity only at such a time. This gives (as ε approaches zero) Dirac's equation in two dimensions - one dimension of space and one of time.
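This back-and-forth rule is simple enough to put into a few lines of code. The sketch below is my own illustration in Python (the lattice size, the number of steps, and the value of eps are arbitrary choices, and no attempt is made at proper normalization): it propagates the amplitudes forward step by step, attaching a factor i*eps to every velocity reversal.

import numpy as np

# Toy version of the "checkerboard" rule: paths move one lattice site left or right
# per time step (the speed of light on the lattice), and each velocity reversal
# contributes a factor i*eps to the amplitude of the path.

def checkerboard(n_steps, n_sites, eps):
    # amp[0, x]: amplitude to arrive at site x moving right
    # amp[1, x]: amplitude to arrive at site x moving left
    amp = np.zeros((2, n_sites), dtype=complex)
    amp[0, n_sites // 2] = 1.0          # start at the centre, moving right
    for _ in range(n_steps):
        new = np.zeros_like(amp)
        # arrive at x moving right: came from x-1, either straight (factor 1)
        # or with a reversal from the left-mover (factor i*eps)
        new[0, 1:] = amp[0, :-1] + 1j * eps * amp[1, :-1]
        # arrive at x moving left: came from x+1
        new[1, :-1] = amp[1, 1:] + 1j * eps * amp[0, 1:]
        amp = new
    return amp

amps = checkerboard(n_steps=200, n_sites=401, eps=0.05)
print(np.abs(amps).max())

Notice that the bookkeeping is forced to carry two amplitude arrays, one for each direction of arrival, which is exactly the two-component structure discussed next.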
Dirac's wave function has four components in four dimensions, but in this case, it has only two components and this rule for the amplitude of a path automatically generates the need for two components. Because if this is the formula for the amplitudes of paths, it will not do you any good to know the total amplitude of all paths which come into a given point in order to find the amplitude to reach the next point. This is because for the next time step, if it came in from the right, there is no new factor iε if it goes out to the right, whereas, if it came in from the left there was a new factor iε. So, to continue this same information forward to the next moment, it was not sufficient information to know the total amplitude to arrive, but you had to know the amplitude to arrive from the right and the amplitude to arrive from the left, independently. If you did, however, you could then compute both of those again independently and thus you had to carry two amplitudes to form a differential equation (first order in time). And, so I dreamed that if I were clever, I would find a formula for the amplitude of a path that was beautiful and simple for three dimensions of space and one of time, which would be equivalent to the Dirac equation, and for which the four components, matrices, and all those other mathematical funny things would come out as a simple consequence - I have never succeeded in that either. But, I did want to mention some of the unsuccessful things on which I spent almost as much effort as on the things that did work. To summarize the situation a few years after the war, I would say I had much experience with quantum electrodynamics, at least in the knowledge of many different ways of formulating it, in terms of path integrals of actions and in other forms. One of the important by-products, for example, of much experience in these simple forms, was that it was easy to see how to combine together what was in those days called the longitudinal and transverse fields, and in general, to see clearly the relativistic invariance of the theory. Because of the need to do things differentially there had been, in the standard quantum electrodynamics, a complete split of the field into two parts, one of which is called the longitudinal part and the other mediated by the photons, or transverse waves. The longitudinal part was described by a Coulomb potential acting instantaneously in the Schrödinger equation, while the transverse part had an entirely different description in terms of quantization of the transverse waves. This separation depended upon the relativistic tilt of your axes in space-time. People moving at different velocities would separate the same field into longitudinal and transverse fields in a different way. Furthermore, the entire formulation of quantum mechanics insisting, as it did, on the wave function at a given time, was hard to analyze relativistically. Somebody else in a different coordinate system would calculate the succession of events in terms of wave functions on differently cut slices of space-time, and with a different separation of longitudinal and transverse parts. The Hamiltonian theory did not look relativistically invariant, although, of course, it was. One of the great advantages of the overall point of view was that you could see the relativistic invariance right away - or as Schwinger would say - the covariance was manifest. I had the advantage, therefore, of having a manifestly covariant form for quantum electrodynamics with suggestions for modifications and so on.
I had the disadvantage that if I took it too seriously - I mean, if I took it seriously at all in this form - I got into trouble with these complex energies and the failure of adding probabilities to one and so on. I was unsuccessfully struggling with that. Then Lamb did his experiment, measuring the separation of the 2S½ and 2P½ levels of hydrogen, finding it to be about 1000 megacycles of frequency difference. Professor Bethe, with whom I was then associated at Cornell, is a man who has this characteristic: If there's a good experimental number you've got to figure it out from theory. So, he forced the quantum electrodynamics of the day to give him an answer to the separation of these two levels. He pointed out that the self-energy of an electron itself is infinite, so that the calculated energy of a bound electron should also come out infinite. But, when you calculated the separation of the two energy levels in terms of the corrected mass instead of the old mass, it would turn out, he thought, that the theory would give convergent finite answers. He made an estimate of the splitting that way and found out that it was still divergent, but he guessed that was probably due to the fact that he used a nonrelativistic theory of the matter. Assuming it would be convergent if relativistically treated, he estimated he would get about a thousand megacycles for the Lamb-shift, and thus, made the most important discovery in the history of the theory of quantum electrodynamics. He worked this out on the train from Ithaca, New York to Schenectady and telephoned me excitedly from Schenectady to tell me the result, which I don't remember fully appreciating at the time. Returning to Cornell, he gave a lecture on the subject, which I attended. He explained that it gets very confusing to figure out exactly which infinite term corresponds to what in trying to make the correction for the infinite change in mass. If there were any modifications whatever, he said, even though not physically correct (that is, not necessarily the way nature actually works), but any modification whatever at high frequencies, which would make this correction finite, then there would be no problem at all to figuring out how to keep track of everything. You just calculate the finite mass correction Δm to the electron mass m₀, substitute the numerical values of m₀+Δm for m in the results for any other problem and all these ambiguities would be resolved. If, in addition, this method were relativistically invariant, then we would be absolutely sure how to do it without destroying relativistic invariance. After the lecture, I went up to him and told him, "I can do that for you, I'll bring it in for you tomorrow." I guess I knew every way to modify quantum electrodynamics known to man, at the time. So, I went in next day, and explained what would correspond to the modification of the delta-function to f and asked him to explain to me how you calculate the self-energy of an electron, for instance, so we can figure out if it's finite. I want you to see an interesting point. I did not take the advice of Professor Jehle to find out how it was useful. I never used all that machinery which I had cooked up to solve a single relativistic problem. I hadn't even calculated the self-energy of an electron up to that moment, and was studying the difficulties with the conservation of probability, and so on, without actually doing anything, except discussing the general properties of the theory.
But now I went to Professor Bethe, who explained to me on the blackboard, as we worked together, how to calculate the self-energy of an electron. Up to that time when you did the integrals they had been logarithmically divergent. I told him how to make the relativistically invariant modifications that I thought would make everything all right. We set up the integral which then diverged at the sixth power of the frequency instead of logarithmically! So, I went back to my room and worried about this thing and went around in circles trying to figure out what was wrong because I was sure physically everything had to come out finite, I couldn't understand how it came out infinite. I became more and more interested and finally realized I had to learn how to make a calculation. So, ultimately, I taught myself how to calculate the self-energy of an electron, working my patient way through the terrible confusion of those days of negative energy states and holes and longitudinal contributions and so on. When I finally found out how to do it and did it with the modifications I wanted to suggest, it turned out that it was nicely convergent and finite, just as I had expected. Professor Bethe and I have never been able to discover what we did wrong on that blackboard two months before, but apparently we just went off somewhere and we have never been able to figure out where. It turned out that what I had proposed, if we had carried it out without making a mistake, would have been all right and would have given a finite correction. Anyway, it forced me to go back over all this and to convince myself physically that nothing can go wrong. At any rate, the correction to mass was now finite, proportional to ln(Mca/ℏ), where a is the width of that function f which was substituted for δ. If you wanted an unmodified electrodynamics, you would have to take a equal to zero, getting an infinite mass correction. But, that wasn't the point. Keeping a finite, I simply followed the program outlined by Professor Bethe and showed how to calculate all the various things, the scatterings of electrons from atoms without radiation, the shifts of levels and so forth, calculating everything in terms of the experimental mass, and noting that the results, as Bethe suggested, were not sensitive to a in this form and even had a definite limit as a → 0. The rest of my work was simply to improve the techniques then available for calculations, making diagrams to help analyze perturbation theory quicker. Most of this was first worked out by guessing - you see, I didn't have the relativistic theory of matter. For example, it seemed to me obvious that the velocities in non-relativistic formulas have to be replaced by Dirac's matrix α, or in the more relativistic forms by the operators γμ. I just took my guesses from the forms that I had worked out using path integrals for nonrelativistic matter, but relativistic light. It was easy to develop rules of what to substitute to get the relativistic case. I was very surprised to discover that it was not known at that time that every one of the formulas that had been worked out so patiently by separating longitudinal and transverse waves could be obtained from the formula for the transverse waves alone, if instead of summing over only the two perpendicular polarization directions you would sum over all four possible directions of polarization. It was so obvious from the action (1) that I thought it was general knowledge and would do it all the time.
I would get into arguments with people, because I didn't realize they didn't know that; but, it turned out that all their patient work with the longitudinal waves was always equivalent to just extending the sum on the two transverse directions of polarization over all four directions. This was one of the amusing advantages of the method. In addition, I included diagrams for the various terms of the perturbation series, improved notations to be used, worked out easy ways to evaluate integrals which occurred in these problems, and so on, and made a kind of handbook on how to do quantum electrodynamics. But one step of importance that was physically new was involved with the negative energy sea of Dirac, which caused me so much logical difficulty. I got so confused that I remembered Wheeler's old idea about the positron being, maybe, the electron going backward in time. Therefore, in the time dependent perturbation theory that was usual for getting self-energy, I simply supposed that for a while we could go backward in time, and looked at what terms I got by running the time variables backward. They were the same as the terms that other people got when they did the problem a more complicated way, using holes in the sea, except, possibly, for some signs. These I at first determined empirically by inventing and trying some rules. I have tried to explain that all the improvements of relativistic theory were at first more or less straightforward, semi-empirical shenanigans. Each time I would discover something, however, I would go back and I would check it so many ways, compare it to every problem that had been done previously in electrodynamics (and later, in weak coupling meson theory) to see if it would always agree, and so on, until I was absolutely convinced of the truth of the various rules and regulations which I concocted to simplify all the work. During this time, people had been developing meson theory, a subject I had not studied in any detail. I became interested in the possible application of my methods to perturbation calculations in meson theory. But, what was meson theory? All I knew was that meson theory was something analogous to electrodynamics, except that particles corresponding to the photon had a mass. It was easy to guess that the δ-function in (1), which was a solution of the d'Alembertian equals zero, was to be changed to the corresponding solution of the d'Alembertian equals m². Next, there were different kinds of mesons - the ones in closest analogy to photons, coupled via γμ, are called vector mesons - there were also scalar mesons. Well, maybe that corresponds to putting unity in place of the γμ; I would hear them speak of "pseudo vector coupling" and I would guess what that probably was. I didn't have the knowledge to understand the way these were defined in the conventional papers because they were expressed at that time in terms of creation and annihilation operators, and so on, which I had not successfully learned. I remember that when someone had started to teach me about creation and annihilation operators, that this operator creates an electron, I said, "how do you create an electron? It disagrees with the conservation of charge", and in that way, I blocked my mind from learning a very practical scheme of calculation. Therefore, I had to find as many opportunities as possible to test whether I guessed right as to what the various theories were.
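In momentum-space language (my paraphrase, not wording from the lecture), that one-line guess is just the replacement of the massless kernel by a massive one,

\frac{1}{k^{2}+i\epsilon} \;\longrightarrow\; \frac{1}{k^{2}-m^{2}+i\epsilon},

that is, the Green's function of \Box D = -\delta^{4}(x) is traded for that of (\Box + m^{2})D = -\delta^{4}(x), with m the meson mass; everything else in the rules is carried over unchanged.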
One day a dispute arose at a Physical Society meeting as to the correctness of a calculation by Slotnick of the interaction of an electron with a neutron, using pseudo scalar theory with pseudo vector coupling and also pseudo scalar theory with pseudo scalar coupling. He had found that the answers were not the same; in fact, by one theory, the result was divergent, although convergent with the other. Some people believed that the two theories must give the same answer for the problem. This was a welcome opportunity to test my guesses as to whether I really did understand what these two couplings were. So, I went home, and during the evening I worked out the electron neutron scattering for the pseudo scalar and pseudo vector coupling, saw they were not equal and subtracted them, and worked out the difference in detail. The next day at the meeting, I saw Slotnick and said, "Slotnick, I worked it out last night, I wanted to see if I got the same answers you do. I got a different answer for each coupling - but, I would like to check in detail with you because I want to make sure of my methods." And, he said, "what do you mean you worked it out last night, it took me six months!" And, when we compared the answers he looked at mine and he asked, "what is that Q in there, that variable Q?" (I had expressions like (tan⁻¹Q)/Q etc.). I said, "that's the momentum transferred by the electron, the electron deflected by different angles." "Oh", he said, "no, I only have the limiting value as Q approaches zero; the forward scattering." Well, it was easy enough to just substitute Q equals zero in my form and I then got the same answers as he did. But, it took him six months to do the case of zero momentum transfer, whereas, during one evening I had done the finite and arbitrary momentum transfer. That was a thrilling moment for me, like receiving the Nobel Prize, because that convinced me, at last, I did have some kind of method and technique and understood how to do something that other people did not know how to do. That was my moment of triumph in which I realized I really had succeeded in working out something worthwhile. At this stage, I was urged to publish this because everybody said it looks like an easy way to make calculations, and wanted to know how to do it. I had to publish it missing two things: one was proof of every statement in a mathematically conventional sense. Often, even in a physicist's sense, I did not have a demonstration of how to get all of these rules and equations from conventional electrodynamics. But, I did know from experience, from fooling around, that everything was, in fact, equivalent to the regular electrodynamics and had partial proofs of many pieces, although I never really sat down, like Euclid did for the geometers of Greece, and made sure that you could get it all from a single simple set of axioms. As a result, the work was criticized, I don't know whether favorably or unfavorably, and the "method" was called the "intuitive method". For those who do not realize it, however, I should like to emphasize that there is a lot of work involved in using this "intuitive method" successfully. Because no simple clear proof of the formula or idea presents itself, it is necessary to do an unusually great amount of checking and rechecking for consistency and correctness in terms of what is known, by comparing to other analogous examples, limiting cases, etc.
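As a tiny example of such a limiting-case check, the comparison with Slotnick's forward-scattering numbers rests on nothing more than the arithmetic

\lim_{Q\to 0}\frac{\tan^{-1}Q}{Q} \;=\; \lim_{Q\to 0}\frac{Q-\tfrac{1}{3}Q^{3}+\cdots}{Q} \;=\; 1,

so that setting Q = 0 in the general expressions collapses them to the zero-momentum-transfer values computed the long way.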
In the face of the lack of direct mathematical demonstration, one must be careful and thorough to make sure of the point, and one should make a perpetual attempt to demonstrate as much of the formula as possible. Nevertheless, a very great deal more truth can become known than can be proven. It must be clearly understood that in all this work, I was representing the conventional electrodynamics with retarded interaction, and not my half-advanced and half-retarded theory corresponding to (1). I merely use (1) to guess at forms. And, one of the forms I guessed at corresponded to changing δ to a function f of width a², so that I could calculate finite results for all of the problems. This brings me to the second thing that was missing when I published the paper, an unresolved difficulty. With δ replaced by f the calculations would give results which were not "unitary", that is, for which the sum of the probabilities of all alternatives was not unity. The deviation from unity was very small, in practice, if a was very small. In the limit that I took a very tiny, it might not make any difference. And, so the process of the renormalization could be made, you could calculate everything in terms of the experimental mass and then take the limit, and the apparent difficulty that unitarity is violated temporarily seems to disappear. I was unable to demonstrate that, as a matter of fact, it does. It is lucky that I did not wait to straighten out that point, for as far as I know, nobody has yet been able to resolve this question. Experience with meson theories with stronger couplings and with strongly coupled vector photons, although not proving anything, convinces me that if the coupling were stronger, or if you went to a higher order (137th order of perturbation theory for electrodynamics), this difficulty would remain in the limit and there would be real trouble. That is, I believe there is really no satisfactory quantum electrodynamics, but I'm not sure. And, I believe that one of the reasons for the slowness of present-day progress in understanding the strong interactions is that there isn't any relativistic theoretical model from which you can really calculate everything. Although it is usually said that the difficulty lies in the fact that strong interactions are too hard to calculate, I believe it is really because strong interactions in field theory have no solution, have no sense - they are either infinite or, if you try to modify them, the modification destroys the unitarity. I don't think we have a completely satisfactory relativistic quantum-mechanical model, even one that doesn't agree with nature, but, at least, agrees with the logic that the sum of the probabilities of all alternatives has to be 100%. Therefore, I think that the renormalization theory is simply a way to sweep the difficulties of the divergences of electrodynamics under the rug. I am, of course, not sure of that. This completes the story of the development of the space-time view of quantum electrodynamics. I wonder if anything can be learned from it. I doubt it. It is most striking that most of the ideas developed in the course of this research were not ultimately used in the final result. For example, the half-advanced and half-retarded potential was not finally used, the action expression (1) was not used, the idea that charges do not act on themselves was abandoned.
The path-integral formulation of quantum mechanics was useful for guessing at final expressions and at formulating the general theory of electrodynamics in new ways - although, strictly, it was not absolutely necessary. The same goes for the idea of the positron being a backward moving electron; it was very convenient, but not strictly necessary for the theory, because it is exactly equivalent to the negative energy sea point of view. We are struck by the very large number of different physical viewpoints and widely different mathematical formulations that are all equivalent to one another. The method used here, of reasoning in physical terms, therefore, appears to be extremely inefficient. On looking back over the work, I can only feel a kind of regret for the enormous amount of physical reasoning and mathematical re-expression which ends by merely re-expressing what was previously known, although in a form which is much more efficient for the calculation of specific problems. Would it not have been much easier to simply work entirely in the mathematical framework to elaborate a more efficient expression? This would certainly seem to be the case, but it must be remarked that although the problem actually solved was only such a reformulation, the problem originally tackled was the (possibly still unsolved) problem of avoidance of the infinities of the usual theory. Therefore, a new theory was sought, not just a modification of the old. Although the quest was unsuccessful, we should look at the question of the value of physical ideas in developing a new theory. Therefore, I think equation guessing might be the best method to proceed to obtain the laws for the part of physics which is presently unknown. Yet, when I was much younger, I tried this equation guessing and I have seen many students try this, but it is very easy to go off in wildly incorrect and impossible directions. I think the problem is not to find the best or most efficient method to proceed to a discovery, but to find any method at all. Physical reasoning does help some people to generate suggestions as to how the unknown may be related to the known. Theories of the known, which are described by different physical ideas, may be equivalent in all their predictions and are hence scientifically indistinguishable. However, they are not psychologically identical when trying to move from that base into the unknown. For different views suggest different kinds of modifications which might be made and hence are not equivalent in the hypotheses one generates from them in one's attempt to understand what is not yet understood. I, therefore, think that a good theoretical physicist today might find it useful to have a wide range of physical viewpoints and mathematical expressions of the same theory (for example, of quantum electrodynamics) available to him. This may be asking too much of one man. Then new students as a class should have this. If every individual student follows the same current fashion in expressing and thinking about electrodynamics or field theory, then the variety of hypotheses being generated to understand strong interactions, say, is limited. Perhaps rightly so, for possibly the chance is high that the truth lies in the fashionable direction. But, on the off-chance that it is in another direction - a direction obvious from an unfashionable view of field theory - who will find it?
Only someone who has sacrificed himself by teaching himself quantum electrodynamics from a peculiar and unusual point of view; one that he may have to invent for himself. I say sacrificed himself because he most likely will get nothing from it, because the truth may lie in another direction, perhaps even the fashionable one. But, if my own experience is any guide, the sacrifice is really not great, because if the peculiar viewpoint taken is truly experimentally equivalent to the usual in the realm of the known, there is always a range of applications and problems in this realm for which the special viewpoint gives one a special power and clarity of thought, which is valuable in itself. Furthermore, in the search for new laws, you always have the psychological excitement of feeling that possibly nobody has yet thought of the crazy possibility you are looking at right now. So what happened to the old theory that I fell in love with as a youth? Well, I would say it's become an old lady, that has very little attractive left in her and the young today will not have their hearts pound anymore when they look at her. But, we can say the best we can for any old woman, that she has been a very good mother and she has given birth to some very good children. And, I thank the Swedish Academy of Sciences for complimenting one of them. Thank you.
Saturday, June 12, 2021

New Website

I have a new personal website, and new blog posts will appear there. Please visit and let me know if you have any feedback about its format or readability. Thanks!

Tuesday, June 08, 2021

RQM and Molecular Composition

According to the last post, the constitution of complex natural systems should be understood using a theory of composite causal processes. Composite causal processes are formed from a pattern of discrete causal interactions among a group of smaller sub-processes. When the latter sustains a higher rate of in-group versus out-group interactions, they form a composite. While this account has intuitive appeal in the case of macroscopic systems, what about more basic building blocks of nature? Can the same approach work in the microscopic realm? In this post, I will make the case that it does, focusing on molecules. A key to reaching this conclusion will be the use of the conceptual resources of relational quantum mechanics (RQM).

Background: The Problem of Molecular Structure

In approaching the question of molecular composition, we need to reckon with a long-standing problem regarding how the structure of molecules—the spatial organization of component atoms we are all familiar with from chemistry—relates to quantum theory.[1] Modern chemistry uses QM models to successfully calculate the value of molecular properties: one starts by solving for the molecular wave function and associated energies using the time-independent Schrödinger equation Ĥψ = Eψ.[2] But there are several issues in connecting the quantum formalism to molecular structure. First and most simply, the quantum description of a multiple particle system does not "reside" in space at all. The wave function assigns (complex) numbers to points in a multi-dimensional configuration space (3N dimensions where N is the number of particles in the system). How do we get from this to a spatially organized molecule? In addition to this puzzle, some of the methods used to estimate ψ in practice raise additional issues. Something to keep in mind in what follows is that multi-particle atomic and molecular wave equations are generally computationally intractable. So, simplifying assumptions of some sort will always be needed. One important strategy normally used is to assume that the nuclei are stationary in space, and then proceed to estimate the electronic wave function.[3] Where do we get the assumption for the particular configuration for the nuclei in the case of a molecule? This is typically informed by experimental evidence and/or candidates can be evaluated iteratively, seeking the lowest equilibrium energy configuration. I'll discuss the implications of this assumption shortly. Next, there are different techniques used to estimate the electronic wave function. For multi-electron atoms, one adds additional electrons using hydrogen-like wave functions (called orbitals) of increasing energy. Chemistry textbooks offer visualizations of these orbitals for various atoms and we can form some intuitions for how they overlap to form bonded molecules (but strictly speaking remember the wave functions are not in 3D space). One approach to molecular wave functions uses hybrid orbitals based on these overlaps in its calculations.
Another approach skips this process and just proceeds by incrementally adding the requisite electrons to orbitals calculated for the whole molecule at once.[4] In this method, the notion of localized atoms linked by bonds is much more elusive, but this intuitive departure interestingly has no impact on the effectiveness of the calculation method (this method is frequently more efficient). Once we have molecular wave functions, we have an estimate of energies and can derive other properties of interest. We can also use the wave function to calculate the electron density distribution for the system (usually designated by ρ): this gives the number of electrons one would expect to find at various spatial locations upon measurement. This is the counterpart of the process we use to probabilistically predict the outcome of a measurement for any quantum system by multiplying the wave function ψ by its complex conjugate ψ* (the Born rule). Interestingly, another popular technique quantum chemists (and condensed matter physicists) use to estimate electronic properties uses ρ instead of ψ as a starting point (called Density Functional Theory).[5] Notably, the electron density seems to offer a more promising way to depict molecular structure in our familiar space, letting us visualize molecular shape, and pictures of these density distributions are also featured in textbooks. Theorists have also developed sophisticated ways to correlate features of ρ with chemical concepts, including bonding relationships.[6] However, here we still need to be careful in our interpretation: while ρ is a function that assigns numbers to points in our familiar 3D space, it should not be taken to represent an object simply located in space. I'll have more to say about interpreting ρ below. Still, this might all sound pretty good: we understand that the ball and stick molecules of our school days don't actually exist, but we have ways to approximate the classical picture using the correct (quantum) physics. But this would be too quick—in particular, remember that in performing our physical calculations we put the most important ingredient of a molecule's spatial structure in by hand! As mentioned above, the fixed nuclei spatial configuration was an assumption, not a derivation. If one tries to calculate wave functions for molecules from scratch with the appropriate number of nuclei and electrons, one does not recover the particular asymmetries that distinguish most polyatomic molecules and that are crucial for understanding their chemical behavior. This problem is often brought into focus by highlighting the many examples of molecules with the same atomic constituents (isomers) that differ crucially in their geometric structure (some even have the same bonding structure but different geometry). Molecular wave functions would generally not distinguish these from each other unless the configuration is simply put in as an assumption.

Getting from QM Models to Molecular Structure

So how does spatial molecular structure arise from a purely quantum world? It seems that two additional ingredients are needed. The first is to incorporate the role of intra- and extra-molecular interactions. The second is to go beyond the quantum formalism and incorporate an interpretation of quantum mechanics. With regard to the first step, note that the discussion thus far focused on quantum modeling of isolated molecules in equilibrium.
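As a toy illustration of the modeling workflow sketched above (and only that: the one-dimensional grid, the harmonic potential, and every numerical choice below are hypothetical stand-ins, not real quantum chemistry), one can solve Ĥψ = Eψ by finite differences and then form the Born-rule density ρ = |ψ|²:

import numpy as np

# Solve the 1-D time-independent Schrodinger equation H psi = E psi on a grid,
# then compute the density rho = |psi|^2. Toy units: hbar = m = 1.
hbar, m = 1.0, 1.0
n = 400
x = np.linspace(-10.0, 10.0, n)
dx = x[1] - x[0]

V = 0.5 * x**2                                   # toy potential (harmonic well)

# Kinetic energy via the three-point second-derivative stencil
lap = (np.diag(np.full(n, -2.0)) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)) / dx**2
H = -(hbar**2 / (2.0 * m)) * lap + np.diag(V)

E, psi = np.linalg.eigh(H)                       # eigenvalues in ascending order
psi0 = psi[:, 0] / np.sqrt(dx)                   # normalize: sum(|psi0|^2) * dx = 1
rho = np.abs(psi0)**2                            # Born-rule density

print("ground-state energy:", E[0])              # close to 0.5 for this potential
print("density integrates to:", rho.sum() * dx)  # close to 1.0

Real molecular calculations differ in almost every practical respect (3N dimensions, clamped nuclei, basis sets, electron-electron repulsion), but the logical sequence, solving for ψ and E and then deriving ρ and other properties, is the one described above.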
This focus on isolated, equilibrium molecules is an idealization: in the actual world, molecules are constantly interacting with other systems in their environment, as well as always being subject to ongoing internal dynamics. Recognizing this, but staying within orthodox QM, there is research indicating that applications of decoherence theory can go some way to accounting for the emergence of molecular shape. Most of this work explores models featuring interactions between a molecule and an assumed environment. Recently, there has been some innovative research extending decoherence analysis to include consideration of the internal environment of the molecule (interaction between the electrons and the nuclei -- see links in the footnote).[7] More work needs to be done, but there is definitely some prospect that the study of interactions within the QM-decoherence framework will shed light on how molecular structure comes about. However, we can say already that decoherence will not solve the problem by itself.[8] It can go some way toward accounting for the suppression of interference and the emergence of classical-like states ("preferred pointer states"), but multiple possible configurations will remain. These, of course, also continue to be defined in the high-dimensional configuration space context of QM. To fully account for the actual existence of particular observed structures in 3D space requires grappling with the question of interpreting QM. There is a 100-year-old debate centered on the problem of how definite values of a system's properties are realized upon measurement when the formalism of QM would indicate the existence of a superposition of multiple possibilities (aka the "measurement problem"). Alexander Franklin & Vanessa Seifert have a new paper (preprint) that does an excellent job arguing that the problem of molecular structure is an instance of the measurement problem. It includes a brief look at how three common interpretations of QM (the Everett interpretation, Bohmian mechanics, and the spontaneous collapse approach) would address the issue. The authors do not conclude in this paper that the consideration of molecular structure has any bearing on deciding between rival QM interpretations. In contrast, I think the best interpretation is RQM, in part because of the way it accounts for molecular structure: it does so in a way that also allows for these quantum systems to fit into an independently attractive general theory of how natural systems are composed (see the last post).

How RQM Explains Spatial Structure

To discuss how to approach the problem using RQM, let's first return to the interpretation of the electron density distribution (ρ). As mentioned above, chemistry textbooks include pictures of ρ, and, because it is a function assigning (real) numbers to points in 3D space, there is a temptation to view ρ as depicting the molecule as a spatial object. The ability to construct an image of ρ for actual molecules using X-ray crystallography may encourage this as well. But viewing ρ as a static extended object in space is clearly inconsistent with its usual statistical meaning in a QM context. As an alternative intuition, textbooks will point out that if you imagine a repeated series of position measurements on the molecular electrons, then one can think of ρ as describing a time-extended pattern of these localizing "hits". But this doesn't give us a reason to think molecules have spatial structure in the absence of our interventions.
For this, we would want an interpretation that sees spatial localization as resulting from naturally occurring interactions involving a molecule's internal and external environment (like those explored in decoherence models). We want to envision measurement-like interactions occurring whenever systems interact, without assuming human agents or macroscopic measuring devices need to be involved. This is the picture envisioned by RQM.[9] It is a "democratic" interpretation, where the same rules apply universally. In particular, all interactions between physical systems are "measurement-like" for those systems directly involved. Assuming these interactions are fairly elastic (not disruptive) and relatively transitory, a molecule would naturally incur a pattern of localizing hits over time. These form its shape in 3D space. It would be nice if we could take ρ, as usually estimated, to represent this shape, but this is technically problematic. Per RQM, the quantum formalism cannot be taken as offering an objective ("view from nowhere") representation of a system. Both wave functions and interaction events are perspectival. So, strictly speaking, we cannot use ρ (derived from a particular ψ) to represent a pattern of hits resulting from interactions involving multiple partners. However, given a high level of stability in molecular properties across different contexts, I believe this view of ρ can still offer a useful approximation of what is happening. It gives a sense of how, given RQM, a molecule acquires a structure in 3D space as a result of a natural pattern of internal and environmental interactions.

Putting it All Together

What this conclusion also allows us to do is fit microscopic quantum systems into the broader framework discussed in the prior post, where patterns of discrete causal interactions are the raw material of composition. Like complex macroscopic systems, atoms and molecules are individuated by these patterns, and RQM offers a bridge from this causal account to our physical representations. Our usual QM models of atoms and molecules describe entangled composite systems, with details determined by the energy profiles of the constituents. Such models of isolated systems can be complemented by decoherence analyses involving additional systems in a theorized environment. RQM tells us that these models represent the systems from an external perspective, which co-exists side-by-side with another picture: the internal perspective. This is one that infers the occurrence of repeated measurement-like interactions among the constituents, a pattern that is also influenced in part by periodic measurement-like interactions with other systems in its neighborhood. The theory of composite causal processes connects with this latter perspective. The composition of atoms and molecules, like that of macroscopic systems, is based on a sustained pattern of causal interactions among sub-systems, occurring in a larger environmental context. Stepping back, the causal process account presented in these last three posts certainly leaves a number of traditional ontological questions open. In part, this is because my starting point comes from the philosophy of scientific explanation. I believe the main virtue of this theory of a causal world-wide-web is that it can provide a unified underpinning for explanations across a wide range of disciplines, despite huge variation in research approaches and representational formats.
Scientific understanding is based on our grasp of these explanations, and uncovering a consistent causal framework that helps enable this achievement is a good way to approach ontology.

Bacciagaluppi, G. (2020). The Role of Decoherence in Quantum Mechanics. In E.N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Fall 2020 Edition).

Esser, S. (2019). The Quantum Theory of Atoms in Molecules and the Interactive Conception of Chemical Bonding. Philosophy of Science, 86(5), 1307-1317.

Franklin, A., & Seifert, V.A. (forthcoming). The Problem of Molecular Structure Just Is the Measurement Problem. The British Journal for the Philosophy of Science.

Mátyus, E. (2019). Pre-Born-Oppenheimer Molecular Structure Theory. Molecular Physics, 117(5), 590-609.

Weisberg, M., Needham, P., & Hendry, R. (2019). Philosophy of Chemistry. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Spring 2019 Edition).

[1] For background, see sections 4 and 6 of the Stanford Encyclopedia article "Philosophy of Chemistry". Also, see the nice presentation of the problem of molecular structure in Franklin & Seifert (forthcoming) (preprint); this paper is discussed later in this post. For a perspective from a theoretical quantum chemist, see the recent paper from Edit Mátyus, which also features a good discussion of the background: Mátyus (2019) (preprint).

[2] Here ψ is the wave function, E is the energy, and Ĥ is the Hamiltonian operator appropriate for the system. For example, the Hamiltonian for an atom will contain a kinetic energy term and a potential energy term that is based on the electrostatic attraction between the electrons and the nucleus (along with repulsion between electrons).

[3] This assumption is justified by the vast difference in velocity between speedy electrons and the slower nuclei (an adiabatic approximation). For molecules, this is typically referred to as the "clamped nuclei" or Born-Oppenheimer approximation.

[4] These methods are known as the valence bond (VB) and molecular orbital (MO) techniques.

[5] The rationale behind DFT is that it can be demonstrated that for molecules the ground state energy and other properties can be derived directly from ρ (Hohenberg-Kohn theorems). This kind of equivalence between ψ and its associated density is clearly not generally true for quantum systems, but in this case the existence of a minimum energy solution allows for the result to be established.

[6] Of particular note here is the Quantum Theory of Atoms in Molecules (QTAIM) research program, initiated by R.F.W. Bader. QTAIM finds links to bonding and other chemical features via a detailed topological analysis of ρ. I discuss this in a 2019 paper (preprint).

[7] For decoherence studies involving the external environment, see the references cited in section 3.2 of Mátyus (2019) (preprint). Two recent ArXiv papers from Mátyus & Cassam-Chennai explore the contribution of internal decoherence (see here and here).

[8] The present discussion is a specific instance of a more general point that now seems widely accepted in discussions of the QM interpretations: decoherence helps explain why quantum interference effects are suppressed when systems interact with their environments, but it does not solve the quantum measurement problem (which seeks to understand why definite outcomes are observed upon measurement). See the excellent SEP article by Bacciagaluppi.

[9] For more, see my earlier post, which lists a number of good RQM references.
Monday, May 31, 2021

Composing Natural Systems

An interesting feature of Relational Quantum Mechanics (RQM) is its implication that discrete measurement-like interaction events are going on between natural systems (unobserved by us) all the time. It turns out that this offers a way to incorporate quantum phenomena into an attractive account of how smaller natural systems causally compose larger ones. In this post I will discuss the general approach, including a brief discussion of its implications for the ideas of reduction and emergence. In a follow-up post, I will discuss the quantum case in more detail with a focus on molecules.

Composite Causal Processes

The ontological framework I'm using (discussed in the last section of the prior post) is a modified version of Wesley Salmon's causal process account (Salmon, 1984). The basic entities are called causal processes, and these comprise a network characterized by two dimensions of causation, called propagation and production. Propagation refers to the way an isolated causal process bears dispositions or propensities toward potential interactions with other processes--aka its disposition profile. Production refers to how these profiles are altered in causal interactions with each other (this is the mutual manifestation of the relevant dispositions). The entities and properties described by science correspond to features of this causal web. For example, an electron corresponds to a causal process, and its properties describe its dispositions to produce change in interactions with other systems. Given this picture, we can go on to form an account of how composite causal processes are formed. What is exciting about the resulting view is that it can provide a framework for systems spanning the microscopic-macroscopic divide. For background, I note that neither Salmon nor others who have explored causal process views provide a detailed account of composition. Recall that Salmon's intent was to give a causal theory in service of underpinning scientific explanations. In this context, he did outline a pertinent distinction between etiological explanations and constitutive explanations. Etiological explanations trace the relevant preceding processes and interactions leading up to a phenomenon. A constitutive explanation, on the other hand, is one that cites the interactions and processes that compose the phenomenon:

A constitutive explanation is thoroughly causal, but it does not explain particular facts or general regularities in terms of causal antecedents. The explanation shows, instead, that the fact-to-be-explained is constituted by underlying causal mechanisms. (Salmon, 1984, 270)

However, while Salmon sketches how one would divide a causal network into etiological and constitutive elements, he doesn't provide a recipe for marking off the boundaries that define which processes/interactions are "internal" to what is to be explained by the constitutive explanation (see Salmon 1984, p. 275). Going beyond Salmon, and drawing on the work of others, we can offer an account of composition for causal processes. The key idea is to propose that a coherent structure at a higher scale arises from patterns of repeated interactions at a lower scale. We should pick out composite causal processes and their interactions by attending to such patterns at the lower scale.
In Herbert Simon’s discussion of complex systems, he notes that complexity often “takes the form of hierarchy (Simon, 1962, 468)” and notes the role interactions play in this context: In hierarchic systems we can distinguish between interactions among subsystems, on the one hand, and the interactions within subsystems—that is, among the parts of those subsystems—on the other. (Simon, 1996, p.197, emphasis original) The suggestion to take from this is that differential interaction rates give rise to a hierarchy of causal processes. When a group of processes interacts more with each other than with “outsiders” then it can form a composite. For example, a social group like a family or a business can be marked off from others (at a first approximation) by the differential intensity with which its members interact within vs. outside the group. As part of his discussion of analyzing complex systems, Bill Wimsatt also explores the idea of decomposition based on interactions, i.e., breaking down a system into subsystems based on the relative strength of intra vs extra-system interactions. (Wimsatt, 2007, 184-6).  And while he describes how different theoretical concerns lead us to utilize a variety of analytical strategies, Wimsatt makes it clear that patterns of causal connections are the ultimate basis for understanding complex systems: Ontologically, one could take the primary working matter of the world to be causal relationships, which are connected to one another in a variety of ways—and together make up patterns of causal networks…Under some conditions, these networks are organized into larger patterns that comprise levels of organization (Wimsatt, 2007, 200, emphasis original).[1] Wimsatt explains that levels of organization are “compositional levels”, characterized by hierarchical part-whole relations (201). This notion of composition includes not just the idea of parts, but of parts engaged in certain patterns of causal interactions, consistent with the approach to composite causal processes suggested above. To summarize: a composite causal process consists of two or more sub-processes (the constituting group) that interact with a greater frequency than each does with other processes.  Just like any causal process, a composite process carries its own disposition profile: here the pattern of interacting sub-processes accounts for how composite processes will themselves interact (what this means for the concepts of reduction and emergence will be discussed below). Consider social groups again, perhaps taking the example of smaller, pre-industrial societies. Each may have its own distinctive dispositions to mutually interact with other, similarly sized groups (e.g., to share a resource, trade, or to engage in raids or battle). These would be composed from the dispositions of their constituent members as they are shaped in the course of structured patterns of in-group interaction. We can also envision here that the higher scale environmental interactions also impact the evolution of the composite entity, but its stability is due to maintaining its characteristic higher-frequency internal processes. Let me add a couple of further comments about composite processes.  First, as already indicated, a group of constituting sub-processes may be themselves composite, allowing for a nested hierarchy. Second, the impact of larger scale external interactions can vary.  Some may have negligible impact. 
Other interactions (especially if regular in nature) can contribute to shaping the ongoing nature of the composite. At the other extreme, there will be some external interactions that could disrupt or destroy it. The persistence of a composite would seem to require a certain robustness in the internal interaction pattern of its components. Achieving stability (and the associated ability to propagate a characteristic higher scale disposition profile) may require the differential between intra-process and extra-process interactions to be particularly high, or else there may need to be a particular pattern to the repeated interactions. There will clearly be vague or boundary cases as well. Why go to all this trouble of fairly abstract theorizing about a web of causal processes? Because this account fleshes out the notions that underwrite the causal explanations scientists formulate in a variety of domains. In the physical sciences, the familiar hierarchy of entities, including atoms, molecules, and condensed matter, all correspond to composite causal processes. Of course, in physical models, what marks out a composite system might be described in a number of ways (for example, in terms of the relative strength of forces or energy-minimizing equilibrium configurations). But I argue this is consistent with the key being the relative frequency of recurring discrete interactions in-system vs. out-system. (This will be explored further in the companion post.) In biology, the complexity of systems may sometimes defy the easy identification of the boundaries of composites. Also, a researcher's explanatory aims will sometimes warrant taking different perspectives on phenomena. In these cases, scientists will describe theoretical entities that do not necessarily follow a simple quantitative accounting of intra-process vs. extra-process interactions. On the one hand, the cell provides a pretty clear paradigm case meeting the definition of a composite process. On the other hand, many organisms and groups of organisms present difficult cases that have given rise to a rich debate in the literature regarding biological individuality. Still, a causal account of constitution is a useful starting point, as noted here by Elliott Sober: The individuality of organisms involves a distinction between self and other—between inside and outside. This distinction is defined by characteristic causal relations. Parts of the same organism influence each other in ways that differ from the way that outside entities influence the organism's parts. (Sober, 1993, 150) The way parts "influence each other", of course, might involve considerations beyond a mere quantitative view of interactions, and this marks an entry point where theoretical concerns can create distance from the basic conception of the composite causal process. In a biological context, sub-processes and interactions related to survival and reproduction may, for example, receive disproportionate attention in creating boundaries around composite entities. Notably, Roberta Millstein has proposed a definition of a biological population based on just this kind of causal interaction-based concept (Millstein 2009). It is also worth mentioning that constitutive explanations in science will rarely attempt to explain the entire entity. This would mean accounting for all of its causal properties (aka its entire dispositional profile) in terms of its interacting sub-processes.
It is more common for a scientific explanation to target one property associated with a behavior of interest (corresponding to one of many features of a disposition profile). Reduction and Emergence I want to make a few remarks about how this approach to composites sheds light on the topics of ontological reduction and emergence. In a nutshell, the causal composition model discussed here gives a straightforward account of these notions that sidesteps some common confusions and controversies, such as the "causal exclusion problem." When considering the relationship between phenomena characterized at larger scales and smaller ones, the key observation is that a larger entity's properties do not depend only on the properties of the smaller composing entities. They also depend on their pattern of interaction. This is in contrast to the usual static framing that posits a metaphysical relationship (whether expressed in terms of composition or "realization") between higher-level properties and lower-level properties at some instant of time. This picture is conceptually confused (if taken seriously as opposed to being a deliberate simplifying idealization): there is no reason to think such synchronic relationships characterize our world. Recall that, in the present account, a property describes a regular feature of the disposition profile of a causal process. A composite causal process is made up of a pattern of interacting sub-processes. The disposition profiles of the sub-processes are changing during these interactions: they are not static. The dispositions of the composite depend on this matrix of changing sub-processes. Note that both the forming of a higher-scale disposition and its manifestation in a higher-scale interaction take more time than their equivalents at the smaller scale. No composite entity or property exists at an instant: this is a fiction concocted by us to facilitate our understanding. Unfortunately, contemporary metaphysicians have taken this notion seriously. It is perhaps easiest to see the problem in the case of a biological system: nothing is literally "alive" at an instant of time. Living things are sustained by temporally extended processes. Less intuitively, the same is true of inanimate objects. Emergence and reduction are clearer, unmysterious notions when based on this dynamic conception of the composition relationship. Properties of larger things "emerge" from the interacting group of smaller things. The "reduction base" includes the interaction pattern of the components and their (changing) properties. The exclusion problem says that since higher-level properties are realized by lower-level properties at any arbitrary instant of time, they cannot have causal force of their own (on pain of overdetermination). We can see why this is a pseudo-problem once a better understanding of composition is in place. Causal production occurs at multiple scales. This take on reduction and emergence is obviously not unique to the causal process model discussed here. It is implied by any approach that recognizes that properties of composites depend on interacting parts. For example, Wimsatt discusses at some length how notions of reduction and emergence should be understood given his understanding of complex systems.
He offers a definition of reductive explanation that shows a similarity to the causal process view of constitutive explanation: A reductive explanation of a behavior or a property of a system is one that shows it to be mechanistically explicable in terms of the properties of and interactions among the parts of the system. (Wimsatt, 2007, 275) This approach to reductive explanation is perfectly consistent with a form of emergence, in the sense that the properties of the whole are intuitively "more than the sum of its parts (277)." The key idea here, again, is that composition includes the interactions between the parts. For comparison, Wimsatt introduces the notion of "aggregativity", where the properties of the whole are "mere" aggregates of the properties of its parts. For this to happen, "the system property would have to depend on the parts' properties in a very strongly atomistic manner, under all physically possible decompositions (277-280)". He analyzes the conditions needed for this to occur and concludes they are almost never met outside of the case of conserved quantities in (idealized) physical theories. Simon had introduced similar notions, describing hypothetical idealized systems where there are no interactions between parts as "decomposable," which are then contrasted with "nearly decomposable systems, in which the interactions among the subsystems are weak but not negligible (Simon, 1996, 197, emphasis original)." To highlight this distinguishing feature, Simon considers a boundary case: that of gases. Ideal gases, for which interactions between molecules are assumed negligible, are, for Simon, decomposable systems. In the causal process account, we would similarly point out that an ideal gas doesn't have a clearly defined constituting group: the molecules do not have a characteristic pattern of interacting with each other at any greater frequency than they do with the external system (the container). An actual, non-ideal gas, on the other hand, with weak but non-negligible interactions between constituent molecules, would correspond to the idea of a composite causal process. Some contemporary work in metaphysics, focused on dispositions/powers and their role in causation, has incorporated similar views about composition and emergence. Rani Lill Anjum and Stephen Mumford describe a "dynamic view" of emergence: The idea is that emergent properties are sustained through the ongoing activity; that is, through the causal process of interaction of the parts. A static instantaneous constitution view wouldn't provide this (Anjum & Mumford 2017, 101) In their view, higher scale properties are emergent because they depend on lower-level parts whose causal properties are undergoing transformation as they interact, consistent with the view discussed here. Most recently, R. D. Ingthorsson's new book, while not discussing emergence and reduction explicitly, also presents a view of composition based on the causal interaction of parts which is in the same spirit (Ingthorsson, 2021, Ch. 6). I think composite causal processes provide a good framework for understanding how natural systems are constituted. A puzzle for the view, however, might arise via its use of patterns of discrete causal interactions to define composites. How would this work in physics, where the forces binding together composites, such as the Coulomb (electrostatic) force, are continuous?
One possible answer is to point out that physical models employ idealizations, and claim their depictions can still correspond to the "deeper" ontological picture of causal processes. But I believe we can find a better and more comprehensive answer than this. To do so, we must look more carefully at physical accounts of nature's building blocks, atoms and molecules, and see if we can uncover a correspondence with the causal theory. I think we can, assuming we utilize the RQM interpretation. This is the subject of the next post. Anjum, R., & Mumford, S. (2017). Emergence and Demergence. In M. Paolini Paoletti, & F. Orilia (Eds.), Philosophical and Scientific Perspectives on Downward Causation (pp. 92-109). New York: Routledge. Ingthorsson, R.D. (2021). A Powerful Particulars View of Causation. New York: Routledge. Millstein, R. L. (2009). Populations as Individuals. Biological Theory, 4(3), 267-273. Salmon, W. (1984). Scientific Explanation and the Causal Structure of the World. Princeton: Princeton University Press. Simon, H. (1962). The Architecture of Complexity. Proceedings of the American Philosophical Society, 106(6), 467-482. Wimsatt, W. C. (2007). Re-Engineering Philosophy for Limited Beings. Cambridge, Massachusetts: Harvard University Press. Photo: Alina Grubnyak via Unsplash [1] This passage goes on to mention other, less neat, network patterns: "Under somewhat different conditions they yield the kinds of systematic slices across which I have called perspectives. Under some conditions they are so richly connected that neither perspectives nor levels seem to capture their organization, and for this condition, I have coined the term causal thickets (Wimsatt, 2007, 200)." Thursday, January 28, 2021 Why I Favor Relational Quantum Mechanics I think Relational Quantum Mechanics (RQM), initially proposed by Carlo Rovelli, is the best interpretation of quantum mechanics.1 It is important to note right away, however, that I depart from Rovelli's thinking in one important respect. He takes an anti-realist view of the wave function (or quantum state). As I will discuss below, I endorse a view that sees the wave function as representing something real (even if imperfectly and incompletely). There are two reasons I prefer RQM. First, I think it makes better sense of QM as a successful scientific endeavor compared to other interpretations. Second, it fits neatly with an attractive ontology for our world. Quick Introduction Orthodox or "textbook" QM features a closely knit family of mathematical models and recipes for their use. The models describe the state of a microscopic system characterized by certain physical quantities (typically given in the form of a wave function). The theory gives a formula for calculating how the system evolves in time (the Schrödinger equation). Notably, because of the nature of the mathematical formalism, one typically cannot ascribe definite values to the physical quantities of interest. However, QM also offers a procedure for calculating (probabilistically) the outcomes of particular measurements of these quantities. The problem with taking orthodox QM as a universally applicable physical theory can be described in several ways (this is usually called the measurement problem). One simple way is to note an inconsistency arising from the presence of what appear to be two completely different kinds of interaction. In the absence of any interaction, a system evolves in time as described by the Schrödinger equation. But interactions are handled in two different ways.
On the one hand, we have the measurement process (utilizing the Born rule), that is, an interaction between the quantum system under investigation and a scientist’s experimental apparatus. On the other hand, we can also describe an interaction between two systems that are not subject to measurement. In the first kind of interaction, a definite value of a system’s physical quantity is found (we say the wave function of the system collapses). In the second kind of interaction, we represent two (or more) systems, previously considered isolated, as now correlated in a composite system (we say they become entangled). This system evolves in the same fashion as any isolated system. And as such the composite system may be in a superposition of states where no definite values for a given quantity can be ascribed. In a nutshell, the RQM solution is to stipulate that a physical interaction is a measurement-style event. However, this is only true for those systems directly involved: the systems are merely entangled from the standpoint of other “third-party” systems. The appearance of two sorts of interaction arises from a difference in perspective. This is weird, of course, since particular values of the physical quantities revealed in an interaction are manifest only relative to the interaction partner(s) involved. They don’t exist in a fully objective way. All interpretations of QM ask us to accept something unintuitive or revisionary. This is the “ask” made by RQM. Reason One: RQM Validates Quantum Theory as Successful Science Before discussing the interpretation further, I can quickly outline a reason to prefer RQM to many competing approaches. This point is primarily a negative one. In contrast to other approaches, RQM is an interpretation that delivers a satisfying account of QM as a successful scientific theory: one that draws appropriate connections between the results of our experimental investigations and a meaningful picture of the world around us. I obviously won’t be doing a deep dive into all the options, but will quickly sketch why I think RQM is superior. First, for a quick cut in the number of alternatives, I eliminate views that are merely pragmatic, or see QM as only describing what agents experience, believe, or know. I insist that alongside its other aims (such as prediction and practical control), a scientific theory should contribute to our understanding of nature. To do so, the theory should offer successful explanations of worldly phenomena, that is, ones that tell us (broadly speaking) what kind of things are out there and how they hang together. This means, in turn, that at least some of the elements of the mathematical models that we use should represent features of the world (allowing that the fidelity of any given representation is significantly constrained by reasons having to do with the aims of the scientist and the tools employed). I will outline in the next section of this post how I think this works in the case of RQM. As for the remaining alternatives, I will limit the conversation to the three most prominent broadly realist approaches to thinking about QM: Everett-style interpretations, Bohmian mechanics, and objective collapse approaches, such as Ghirardi-Rimini-Weber (GRW) theory (the implied ontology of these approaches might be fleshed out in more than one way, but I will not pursue the details here.) For these alternatives, a different issue rises to the fore. 
An interpretation should not just consider how the features of formal QM models might correspond to reality. It should also respect the status of quantum theory as a hugely successful experimental science. Orthodox or "textbook" QM includes not just the mathematical formalism, but also the recipes for how it is used by investigators and how it connects to our experiences in the laboratory. And here is where I think Everettians and Bohmians in particular fall short. Note first that all three of the alternative approaches depart from orthodox QM by adding to, subtracting from, or modifying its basic elements.2 GRW changes things by replacing the Schrödinger equation with a new formula that attempts to encompass both continuous evolution and the apparent collapse onto particular outcomes observed in measurement. Bohmian mechanics adds new elements to the picture by associating the quantum state with a configuration of particles in 3D space and adding a new guidance equation for them. Everettian approaches just drop the measurement process and seek to reinterpret what is going on without it. For the Everett framework in particular, I'm not sure the extent of its departure from orthodox QM is always appreciated. It is sometimes claimed to be the simplest version of QM, since it works by simply removing what is often seen as a problematic element of the theory. But in doing so it divorces QM from its basis in experimental practice. This is a drastic departure indeed. To see this, note that to endorse Everett is to conclude that the very experiments that prompted the development of QM and have repeatedly corroborated it over nearly a century are illusory. For the Everettian, to take one example, no experimental measurement of the spin of an electron has ever had or will ever have a particular outcome (all outcomes happen, even though we'll never perceive that). Bohmian mechanics also turns our experiments into fictions. For the Bohmian, there is actually no electron and no spin involved in the measurement of an electron's spin. Rather, there is an orchestrated movement of spinless point particles comprising the system and the laboratory (and the rest of the universe) into the correct spatial positions. GRW-style approaches are different, in that they are testable alternatives to QM. Unfortunately, researchers have been busy gradually ruling them out as empirically adequate alternatives (see, e.g., Vinante, 2020). It is also worth noting, however, that GRW also distorts the usual interpretation of experimental results by stipulating that all collapses are in the position basis. Unlike these approaches, RQM is truly an interpretation, rather than a modification, of orthodox QM, a successful theory that was motivated by experimental findings and is extremely well supported by decades of further testing. The measurement process, in particular, is not some problematic add-on to quantum theory – it is at the heart of it. Human beings and our experiences and interventions are part of the natural world. RQM does justice to this fact by explaining that measurements—the connections between quantum systems and ourselves—are just like any other physical interaction. Reason Two: RQM Offers an Attractive Ontological Picture Laudisa and Rovelli (in the SEP article) describe RQM's ontology as a "sparse" one, comprised of the relational interaction events between systems.
This event ontology has attractive features (akin to the "flash" ontology sometimes discussed in conjunction with objective collapse interpretations). There is no question of strange higher-dimensional spaces or other worlds: the events happen in spacetime. Also, one of the goals of science-inspired metaphysical work is to foster the potential unification of scientific theories. Importantly, a QM interpretation that features an event ontology offers at least the promise of establishing a rapport with relativity theory, which is typically seen as putting events in the leading role (see a recent discussion by Maccone, 2019). But does giving this role to interaction events preclude a representational role for the wave function? Given that physical properties of systems only take definite values when these events occur, perhaps systems should not be accorded any reality apart from this context. And, in fact, Carlo Rovelli has consistently taken a hard anti-realist stance toward the wave function/quantum state. In his original presentation of RQM he gave it a role only as a record of information about one system from the point of view of another, and thought it was possible to reformulate quantum theory using an information-based framework. This conflicts with my insistence above that a good scientific theory should represent features of the world; such anti-realism is inconsistent with that aim. Thankfully, there is no need to follow Rovelli on this point. Instead, I concur with a view outlined by Mauro Dorato recently. He suggests that rather than view non-interacting systems as simply having no real properties, they can be characterized as having dispositions: In other words, such systems S have intrinsic dispositions to correlate with other systems/observers O, which manifest themselves as the possession of definite properties q relative to those Os. (Dorato, 2016, 239; emphasis original) As he points out, referencing ideas due to philosopher C.B. Martin, such manifestations only occur as mutual manifestations involving dispositions characterizing two or more systems.3 Since these manifestations have a probabilistic aspect to them, the dispositions might also be referred to as propensities. So, here the wave function has a representational role to play. It represents a system's propensities toward interaction with a specified partner system (or systems). The Schrödinger equation would show how these propensities evolve across time in the absence of interaction. Now, it is true that the QM formalism does not offer a full or absolute accounting for a system's properties, given its relational limitations. But here we should recall that models across the sciences are typically incomplete and imperfect. In addition to employing approximations and idealizations, they approach phenomena from a certain perspective dictated by the nature of the research program. But we can say the wave function represents something real (if incompletely and in an idealized way). Reality has two aspects: non-interacting systems with propensities, and the interaction events that occur in spacetime. The idea that properties are dispositional in nature is one that has been pursued increasingly by philosophers in recent years.
It fits well with physics, since both state dependent and independent properties (like mass and charge) are only known via their manifestations in interactions.4 While advocates disagree about the details, the idea that the basic ontology of the world features objects that bear dispositions/propensities has also been used more widely to address a number of difficult philosophical topics (like modality). Most importantly, perhaps, dispositions and their manifestations provide a good basis for theorizing about causation.5 Fitting Both Quantum Systems and Scientists Into the Causal Web To conclude, I’ll briefly describe how I would flesh out this ontological picture, putting an emphasis on causation. I mentioned above the role representational models play in explanation. To be more specific, scientific explanations are typically causal explanations: they seek to place a phenomenon in a structured causal context. When successful explanations feature models, then, these models represent features of the world’s causal structure. The suggestions above on how to view the ontology associated with RQM fit into a particularly attractive theory of this structure. This is a modified version of Wesley Salmon’s causal process account (Salmon, 1984). Here the basic entity or object is labeled a causal process, and there are two dimensions of causation: propagation and production. Propagation refers to the evolution of a causal process in the absence of interaction, while production refers to the change that causal processes undergo when an interaction occurs. As described by Ladyman & Ross: The metaphysic suggested by process views is effectively one in which the entire universe is a graph of real processes, where the edges are uninterrupted processes, and the vertices the interactions between them (Ladyman & Ross, 2007, 263). According to Salmon, a propagating causal process carries or “transmits” causal influence from one spacetime point to another. The character of this causal influence is then altered by interactions. I theorize that this causal influence takes the form of a cluster of dispositions or propensities toward mutual interactions (aka a propensity profile). The interactions produce a change in this profile.6 To summarize: 1. The web of nature has two aspects: the persisting causal process and the causal interaction event (a discrete change-making interaction between processes). 2. The quantum formalism offers a partial representation of the propensity profile of a causal process. It is partial because these are only the propensities toward manifestations that take place in interactions with (one or more) designated reference systems. The Schrödinger equation represents the propagation of these propensities from one interaction to the next. 3. All manifestations are mutual, and take the form of a change in the profile of each process involved in the interaction. A quantum measurement is an interaction like any other. Humans may treat the wave function as representing the phenomena we are tracking, but we are also causal processes, as are our measuring devices. It is just that the changes manifest in us in an interaction (our altered propensity profiles) are conceptualized as epistemic. 4. Per RQM, when two physical systems interact, they are represented as an entangled composite system from the perspective of a third system. 
This relational representation of the composite system might in practice be thought of as a limitation on what the third system "knows." Under certain conditions, however, this entanglement can have a distinctive indirect impact on the third system—interference effects—revealing it is not only epistemic (as always, decoherence explains why we rarely experience these effects). There is much more to flesh out, of course. I would add to this summary an account of how composite systems form higher-level propensities of their own, based on the pattern of repeated interactions of their constituents. Also, there is an interesting question of how serious of a (relational or perspectival) scientific realist to be about the properties identified in quantum theory. My preference is to be a realist about the (singular) causal network, but view the formalism as offering only an idealized depiction of regularities in the propensity profiles of the underlying causal processes. 1 For background, see the Stanford Encyclopedia article (Laudisa & Rovelli, 2019). Rovelli's original paper is Rovelli (1996; arXiv:quant-ph/9609002). Good philosophical discussions include Brown (2009), Van Fraassen (2010), Dorato (2016; note that the final version differs significantly from the circulated preprint), and Ruyant (2018). 2 For a recent attempt to carefully describe the principles of orthodox QM, see Poinat (2020). 3 What Martin calls "reciprocal disposition partners." See Martin (2008), especially Ch. 5. 4 In addition to contemporary work by Dorato and others, there have been a handful of theorists over the decades since QM was formulated who have employed dispositions/propensities to interpret QM. See Suárez (2007) for a survey of some of these. 5 Important work here includes Chakravartty (2007) and Mumford & Anjum (2011). 6 The main changes from Salmon's own work are as follows. The first is to be a realist about dispositions/propensities, whereas Salmon's version of empiricism drove him to reject any suggestion of causal powers. He characterized causal processes in terms of their transmission of an observable "mark" or, in a subsequent version of the theory, the transmission of a conserved physical quantity. The second change is that causal processes cannot be said to propagate in spacetime, as Salmon envisioned, since this would be inconsistent with the non-local character of quantum systems. Brown, M. J. (2009). Relational Quantum Mechanics and the Determinacy Problem. The British Journal for the Philosophy of Science, 60(4), 679-695. Chakravartty, A. (2007). A Metaphysics for Scientific Realism. Cambridge: Cambridge University Press. Dorato, M. (2016). Rovelli's Relational Quantum Mechanics, Anti-Monism, and Quantum Becoming. In A. Marmodoro, & D. Yates (Eds.), The Metaphysics of Relations (pp. 235-262). Oxford: Oxford University Press. Ladyman, J., & Ross, D. (2007). Everything Must Go. Oxford: Oxford University Press. Laudisa, F., & Rovelli, C. (2019). Relational Quantum Mechanics. The Stanford Encyclopedia of Philosophy, Winter 2019 Edition. Maccone, L. (2019). A Fundamental Problem in Quantizing General Relativity. Foundations of Physics, 49, 1394-1403. Martin, C. (2008). The Mind in Nature. Oxford: Oxford University Press. Mumford, S., & Anjum, R. L. (2011). Getting Causes from Powers. Oxford: Oxford University Press. Poinat, S. (2020). Quantum Mechanics and Its Interpretations: A Defense of the Quantum Principles.
Foundations of Physics, 1-18. Rovelli, C. (1996). Relational Quantum Mechanics. International Journal of Theoretical Physics, 35, 1637-1678. Ruyant, Q. (2018). Can We Make Sense of Relational Quantum Mechanics? Foundations of Physics, 48, 440-455. Suárez, M. (2007). Quantum Propensities. Studies in History and Philosophy of Modern Physics, 38, 418-438. Van Fraassen, B. (2010). Rovelli's World. Foundations of Physics, 40, 390-417. Vinante, A. (2020). Narrowing the Parameter Space of Collapse Models with Ultracold Layered Force Sensors. Physical Review Letters, 125, 100401. Thursday, April 16, 2020 Metaphysics and the Problem of Consciousness In a recent post I talked about different frameworks for addressing the subjective dimension of consciousness. One path used ideas from philosophy of mind, the other looked to evolutionary biology. Of course, many who ponder solving this and related aspects of the mind-body problem take a more overtly metaphysical turn. Here I'll briefly discuss why I don't think these efforts are likely to get it right. Against "vertical" metaphysical relations My first post in this recent series was prompted by reading Philip Goff's book presenting his panpsychist approach to the problem of consciousness.1 In the sections where he addresses the combination problem, Goff considers alternative strategies for situating a macro-size conscious subject in the world: several of these involve appeals to "grounding". To sketch the idea: grounding (in its application to ontology) is a kind of non-causal explanatory metaphysical relation between entities, with things at a more fundamental "level" of reality typically providing a ground for something at a higher level. For example, a metaphysician fleshing out the notion of a physicalist view of reality might appeal to a grounding relationship between, say, fundamental simple micro-physical entities and bigger, more complex macro-size objects. It's a way of working out the idea that the former account for the latter, or the latter exist in virtue of the former. There are a variety of ways to explicate this kind of idea.2 Goff presents a version called constitutive grounding. He thinks this faces difficulties in the case of accounting for macro-sized conscious subjects in terms of micro-sized ones, and discusses an alternative approach where the more fundamental thing is at the higher level: he endorses a view where the most fundamental conscious entity is, in fact, the entire cosmos ("cosmopsychism"). In this scenario, human and animal consciousness can be accounted for via a relation to the cosmos called grounding by subsumption. Goff motivates these various notions of grounding with examples that appeal to how certain of our concepts seem to be linked together, or to how our visual experiences appear to be composed. Please read the book for the details.3 Here, I want to comment on why I don't find an approach like this to be very illuminating. It is actually a part of a more general methodological concern I have developed over time. Certainly, trying to uncover the metaphysical truth about things is always a somewhat quixotic endeavor!
But I think it is extremely likely to go wrong when done via excavation of our intuitions in the absence of close engagement with the relevant sciences.4 To make a long story short, I'll just say that here I concur with much of Ladyman and Ross's infamous critique of analytic metaphysics.5 But to get more specific, I have a deep skepticism in particular about the whole notion of synchronic ("vertical") metaphysical relations. Not only panpsychist discussions but a great many philosophy of mind debates are structured around the idea that ontological elements at different "levels" are connected by such relations as part-whole, supervenience, or grounding. Positing these vertical relations, in turn, has contributed to confusion in debates about notions of (ontological) reduction and emergence. The causal exclusion problem, I believe, is misguided to the extent it is premised in part on the existence of these vertical relations. I see no evidence that there are any such synchronic relations in the actual world investigated by the natural sciences (although they may characterize some of our idealized models). At arbitrary infinitesimal moments of time there exist no relata to connect: there are no such things as organisms, brains, cells, or even molecules. All these phenomena are temporally extended dynamic processes. Any static conception we employ is an artifact of our cognitive apparatus or our representational schemes. Reifying these static conceptions and then drawing vertical lines between entities at different scales is a mistake. My view is that all relations of composition in nature are diachronic. Solve the problem with a new metaphysics of causation? Given this, I think questions about how phenomena at different scales relate to each other involve a causal form of composition. So, one might ask whether thinking about the nature of causation can help with the problem of consciousness. Even before doing my own deep dive into research on the topic, I was drawn to those panpsychist approaches that explored this avenue. As mentioned in the earlier post, Russell's account takes a causal approach to the structuring of subjects, although he himself doesn't go on to offer a detailed theory.6 I think Whitehead's speculative metaphysics can be characterized, at least in part, as an attempt to use a rich metaphysics of causation to account for the integration of mind and world. In more recent times, Gregg Rosenberg developed an account that found a home for consciousness in the nature of causation.7 Over time, however, I have also become skeptical of these more expansive causal theories. This is in spite of my view of the central role causation should play in any account of the composition of natural systems. Here, the problem is that these approaches go too far by baking in the answer to the mind-body problem from the beginning. Methodologically, I believe we should resist the urge to invent a causal theory that is so enriched with specific dualistic features that it directly addresses the challenge. For example, in Whitehead's system every causal event ("actual occasion") already has in place both a subjective and an objective "pole." For Rosenberg, two kinds of properties ("effective" and "receptive") are involved in each causal event, and this ultimately underpins the apparent dualism of the physical and mental.
In contrast to these speculative solutions, we should be more conservative and pursue a causal theory that makes sense of our successful scientific explanations of natural phenomena, and then see how that effort might shed light on the mind. I'll discuss my view on this in a future post. 1 Consciousness and Fundamental Reality. 2017. Oxford: Oxford University Press. 2 Here's the SEP article on grounding. 3 Also, check out Daniel Stoljar's review. 4 A quite different way metaphysics can go wrong is when those who are truly and deeply engaged with science (specifically physics) succumb to the tendency to (more or less) read ontology off of the mathematical formalism. But that is a discussion for another time. 5 Everything Must Go: Metaphysics Naturalized. 2007. James Ladyman & Don Ross. Oxford: Oxford University Press. See Ch. 1. 6 At least this is true of The Analysis of Matter (1927), where the view now known as Russellian Monism was most fully developed. In his later Human Knowledge: Its Scope and Limits (1948), he presents a bit of a theory via his account of "causal lines:" specifically, this comes in the context of an argument that such a conception of causation is needed to account for successful scientific inferences (part VI, chapter V). As an aside: by this time, Russell seemed to have come quite a long way toward a reversal of the arguments presented in his (much more cited) "On the Notion of Cause" from 1913. There, Russell argued that the prevailing philosophical view of cause and effect does not play a role in advanced sciences. Someone looking to harmonize the early and late Russell might argue that the disagreement between the two positions is limited: one could say the later Russell is developing causal notions that better suit the practice of science as compared to the more traditional concept that is the focus of criticism in the earlier article. However, I think it is clear that the later book's perspective is quite a sea change from the earlier paper's generally dismissive approach to the importance of causation to science. 7 A Place for Consciousness. 2004. Oxford: Oxford University Press. I have some older posts about the book.
Wednesday, December 07, 2005 No comment This morning, I had the plan to write something about the historical figure behind St. Nicolaus (Santa Claus for his friends) who in Germany fills children's shoes with sweets and small presents during the night before December 6th. On my way to IUB, I had heard a radio program about him: He lived in the fourth century somewhere in what is now Turkey, was a bishop, and provided three sisters, who were so poor that they had to prostitute themselves, with balls of gold so they could marry. Some 700 years after his death, some knights brought his body to Bari in Italy to save it from the Arabs and then parts of his body were distributed all over Europe. The character of this saint also changed a lot over the centuries from being the saint of millers to the saint of drinkers (apparently, the Russian word for getting drunk is derived from his name) to the saint of children. But this is not what I am going to talk about. Rather, I would like to point out this news: Heise is a German publishing company that publishes what are (not only in my mind) by far the best computer journals here. They also have a news ticker, which I think is comparable to slashdot and which hosts a discussion forum. Now a court (Landgericht Hamburg) has ordered the Heise publishing house to make sure that there is no illegal content in the forum (and not only delete entries when it is pointed out to them that they are illegal). Otherwise, they could be fined by any lawyer ('Abmahnung'). The court ruled in the case of a posting providing a script to run a simple denial of service attack against the server of a company that was discussed in the original article. The court decided that Heise must make sure that no such illegal content is distributed via their website. Heise will challenge this ruling at the next higher level. But if the ruling is upheld, it means that in Germany anybody providing any possibility for users to leave comments is potentially threatened with fines, no matter if it is a forum, a guest book or the comment section of a blog: You can simply post an anonymous comment with some illegal content and then sue the provider of the website for publishing it. This would be the end of any unmoderated discussion on the German part of the internet. Just another case where a court shows complete ignorance of the workings of the internet. So, comment, as long as I still let you! (note: I had written this yesterday, but due to a problem with I could not post it until today) Thursday, November 17, 2005 What is not a duality A couple of days ago, Sergey pointed me to a paper Background independent duals of the harmonic oscillator by Viqar Husain. The abstract promises to show that a class of topological, and thus background independent, theories are dual to the harmonic oscillator. Sounds interesting. So, what's going on? This four and a half page paper starts out with one page discussing the general philosophy, how important GR's lesson to look for background independence is and how great dualities are. The holy grail would be to find a background independent theory that has some classical, long wavelength limit in which it looks like a metric theory. For dualities, the author mentions the Ising/Thirring model duality and of course AdS/CFT. The latter already expresses a metric theory in terms of an ordinary field theory, but the AdS theory is not background independent, it is an expansion around AdS and one has to maintain the AdS symmetries at least asymptotically.
So he looks for something different. So what constitutes a duality? Roughly speaking, it means that there is a single theory (defined in an operational sense: the theory is the collection of what one could measure) that has at least two different-looking descriptions. For example, there is one theory that can either be described as type IIB strings on an AdS5 x S5 background or as strongly coupled, large-N, N=4 gauge theory. Husain gives a more precise definition when he claims: Two [...] theories [...] are equivalent at the quantum level. "Equivalent" means that there is a precise correspondence between operators and quantum states in the dual theories, and a relation between their coupling constants, at least in some limits. Then he goes on to show that there is a one-to-one map between the observables in some topological theories and the observables of the harmonic oscillator. Unfortunately, such a map is not enough for a duality in the usual sense. Otherwise, all quantum mechanical theories with a finite number of degrees of freedom would be dual to each other. All have equivalent Hilbert spaces and thus operators acting on one Hilbert space can also be interpreted as operators acting in the other Hilbert space. But this is only kinematics. What is different between the harmonic oscillator and, say, the hydrogen atom is the dynamics. They have different Hamiltonians. By the above argument, the oscillator Hamiltonian also acts in the hydrogen atom Hilbert space but it does not generate the dynamics. So what does Husain do concretely? He focusses on BF theory on space-times of the globally hyperbolic form R x Sigma for some compact Euclidean 3-manifold Sigma. There are two fields, a 2-form B and an (abelian for simplicity) 1-form A with field strength F=dA. The Lagrangian is just B wedge F. This theory does not need a metric and is therefore topological. Classically, the equations of motion are dB=0 and F=0. For quantization, Husain performs a canonical analysis. From now on, indices a,b,c run over 1,2,3. He finds that epsilon_abc B_bc is the canonical momentum for A_a and that there are first class constraints setting F_ab=0 and the spatial dB=0. Observables come in two classes O1(gamma) and O2(S) where gamma is a closed path in Sigma and S is a closed 2-surface in Sigma. O1(gamma) is given by the integral of A over gamma, while O2(S) is the integral of B over S. Because of the constraints, these observables are invariant under deformations of S and gamma and thus only depend on the homology classes of gamma and S. Thus one can think of O1 as living in H^1(Sigma) and O2 as living in H^2(Sigma). Next, one computes the Poisson brackets of the observables and finds that two O1's or two O2's Poisson commute while {O1(gamma),O2(S)} is given in terms of the intersection number of gamma and S. As the theory is diffeomorphism invariant, the Hamiltonian vanishes and the dynamics are trivial. Basically, that's all one could (should) say about this theory. However Husain goes on: First, he specialises to Sigma = S1 x S2. This means (up to equivalence) there is only one non-trivial gamma (winding around S1) and one S (winding around the S2). Their intersection is 1. Thus, in the quantum theory, O1(gamma) and O2(S) form a canonical pair of operators having the same commutation relations as x and p. Another example is Sigma=T3 where H^1 = H^2 = R^3 so this is like 3d quantum mechanics. Husain chooses to form combinations of these operators analogous to the creation and annihilation operators of the harmonic oscillator.
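In compact form, the construction reads roughly as follows (my rendering; normalizations and numerical factors are schematic, not taken from the paper):

```latex
S=\int_{\mathbb{R}\times\Sigma} B\wedge F,\qquad F=dA,
\qquad\text{equations of motion: } dB=0,\ F=0,\\[4pt]
O_1(\gamma)=\oint_\gamma A,\qquad O_2(S)=\oint_S B,\qquad
\{O_1(\gamma),O_2(S)\}\propto\nu(\gamma,S)\ \ (\text{intersection number}),\\[4pt]
\Sigma=S^1\times S^2:\qquad [\hat O_1,\hat O_2]=i,\qquad
a\sim\tfrac{1}{\sqrt2}\,(\hat O_1+i\,\hat O_2),\qquad [a,a^\dagger]=1 .
```

So what one gets, at the kinematical level, is just the x-p (equivalently a, a+) algebra of a single degree of freedom, and nothing more.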
According to the above definition of "duality" this constitutes a duality between the BF-theory and the harmonic oscillator: We have found a one-to-one map between the algebras of observables. What he misses is that there is a similar one-to-one map to any other quantum mechanical system: One could directly identify x and p and use that for any composite observables (for example for the particle in any complicated potential). Alternatively, one could take any orthonormal basis e1, e2,... of a (separable) Hilbert space and define ladder operators a+ mapping e(i) to e(i+1) and a acting in the opposite direction. Big deal. This map lifts to a map from all operators acting on that Hilbert space to the observables of the BF-theory. So, for the above definition of "duality" all systems with a finite number of degrees of freedom are dual to each other. What is missing of course (and I should add that Husain realises this) is that this is only kinematical. A system is not only given by its algebra of observables but also by the dynamics or time evolution or Hamiltonian: One has to single out one of the operators in the algebra as the Hamiltonian of the system (leaving issues of convergence aside, strictly one only needs time evolution as an automorphism of the algebra and can later ask if there is actually an operator that generates it. This is important in the story of the LQG string but not here). For BF-theory, this operator is H_BF=0 while for the harmonic oscillator it is H_o= a^+ a + 1/2. So the dynamics of the two theories have no relation at all. Still, Husain makes a big deal out of this by claiming that the harmonic oscillator Hamiltonian is dual to the occupation number operator in the topological theory. So what? The occupation number operator is just another operator with no special meaning in that system. But even more, he stresses the significance of the 1/2: The occupation number doesn't have that and if for some (unclear) reason one were to take that operator as a generator of something, there would not be any zero point energy. And this might have a relevance for the cosmological constant problem. What is that? There is one theory (as it happens, a background independent one) that has a Hamiltonian. But if one takes a different, random operator as the Hamiltonian, that operator has its smallest eigenvalue at 0. What has that to say about the cosmological constant? Maybe one should tell these people that there are other dualities that identify more than just the structure of the observable algebra (without dynamics). But, dear reader, be warned that in the near future we will read or hear that background independent theories have solved the cosmological constant problem. Let me end with a question that I would really like to understand (and probably, there is a textbook answer to it): If one quantises a system the way we have done it for the LQG string, one does the following: One singles out special observables, say x and p (or their exponentials), and promotes them to elements of the abstract quantum algebra (the Weyl algebra in the free case). Then there are automorphisms of the classical algebra that get promoted to automorphisms of the quantum algebra in a straightforward way. For the string, those were the diffeomorphisms, but take simply the time evolution.
Then one uses the GNS construction to construct a Hilbert space and tries to find operators in that Hilbert space that implement those automorphisms: Let a_t be the automorphism of the algebra sending observable O to a_t(O), and let p be the representation map that sends algebra elements to operators on the Hilbert space. Then one looks for unitary operators U(t) (or their hermitian generators) such that p( a_t(O) ) = U(t)^-1 p(O) U(t). In the case of time evolution, this yields the quantum Hamilton operator. However, there is an ambiguity in the above procedure: If U(t) fulfils the above requirement, so does e^(i phi(t)) U(t) for any real number phi(t). Usually, there is an additional requirement as t comes from a group (R in the case of time translations but Diff(S^1) in the case of the string) and one could require that U(t1) U(t2) = U(t1 + t2) where + is the group law. This does not leave much room for the t-dependence of phi(t). In fact, in general it is not possible to find phi(t) such that this relation is always satisfied. In that case we have an anomaly and this is exactly the way the central charge appears in the LQG string case. Assume now that there is no anomaly. Then it is still possible to shift phi by a constant times t (in case of a one dimensional group of automorphisms, read: time translation). This does not affect any of the relations about the implementation of the automorphisms a_t or the group representation property. But in terms of the Hamiltonian, this is nothing but a shift of the zero point of energy. So, it seems to me that none of the physics is affected by this. The only way to change this is to turn on gravity because the metric couples to this in the form of a cosmological constant. Am I right? That would mean that any non-gravitational theory cannot say anything about zero point energies because they are only observable in gravity. So if you are studying any theory that does not contain gravity you cannot make any sensible statements about zero point energies or the cosmological constant. Tuesday, November 15, 2005 Fixing radios Thursday, November 03, 2005 Sudoku types You surely have seen these sudoku puzzles in newspapers: In the original version, it is a 9x9 grid with some numbers inserted. The problem is to fill the grid with the numbers 1 to 9 such that in each row, column and 3x3 block each digit appears exactly once. In the past I was only mildly interested in them; I had done perhaps five or six over several weeks, mostly the ones in Die Zeit. But for the last couple of days I was back in the UK, where this is really a big thing. And our host clearly is an addict with books all over the house. So I myself did a couple more of them. And indeed, there is something to it. But what I wanted to point out is that I found several types of approaches to these puzzles. This starts from "I don't care about puzzles, especially if they are about numbers". This is an excellent attitude because it saves you lots of time. However, sudokus are about permutations of nine things and it just happens that they are usually numbers but this is inessential in the problem. A similar approach was taken by a famous Cambridge physicist who said that he found "solving systems of linear equations" not too entertaining. Well, either he has a much deeper understanding of sudokus than I do or he has not really looked at a single one to see that probably linear equations are of no help at all.
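(What does help is plain constraint checking plus backtracking. Just to give the flavor, here is a minimal sketch of my own, assuming the grid is a 9x9 list of lists with 0 marking empty cells; it is not the program linked below.)

```python
def allowed(grid, r, c, d):
    """Row, column and 3x3 block constraints for placing digit d at (r, c)."""
    if d in grid[r]:
        return False
    if any(grid[i][c] == d for i in range(9)):
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)
    return all(grid[br + i][bc + j] != d for i in range(3) for j in range(3))

def solve(grid):
    """Fill the 9x9 grid (0 = empty) in place by brute-force backtracking."""
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                for d in range(1, 10):
                    if allowed(grid, r, c, d):
                        grid[r][c] = d
                        if solve(grid):
                            return True
                        grid[r][c] = 0
                return False   # nothing fits in this cell: backtrack
    return True                # no empty cell left: solved
```

Adding constraint propagation (keeping track of which digits are still allowed in each cell) speeds this up considerably, but even the naive version usually handles newspaper puzzles quickly.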
But the main distinction (and that probably tells you something about your degree of geekiness) is in my mind: How many sudokus do you solve before you write a program that does it? If the answer is 0 you are really lazy. You could object that if you enjoy solving puzzles why would you delegate that fun to your computer but this just shows that you have never felt the joy of programming. Here is my go at it. Wednesday, October 26, 2005 Spacetime dynamics and RG flow A couple of days ago there appeared a paper by Freedman, Headrick, and Lawrence that I find highly original. It does not just follow up on a number of other papers but actually answers a question that has been lurking around for quite a while but had not really been addressed so far (at least as far as I am aware). I had asked myself the question before but attributed it to my lack of understanding of the field and never worried enough to try to work it out myself. At any rate, these gentlemen have, and have produced this beautiful paper. It is set in the context of tachyon condensation (and this is of course where all this K-Theory stuff is located): You imagine setting up some arrangement of branes and (as far as this paper is concerned even more important as this is about closed strings) some spatial manifold (if you want, with a first fundamental form, i.e. a spatial metric, and its conjugate momentum) with all the fields you like in terms of string theory and ask what happens. In general, your setup will be unstable. There could be forces or you could be in some unstable equilibrium. The result is that typically your space-time goes BOOOOOOOOOOM as you had Planck scale energy densities all around but eventually the dust (i.e. gravitational and other radiation) settles and you ask: What will I find? The K-Theory approach to this is to compute all the conserved charges before turning on dynamics and then predicting you will end up in the lowest energy state with the same value for all the charges (here one might worry that we are in a gravitational theory which does not really have local energy density but only different expansion rates but let's not do that tonight). Then K-Theory (rather than for example de Rham or some other cohomology) is the correct theory of charges. The disadvantage of this approach is that it is potentially very crude and just knowing a couple of charges might not tell you a lot. You can also try to approach the problem from the worldsheet perspective. There you start out with a CFT and perturb it by a relevant operator. This kicks off a renormalisation group flow and you will end up in some other CFT describing the IR fixed point. General lore tells you that this IR RG fixed point describes your space-time after the boom. The c-theorem tells you that the central charge decreases during the flow but of course you want a critical string theory before and after and this is compensated by the dilaton getting the appropriate slope. The paper addresses this lore and checks if it is true. The first concern is of course that proper space-time dynamics is expected to (classically) be given by some ordinary field equation in some effective theory with typically two time derivatives and time reversal symmetry where the beta functions play the role of force. In contrast, RG flow is a first order differential equation where the beta functions point in the direction of the flow. And (not only because of the c-theorem) there is a preferred direction of time (downhill from UV to IR).
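To see the qualitative difference in a toy model (my own illustration, entirely unrelated to the actual worldsheet computation): compare pure gradient flow, x' = -V'(x), with damped second-order motion, x'' + gamma x' = -V'(x), in the same potential. The potential and parameters below are chosen purely for illustration.

```python
import numpy as np

def Vprime(x):
    # V(x) = x**4/4 + x**3/3 - x**2: a shallow minimum near x = 1,
    # a deeper one near x = -2, separated by a barrier at x = 0.
    return x**3 + x**2 - 2.0 * x

def gradient_flow(x0, dt=1e-3, steps=200_000):
    # First-order "RG-like" dynamics: always moves downhill,
    # stops at the nearest minimum.
    x = x0
    for _ in range(steps):
        x -= dt * Vprime(x)
    return x

def damped_motion(x0, gamma, dt=1e-3, steps=400_000):
    # Second-order dynamics with friction gamma (semi-implicit Euler).
    x, v = x0, 0.0
    for _ in range(steps):
        v += dt * (-Vprime(x) - gamma * v)
        x += dt * v
    return x

x0 = 2.0
print("gradient flow ends near", round(gradient_flow(x0), 3))        # ~ +1
print("damped motion ends near", round(damped_motion(x0, 0.25), 3))  # can overshoot to ~ -2
```

Which minimum the second-order trajectory ends up in depends on the damping; for strong damping it creeps downhill like the gradient flow, and that is exactly where the dilaton comes in below.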
As is shown in the paper, this general scheme is in fact true. And since we have to include the dilaton anyway, this also gets its equation of motion and (like the Hubble term in Friedmann-Robertson-Walker cosmology) provides a damping term for the space-time fields. So, at least for large damping, the space-time theory is also effectively first order but at small (or negative, which is possible and of course needed for time reversal) damping the dynamics is of different character. What the two descriptions agree on is the set of possible end-points of this tachyon condensation, but in general the dynamics is different and because second order equations can overshoot at minima, the proper space-time dynamics can end up in a different minimum than predicted by RG flow. All this (with all details and nice calculations) is in the paper and I can strongly recommend reading it! Monday, October 24, 2005 Hamburg summary Friday, October 21, 2005 No news is good news Wednesday, October 19, 2005 More conference reporting I just found that the weekly quality paper "Die Zeit" has an interview with Smolin on the occasion of Loops '05. Probably no need to learn German for this, nothing new: String theory doesn't predict anything because there are 10^500 String theories (they lost the ^ somewhere), Peter W. can tell you more about this, stringy people have lost contact with experiment, LQG people do better because they predict a violation of the relativistic dispersion relation for light (is this due to the 3+1 split of their canonical formalism?) and Einstein would have been suppressed today because he was an independent thinker and not part of the (quantum mechanics) mainstream. I was told, "Frankfurter Allgemeine Sonntagszeitung" also had a report on Loops '05. On their webpage, the article costs 1.50 Euros and I am reluctant to pay this. Maybe one of my readers has a copy and can post/fax it to me? Tomorrow, I will be going to Hamburg where for three days they are celebrating the opening of the centre for mathematical physics. This is a joint effort of people from the physics (Louis, Fredenhagen sen., Samtleben, Kuckert) and math (Schweigert) departments of Hamburg University and the DESY theory group (Schomerus, Teschner). This is only one hour away and I am really looking forward to having a stringy critical mass coming together in northern Germany. Speakers of the opening colloquium include Dijkgraaf (hopefully he will make it this time), Hitchin, Zamolodchikov, Nekrassov, Cardy and others. If there is some reasonable network connection, there will be some more live blogging. Urs is now a postdoc in Christoph Schweigert's group, and I assume he will be online as well. Monday, October 17, 2005 Classical limit of mathematics The most interesting periodic event at IUB is the mathematics colloquium as the physicists don't manage to get enough people together for a regular series. Today, we had G. Litvinov who introduced us to idempotent mathematics. The idea is to build upon the group homomorphism x -> h ln(x) for some positive number h that maps the positive reals and multiplication to the reals with addition. So we can call addition in R "multiplication" in terms of the preimage, and we can also define a new "addition" in terms of the preimage. The interesting thing is what becomes of this when we take the "classical limit" of h -> 0: Then "addition" is nothing but the maximum and this "addition" is idempotent: a "+" a = a.
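In formulas (my rendering of the construction): pulling ordinary multiplication and addition through the map x -> h ln(x) gives a "multiplication" a (x) b = a + b and an "addition" a (+) b = h ln(e^(a/h) + e^(b/h)), and as h -> 0+ the latter tends to max(a, b). A quick numerical check (a sketch using numpy's numerically stable logaddexp):

```python
import numpy as np

def tropical_sum(a, b, h):
    # "Addition" pulled back through x -> h*ln(x): h*ln(exp(a/h) + exp(b/h)).
    # np.logaddexp keeps this numerically stable even for small h.
    return h * np.logaddexp(a / h, b / h)

a, b = 2.0, 5.0
for h in [1.0, 0.1, 0.01, 0.001]:
    print(h, tropical_sum(a, b, h))   # approaches max(a, b) = 5.0 as h -> 0
print(tropical_sum(a, a, 1e-3))       # idempotency: a "+" a -> a (up to h*ln 2)
```

The printed values approach 5.0, and the last line shows that a "+" a differs from a only by a correction of order h ln 2, which vanishes in the limit.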
This is an example of an idempotent semiring and in fact it is the generic one: Besides idempotency, it satisfies many of the usual laws: associativity, the distributive law, commutativity. Thus you can carry over much of the usual stuff you can do with fields to this extreme limit. Other examples of this structure are Boolean algebras or compact convex sets where "multiplication" is the usual sum of sets and "addition" is the convex hull (obviously, the above example is a special case). Another example is polynomials with non-negative coefficients, and for these the degree turns out to be a homomorphism! The obvious generalization of the integral is the supremum and the Fourier transform becomes the Legendre transform (you have to work out what the characters of the addition are!). This theory has many applications; it seems especially strong for optimization problems. But you can also apply this limiting procedure to algebraic varieties, under which they are turned into Newton polytopes. I enjoyed this seminar especially because it made clear that many constructions can be thought of as extreme limits of some even more common, linear constructions.

But now for something completely different: When I came back to my computer, I had received the following email:

Dear Mr. Helling
I would greatly appreciate your response. Please what is interrelation mutually fractal attractor of the black hole condensation, Bott spectrum of the homotopy groups and moduli space of the nonassociative geometry? Thank you very much obliged.

I have no idea what he is talking about but maybe one of my readers has. I googled for a passage from the question and found that exactly the same question has also been posted in the comment sections of the Coffee Table and Lubos's blog.

Thursday, October 13, 2005
How to read blogs and how not to write blogs

I usually do not have articles here saying basically "check out these articles in other blogs, I liked them". Basically this is because I think if you, dear reader, have found this blog you will be able to find others that you like as well, so no need for me to point you around the blogosphere. And, honestly, I don't think too many people read this blog anyway. I don't have access to the server's log files and I do not have a counter (I must say, I hate counters because often they delay loading a page a lot). But it happens more and more often that I meet somebody in person and she/he tells me that she/he has read this or that in atdotde. So in the end, I might not write for the big bit bucket. My reporting on Loops '05 was picked up in other places so that might have brought even more readers to my little place. I even got an email from a student in China telling me that he cannot read atdotde anymore (as well as, for example, Lubos' Reference Frame). Unfortunately, I had to tell him that this was probably due to the fact that his government decided to block it from the Chinese part of the net, because blogs are way too subversive.

So as a little service for some of my readers who do not already know, here is a hint on how to read blogs: Of course you can, if you have some minutes of boredom, type your friends' names into google and surf to their blogs every now and then. That is fine. Maybe at some point, you want to find out what's new in all those blogs. So you go through your bookmarks (favourites in MS speak) and check if you've seen everything that you find there. But that is cyber stone age! What you want is a "news aggregator".
This is a little program that does this for you periodically and informs you about the new articles it found. You just have to tell it where to look. This comes in the form of a URL called the "RSS feed". Often you find little icons in the sidebar of the blogs that link to that URL. In others, like this one, you have to guess. For all the blogs here it is of the form URL_of_blog/atom.xml, and that is also how you find the feed for atdotde. You have to tell your news aggregator about this URL. In the simplest form, this is just your web browser. Firefox calls it "live bookmarks": You open the "manage bookmarks" window and select "new live bookmark" from the menu. I use an aggregator called liferea, which even opens a little window once it finds something new, but there are many others.

Coming back to the theme of the beginning, I will for once tell you which blogs I monitor (in no particular order):

• Amelies Welt (in German) I know Amelie from a mailing list and like the variety of topics she writes about.
• BildBlog They set straight the 'news' from the biggest German tabloid. And it's funny.
• Bitch, PhD Academic and feminist blogger. I learned a lot.
• String Coffee Table Where you can chat about strings while sipping a coffee.
• Musings The first blog of a physicist I came across. Jacques still sets standards.
• Die Schreibmaschine Anna is a friend from Cambridge, currently not very active because...
• Broken Ankle Diary ...a few weeks ago she broke her ankle.
• Lubos Motl's Reference Frame Strong opinions on physics, global warming, politics.
• hep-th The arXiv now also comes in this form, but still I prefer to read it in the classic way.
• Jochen Weller One of the Quantum Diaries. Jochen was in Cambridge while I was there.
• Preposterous Universe Sean's blog is dead now because he is part of...
• Cosmic Variance Currently my best loved blog. Physics and everything else.
• Not Even Wrong Peter Woit has one point of criticism of string theory that he keeps repeating. But he is a very reasonable guy.
• Daily ACK I met Al on a dive trip in the English Channel. Some astronomy and some Apple and Perl news.
• Physics Comments Sounded like a good idea but not really working, at least in the hep-th area.

Have fun! Now I should send trackback pings. This is such a pain with ... Ah, I nearly forgot: This article about how academic blogs can hurt your job hunting scares me a lot! (I admit, I found it on Cosmic Variance.)

Tuesday, October 11, 2005
IUB is noble

Coming back from Loops '05 I find a note in my mailbox that the International University Bremen now has an Ig Nobel Laureate amongst its faculty: V. Benno Meyer-Rochow has received the prize in fluid dynamics for his work on the pressure produced when penguins pooh.

More news on the others

Today, I give it a new shot. And the plenary talks are promising. Currently, John Baez has been giving a nice overview of various incarnations of spin foam models (he listed Feynman diagrams, lattice gauge theory and topological strings among them, although I am under the impression that on the last point he is misguided, as topological strings in fact take into account the complex/holomorphic structure of the background).
However, starting from the point "what kind of matter do we have to include to have a nice continuum limit" he digressed via a Witten anecdote (who, when asked whether he thinks LQG is right, said that he hoped not, because he hoped (in the 90s) that there is only one unique matter content (i.e. strings) consistent with quantised gravity) to making fun of string theorists, asking them to do their homework and check the 10^whatever vacua in the landscape. The next speaker will be Dijkgraaf, who hopefully will do a better job than Theissen did yesterday in presenting that stringy people have interesting, deep stuff to say about physics. Unfortunately, electro-magnetism lectures back in Bremen require me to run off at 11:00 and catch the train, so I will not be able to follow the rest of the conference closely.

Baez got back on track with a nice discussion of how Lorentzian triangulations fit into the scheme of things and what role the prescribed time slicing might have for large scale physics (introducing further terms than Lambda and R in the LEEA). He also showed what is supposed to be a spin foam version of it.

Oh no. They have grad students and young postdocs as chairpersons. Bianca Dittrich just announced "Our next speaker is Robbert Dijkgraaf" and nothing happened. It seems Dijkgraaf didn't make it here on the early morning plane. Now, I can fulfil the anonymous reader's wish and report on the presentation of Laurent Friedel, the next speaker. Before I power down my laptop: Friedel looks at effects of quantum gravity on low energy particle actions. In order to do that he couples matter to the Ponzano-Regge model and then will probably try to integrate out the gravitational degrees of freedom.

Monday, October 10, 2005
The Others

I sneaked into the Loops '05 Conference at the AEI at Potsdam. So, I will be able to give you live blogging for today and tomorrow. After some remarks by Nicolai and Thiemann and the usual impedance mismatch between laptops and projectors, Carlo Rovelli has started the first talk. He is on slide 2, and is still reviewing recent and not so recent developments of LQG.

Rovelli talked about his paper on the graviton propagator. If you like, he wants to recover Newton's law from his model. The obvious problem of course is that any propagator g(x,y) cannot depend on x or y if everything is diffeomorphism invariant (at least in these people's logic). So he also had to include a dependence on a box around the 'detector' and introduce the metric on the box as boundary values. He seems to get out of this problem by in fact using a relational notion, as you would of course have to in any interesting background independent theory (measure not with respect to coordinates but with respect to physical rulers). Then there was a technical part which I didn't quite get and in the end he had something like g(x,y)=1/|x-y|^2 on his slide. This could be interesting. I will download the paper and read it on the train.

Next is Smolin. Again computer problems, this time causing an unscheduled coffee break. Smolin started out talking about problems of background independent approaches, including unification and the nature of anomalies. Then, however, he decided to focus on another one: How does macroscopic causality arise? He doesn't really know, but looked at some simple models where macro causality is usually destroyed by some non-local edges (like in a small world network).
Surprisingly, he claims, these non-local connections do not change macroscopic physics (critical behaviour) a lot and thus they are not really detectable. Even more, these non-local "defects" could, according to Smolin, play the role of matter. Then he showed another model where, instead of a spin network, the physics is in twisted braided ribbon graphs. There, he called some configurations "quarks" and assigned the usual quantum numbers and ribbon transformations for C, P and T. Then it got even better: the next slides mentioned the problem of small power in the low-l modes of the CMB ("scales larger than 1/Lambda"), the Pioneer anomaly and the Tully-Fisher relation that is the empirical observation behind MOND. I have no idea what his theory has to do with all these fancy open problems. Stefan Theissen next to me makes interesting noises of astonishment.

Next speaker is John Barrett. This talk sounds very solid. He presents a 3+0 dimensional model which to me looks much like a variant of a spin network (a graph with spin labels and certain weight factors for vertices, links, and tetrahedra). He can do Feynman graph like calculations in this model. Further plus: A native speaker of British English.

Last speaker of the forenoon is Stefan Theissen. He tries to explain to the LQG crowd how gravity arises from string theory. Many had left before he started and so far he has only presented string theory as one could have done already 20 years ago: Einstein's equation as a consistency requirement for the sigma model and scattering amplitudes producing the vertices of the Einstein-Hilbert action. Solid but not really exciting.

In the afternoon, there are parallel sessions. I chose the "seminar room". Here, Markopoulou presents her idea that dynamics in some (quantum gravity?) theory has formal similarities to quantum information processing. In some Ising type model she looks at the block spin transformation and reformulates the fact that low energy fields only talk to the block spins and not to the high frequency fields. With some fancy mathematical machinery, she relates this to error correction where the high frequency fields play the role of noise.

Next is Olaf Dreyer. Very strange. He proposes that quantum mechanics should be deterministic and non-linear. Most of what he says are philosophical statements (and I do not agree with all of them, by far) but what seems to be at the core of it is that he does not want macroscopic states that are superpositions of elementary states. I thought that was solved by decoherence long ago... At least Rovelli asks "[long pause] maybe I didn't understand it. you make very general statements. But where is the physics?"

The next speaker is Wang who expands a bit on what Smolin said in the morning. It's really about Small World Networks (TM). If you have such a network with gauge flux along the edges then in fact a non-local random link looks locally like a charged particle. This is just like in Wheeler's geometrodynamics. The bulk of the talk is about the Ising model on a lattice with a small number of additional random links. The upshot is that the critical temperature and the heat capacity as well as the correlations at criticality do not depend much on the existence of the additional random links.

Martinetti reminds us that time evolution might have a connection with temperature. Concretely, he wants to take the Tomita-Takesaki unitary evolution as time evolution and build a KMS state out of it.
There is a version of the Unruh effect in the language of KMS states and Martinetti works out the correction to the Unruh temperature from the fact that the observer might have a finite lifetime. This correction turns out to be so small that, by uncertainty, one would have to measure longer than the lifetime to detect the difference in temperature.

I stopped reporting on the afternoon talks as I did not get much out of those. Currently, Rüdiger Vaas, a science journalist, is the last speaker of the day. He at least admits that his talk is on philosophy rather than physics. His topic is the philosophical foundations of big bang physics.

Tuesday, September 20, 2005
Faster than light or not

I don't know about the rest of the world, but here in Germany Prof. Günter Nimtz is (in)famous for his display experiments that he claims show that quantum mechanical tunneling happens instantaneously rather than according to Einstein causality. In the past, he got a lot of publicity for that and according to Heise online he has at least put out a new press release.

All these experiments are similar: First of all, he is not doing any quantum mechanical experiments but uses the fact that the Schrödinger equation and the wave equation share similarities. And as we know, in vacuum, Maxwell's equations imply the wave equation, so he uses (classical) microwaves as they are much easier to produce than the matter waves of quantum mechanics. So what he does is to send a pulse of these microwaves through a region where "classically" the waves are forbidden, meaning that they do not oscillate but decay exponentially. Typically this is a waveguide with diameter smaller than the wavelength. Then he measures what comes out at the other side of the waveguide. This is another pulse of microwaves which is of course much weaker and so needs to be amplified. Then he measures the time difference between the maximum of the weaker pulse and the maximum of the full pulse when the obstruction is removed. What he finds is that the weak pulse has its maximum earlier than the unobstructed pulse, and he interprets this as the pulse having travelled through the obstruction at a speed greater than the speed of light.

Anybody with a decent education will of course immediately object that the microwaves propagate (even in the waveguide) according to Maxwell's equations, which have special relativity built in. Thus, unless you show that Maxwell's equations do not hold anymore (which Nimtz of course does not claim) you will never be able to violate Einstein causality. For people who are less susceptible to such formal arguments, I have written a little program that demonstrates what is going on. The result of this program is this little movie.

The program simulates the free 2+1 dimensional scalar field (of course again obeying the wave equation) with Dirichlet boundary conditions in a certain box that is similar to the waveguide: At first, the field is zero everywhere in the strip-like domain. Then the field on the upper boundary starts to oscillate with a sine wave and indeed the field propagates into the strip. The frequency is chosen such that the wave can in fact propagate in the strip. (These are frames 10, 100, and 130 of the movie, further down are 170, 210, and 290.) About in the middle, the strip narrows like in the waveguide. You can see that the blob of field in fact enters the narrower region but dies down pretty quickly.
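If you want to play with this yourself, the scheme is not hard to reproduce. Here is a minimal sketch of the kind of leapfrog integration such a simulation can use; the grid size, driving frequency and geometry below are illustrative guesses on my part, not the values behind the movie:

```python
import numpy as np

# Minimal leapfrog integration of the 2+1 dimensional wave equation on a strip
# with Dirichlet boundary conditions and a narrow "waveguide" section in the
# middle.  All numbers are illustrative, not the ones used for the movie.
nx, ny, nt = 60, 200, 400          # points across / along the strip, time steps
c, dx, dt = 1.0, 1.0, 0.4          # wave speed, grid spacing, time step (c*dt < dx/sqrt(2))
omega = 2 * np.pi / 25.0           # driving frequency: wavelength 25 fits the wide part

inside = np.ones((ny, nx), dtype=bool)          # where the field may be nonzero
inside[:, 0] = inside[:, -1] = False            # side walls of the strip
inside[-1, :] = False                           # far end of the strip
inside[80:120, : nx // 2 - 3] = False           # narrowing: only a channel about
inside[80:120, nx // 2 + 3 :] = False           # 6 points wide survives in the middle

phi_old = np.zeros((ny, nx))
phi = np.zeros((ny, nx))
for n in range(nt):
    lap = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
           np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4 * phi) / dx**2
    phi_new = 2 * phi - phi_old + (c * dt) ** 2 * lap
    phi_new[~inside] = 0.0                      # enforce the Dirichlet conditions
    phi_new[0, 1:-1] = np.sin(omega * n * dt)   # drive the upper boundary with a sine
    phi_old, phi = phi, phi_new

# phi now holds the last frame; amplifying the part beyond the narrowing by ~1000
# shows the strongly suppressed pulse described in the post.
```

The wide part of the strip is broader than half a wavelength, so the wave propagates there; the channel in the middle is narrower than that, so the field can only decay exponentially inside it, which is exactly the "forbidden region" of the experiment.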
In order to see anything, in the display (like for Nimtz) I amplify the field in the lower half of the picture by a factor of 1000. After the obstruction ends, the field again propagates as in the upper bit. What this movie definitely shows is that the front of the wave (and this is what you would use to transmit any information) everywhere travels at the same speed (that of light). All that happens is that the narrow bit acts like a high-pass filter: What comes out undisturbed is in fact just the first bit of the pulse, which more or less by accident has the same shape as a scaled down version of the original pulse. So if you are comparing the timing of the maxima you are comparing different things. Rather, the proper thing to compare would be the time when the field first gets above a certain level, one that is actually reached by the weakened pulse. Then you would find that the speed of propagation is the same independent of the obstruction being there or not.

Update: Links updated DAMTP --> IUB

Friday, September 16, 2005
Negative votes and conflicting criteria

Yesterday, Matthijs Bogaards and Dierk Schleicher ran a session on the electoral system for the upcoming general election we are going to have on Sunday in Germany. I had thought I knew how it works, but I was proven wrong. Before, I was aware that there is something like Arrow's impossibility theorem, which states that there is a certain list of criteria your electoral system is supposed to fulfil but which cannot all hold at the same time for any implementation. What typically happens are cyclic preferences (there is a majority for A over B and one for B over C and one for C over A), but I thought all this is mostly academic and does not apply to real elections. I was proven wrong and there is a real chance that there is a paradoxical situation coming up.

Before explaining the actual problem, I should explain some of the background. The system in Germany is quite complicated because it tries to accommodate a number of principles: First, after the war, the British made sure the system contains some component of constituency vote: Each local constituency (electoral district for you Americans) should send one candidate to parliament who is in principle directly responsible to the voters in that district, so voters have something like "their representative". Second, proportional vote, that is the number of seats for a party should reflect the percentage of votes for that party in the popular vote. Third, Germany is a federal republic, so the sixteen federal states should each send their own representatives. Finally, there are some practical considerations like the number of seats in parliament should be roughly 600 and you shouldn't need a PhD in math and political science to understand your ballot.

So this is how it works. Actually, it's slightly more complicated but that shall not bother us here. And I am not going into the problem of how to deal with rounding errors (you can of course only have integer seats), which brings with it its own paradoxes. What I am going to cover is how to deal with the fact that the number of seats has to be non-negative: The ballot has two columns: In the first, you vote for a candidate from your constituency (who is nominated by his or her party). In the second, you vote for a party for the proportional vote. Each voter makes one cross in each column, one for a candidate from the constituency and one for a party in the proportional vote.
There are half as many constituencies as there are seats in parliament and these are filled immediately according to the majority vote of the first column. The second step is to count the votes in the second column. If a party neither gets more than five percent of those nor wins three or more constituencies, its votes are dropped. The rest is used to work out how many of the total of 600 seats each of the parties gets.

Now comes the federal component: Let's consider party A and assume the popular vote says they should get 100 seats. We have to determine how these 100 seats are distributed between the federal states. This is again done proportionally: Party A in federal state (i) gets that percentage of the 100 seats that reflects the percentage of the votes for party A from state (i) out of the total votes for party A in all of Germany. Let's say this is 10. Further assume that A has won 6 constituencies in federal state (i). Then, in addition to these 6 candidates from the constituencies, the top four candidates from party A's list for state (i) are sent to Berlin. So far, everything is great: Each constituency has "their representative" and the total number of seats for each party is proportional to its share of the popular vote.

Still, there is a problem: The two votes in the two columns are independent. And as the constituencies are determined by majority vote, except in a few special cases (Berlin Kreuzberg, where I used to live before moving to Cambridge, being one with the only constituency winner from the green party) it does not make much sense to vote for a constituency candidate who is not nominated by one of the two big parties. Any other vote would likely be irrelevant and effectively your only choice is between the candidate of the SPD or the CDU. Because of this, it can (and in fact often does for the two big parties) happen that a party wins more constituencies in a federal state than it is entitled to for that state according to the popular vote. In that case (because there are no negative numbers of candidates from the list to balance this) the rule is that all the constituency winners go to parliament and none from the list of that party. The parliament is enlarged for these "excess mandates". So that party gets more seats than its proportion of the popular vote.

This obviously violates the principle of proportional elections, but it gets worse: If that happens in a federal state for party A, you can hurt this party by voting for it: Take the same numbers as above but assume A has won 11 constituencies in (i). If there are no further excess mandates, in the end A gets 101 seats in the enlarged parliament of 601 seats. Now, assume A gets an additional proportional vote. It is not impossible that this does not increase A's total share of 100 seats for all of Germany but increases the proportional share for A's candidates in federal state (i) from 10 to 11. This does not change anything for the representatives from (i), still the 11 constituency candidates go to Berlin, but there is no excess mandate anymore. Thus, overall, A sends only 100 representatives to a parliament of 600, one less than without the additional vote! As a result, in that situation the vote for A has a negative weight: It decreases A's share in the parliament. Usually, this is not so much of a problem, because the weights of votes depend on what other people have voted (which you do not know when you fill out your ballot) and chances are much higher that your vote has positive weight.
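To see the arithmetic of the example at a glance, here is a toy calculation (my own sketch; it only tracks party A and ignores rounding and the actual apportionment procedure):

```python
def seats_for_A(proportional_total, state_share, constituencies_won, house_size=600):
    # Toy model of the rule described above, tracking only party A.
    # Constituency winners always get in; list seats top up the state share,
    # but never below zero -- that is where "excess mandates" come from.
    list_seats = max(state_share - constituencies_won, 0)
    excess = max(constituencies_won - state_share, 0)
    seats_other_states = proportional_total - state_share
    return seats_other_states + constituencies_won + list_seats, house_size + excess

# Entitled to 10 seats in state (i), but 11 constituencies won there:
print(seats_for_A(100, 10, 11))   # -> (101, 601): one excess mandate
# The extra vote shifts the state share from 10 to 11, nothing else changes:
print(seats_for_A(100, 11, 11))   # -> (100, 600): one seat less for A
```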
So it is still safe to vote for your favourite party. However, this year there is one constituency in Dresden in the federal state of Saxony where one of the candidates died two weeks before election day. To ensure equal chances in campaigning, the election in that constituency has been postponed by two weeks. This means voters there will know the result from the rest of the country. Now, Saxony is known to be quite conservative, so it is not unlikely that the CDU will have excess mandates there. And this might just yield the above situation: Voters from Dresden might hurt the CDU by voting for them in the popular vote, and they would know if that were the case. It would still be democratic in a sense, it's just that if voters there prefer CDU or FDP they should vote for FDP, and if they prefer SPD or the Greens they should vote for CDU. Still, it's not clear if you can explain that to voters in less than two weeks... I find this quite scary, especially since all polls predict this election to be extremely close and two very different outcomes are within one standard deviation.

If you are interested in alternative voting systems, Wikipedia is a good starting point. There are many different ones and because of the above mentioned theorem they all have at least one drawback. Yesterday, there was also a brief discussion of whether one should have a system that allows fewer or more of the small parties in parliament. There are of course the usual arguments of stability versus better representation of minorities. But there is another argument against a stable two party system that is not mentioned often: This is due to the fact that parties can actually change their policies to please more voters. If you assume that political orientation is well represented by a one-dimensional scale (usually called left-right), then the situation of ice cream salesmen on a beach could occur: There is a beach of 4 km with two competing people selling ice cream. Where will they stand? For the customers it would be best if they each stand 1 km from the two ends of the beach, so nobody would have to walk more than 1 km to buy an ice cream and the average walking distance is half a km. However, this is an unstable situation, as there is an incentive for each salesman to move further to the middle of the beach to increase the number of customers to whom he is closer than his competitor. So, in the end, both will meet in the middle of the beach and customers have to walk up to 2 km with an average distance of 1 km. Plus, if that happens with two parties in the political spectrum they will end up with indistinguishable political programs and as a voter you don't have a real choice anymore. You could argue that this has already taken place in the USA or Switzerland (there for other reasons) but that would be unfair to the Democrats.

I should have had many more entries here about politics and the election, like my role models on the other side of the Atlantic. I don't know why these never materialised (virtualised?). So, I have to be brief: If you can vote on Sunday, think of where the different parties actually have different plans (concrete, rather than abstract "less unemployment" or "more sunshine") and what the current government has done and whether you would like to keep it that way (I just mention the war in Iraq and foreign policy, nuclear power, organic food as a mass market, immigration policy, tax on waste of energy, gay marriage, student fees, reform of academic jobs, renewable energy). Then your vote should be obvious. Mine is.
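As a footnote to the ice cream argument above: the drift to the middle is easy to reproduce in a toy simulation where each salesman in turn moves to whatever spot wins him the most customers (my own illustration, with made-up step sizes):

```python
# Two ice cream salesmen on a 4 km beach; each customer walks to the nearer one.
# In turns, each salesman greedily picks the spot that maximizes his customers.
beach = [x / 50.0 for x in range(201)]            # one customer every 20 m

def customers(my_pos, other_pos):
    return sum(1 for x in beach if abs(x - my_pos) < abs(x - other_pos))

pos = [1.0, 3.0]                                  # the socially optimal start
for _ in range(60):                               # let them leapfrog for a while
    for i in (0, 1):
        pos[i] = max(beach, key=lambda p: customers(p, pos[1 - i]))
print(pos)                                        # both end up around 2.0, the middle
```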
The election is over and everybody is even more confused than before. As the obvious choices for coalitions do not have a majority, one has to look at the several colourful alternatives and the next few weeks will show us which of the several impossibilities will actually happen. What will definitely happen is that in Dresden votes for the CDU will have negative weight (linked page in German with an Excel sheet for your own speculations). So, Dresdeners, vote for CDU if you want to hurt them (and you cannot convince 90% of the inhabitants to vote for the SPD).

Wednesday, September 14, 2005
Natural scales

When I talk to non-specialists and mention that the Planck scale is where quantum gravity is likely to become relevant, sometimes people get suspicious about this type of argument. If I have time, I explain that to probe smaller length details I would need so much CM energy that I create a black hole and thus still cannot resolve it. However, if I have less time, I just say: Look, it's relativistic, gravity and quantum, so it's likely that c, G and h play a role. Turn those into a length scale and there is the Planck scale. If they do not believe this gives a good estimate, I ask them to guess the size of an atom: Those are quantum objects, so h is likely to appear, the binding is electromagnetic, so e (in SI units in the combination e^2/4 pi epsilon_0) has to play a role, and it comes out of the dynamics of electrons, so m, the electron mass, is likely to feature. Turn this into a length and you get the Bohr radius.

Of course, like all short arguments, this has a flaw: there is a dimensionless quantity around that could spoil dimensional arguments: alpha, the fine-structure constant. So you also need to say that the atom is non-relativistic, so c is not allowed to appear. You could similarly ask for a scale that is independent of the electric charge, and there it is: Multiply the Bohr radius by alpha and you get the electron Compton wavelength h/mc. You could as well ask for a classical scale which should be independent of h: Just multiply by another power of alpha and you get the classical electron radius e^2/(4 pi epsilon_0 m c^2). At the moment, however, I cannot think of a real physical problem where this is the characteristic scale (NB alpha is roughly 1/137, so each scale is two orders of magnitude smaller than the previous).

Update: Searching Google for "classical electron radius" points to scienceworld and wikipedia, both calling it the "Compton radius". Still, there is a difference of an alpha between the Compton wavelength and the Compton radius.

Thursday, September 08, 2005

Reading through the arXiv's old news items I became aware of hep-th/9203227 for which the abstract reads

\Paper: 9203227
From: (J. B. Harvey)
Date: Wed 1 Apr 1992 00:25 CST 1992

A solvable string theory in four dimensions, by J. Harvey, G. Moore, N. Seiberg, and A. Strominger, 30 pp
\We construct a new class of exactly solvable string theories by generalizing the heterotic construction to connect a left-moving non-compact Lorentzian coset algebra with a right-moving supersymmetric Euclidean coset algebra. These theories have no spacetime supersymmetry, and a generalized set of anomaly constraints allows only a model with four spacetime dimensions, low energy gauge groups SU(3) and spontaneously broken SU(2)xU(1), and three families of quarks and leptons.
The model has a complex dilaton whose radial mode is automatically eaten in a Higgs-like solution to the cosmological constant problem, while its angular mode survives to solve the strong CP problem at low energy. By adroit use of the theory of parabolic cylinder functions, we calculate the mass spectrum of this model to all orders in the string loop expansion. The results are within 5% of measured values, with the discrepancy attributable to experimental error. We predict a top quark mass of $176 \pm 5$ GeV, and no physical Higgs particle in the spectrum.

It's quite old and there are some technical problems downloading it.

Tuesday, September 06, 2005
Local pancake and axis of evil

This would then be an explanation of this axis of evil.

Monday, August 29, 2005
My not so humble opinions on text books

LogVyper and summer holiday

Tuesday, July 26, 2005
Bottom line on patents

Now that the EU parliament has stopped the legislation on software patents, it seems time to summarize what we have learned: The whole problem arises because it is much easier to copy information than to produce it by other means. On the other hand, what's great about information is that you still have it if you give it to somebody else (this is the idea behind open source). So, there are patents in the first place because you do not want to disadvantage companies that do expensive research and development compared to companies that just save these costs by copying the results of this R&D. The state provides patent facilities because R&D is in its interest. The owner of the patent, on the other hand, should not use it to block progress and competition in the field. He should therefore sell licenses to the patent that reflect the R&D costs. Otherwise patent law would promote large companies and monopolies, as these are more likely to be able to afford the costs of the administrative overhead of filing a patent.

Therefore, in an ideal world the patent holder should be forced to sell licenses for a fair price that is at most some specific fraction of the realistic costs of the R&D that led to the patent (and not the commercial value of the products derived from the patent). Furthermore, the fraction could depend geometrically on the number of licenses sold so far, such that the 100th license to an idea is cheaper than the first and so on (with the idea that from license fees you could at most asymptotically gain a fixed multiple of your R&D investment). This system would still promote R&D while stopping companies from exploiting their patents. Furthermore it would prevent trivial patents, as those require hardly any R&D and are therefore cheap (probably, you should not be able to patent an idea for which the R&D costs were not significantly higher than the administrative costs of obtaining the patent). Unfortunately, in the real world it is hard to measure the costs of R&D that are necessary to come up with a certain idea.
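Leaving that measurement problem aside, the geometrically decreasing fee itself is easy to make concrete. A purely illustrative sketch (all numbers made up):

```python
# Illustrative pricing rule: the n-th license costs a fixed fraction of the
# R&D costs, decreasing geometrically, so that total fee income can never
# exceed a fixed multiple of the R&D investment.
rd_costs = 1_000_000.0     # what developing the patented idea cost
first_fraction = 0.10      # the first license costs 10% of the R&D costs
decay = 0.95               # every further license is 5% cheaper than the previous one

def license_price(n):      # n = 1, 2, 3, ...
    return rd_costs * first_fraction * decay ** (n - 1)

total_income = sum(license_price(n) for n in range(1, 1001))
cap = rd_costs * first_fraction / (1 - decay)   # geometric series: at most 2x R&D here
print(license_price(1), license_price(100), round(total_income), cap)
```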
Cosmos-1 was a satellite that was supposed to use huge mirror sails to catch the solar radiation for propulsion. Al also mentions a paper that claims that this whole principle cannot work, together with a rebuttal. In physics 101, we've all seen the light mill that demonstrates that the photons that bounce off the reflecting sides of the panels transfer their momentum to the wheel. So this shows that you can use radiation to move things. Well, does it? Gold argues that the second law of thermodynamics is in the way of using this effectively.

So what's going on? His point is that once the radiation field and the mirrors are in thermal equilibrium, the mirror would emit photons to both sides and there is no net flux of momentum. On general grounds, you should not be able to extract mechanical energy from heat in a world where everything has the same temperature. The reason that the light mill works is really that the mill is much colder than the radiation. So, it seems to me that the real question (if Gold is right, which I tend to think, but as I said above, it's hot and I cannot really convince myself that at equilibrium the emission and absorption of photons on both sides balance) is how long it takes for the sails to heat up. If you want to achieve a significant amount of acceleration they should be very light, which on the other hand means the absolute heat capacity is small. At least, the rebuttal (it is written by an engineer of the project) is so vague that I don't think he really understood Gold's argument. But it seems that some physics in the earlier stages of the flight was ill understood, as Cosmos-1 did not make it to orbit...

Monday, June 20, 2005
The future of publishing

Back in Bremen and after finishing my tax declaration for 2004, great wakering provided me with an essay by Michael Peskin on the future of scientific publication. Most of it contains the widely accepted arguments about how the arXiv has revolutionized high energy physics, but one aspect was new to me: He proposes that the refereeing process has to be organized by professionals and that roughly 30 percent of the costs of an article in PRL come from this stage of the publishing process. He foresees that this service will always need to be paid for, but his business model sounds interesting: As page charges don't work, libraries should pay a sum (depending on the size of the institution but not on the number of papers) to these publishers, which then accept papers from authors affiliated with those institutions for refereeing. This would still require a brave move to get this going, but this would have to come from the libraries. And libraries are well aware of the current crisis in the business (PRL is incredibly cheap (2950 US$) compared to NPB which costs 15460 US$ per year for institutions).

Once we are in the process of reforming the publishing process, I think we should also adopt an idea that I learnt from Vijay Balasubramanian: If a paper gets accepted, the name of the referee should also be made public. This would still protect the referee who rejects a paper but would make the referee accountable and more responsible for accepting any nonsense.

Friday, June 17, 2005
Still more phenomenology

Internet connectivity is worse than ever, so Jan Plefka and I had to resort to an internet cafe for lunch to get online. So I will just give a brief report of what happened since my last report.
First there was Gordon Kane who urged everybody to think about how to extract physical data from the numbers that are going to come out of the LHC. He claimed that one should not expect (easily obtained) useful numbers on susy except the fact that it exists. In particular, it will be nearly impossible to deduce Lagrangian parameters (masses etc.) for the susy particles, as there are not enough independent observables at the LHC to completely determine those. Still, he points out that it will be important to be trained to understand the message that our experimental friends tell us. To this end, there will be the LHC Olympics, where Monte Carlo data of the type that will come out of the experiment will be provided, with some interesting physics beyond the standard model hidden in it, and there will be a competition to figure out what's going on.

Today, Dvali was the first speaker. He presented his model that amounts to an IR modification of gravity (of mass term type) that is just beyond current observational limits from solar system observations and that would allow for a fit of cosmological data without dark energy. One realization of that modification would be a 5D brane world scenario with a 5D Einstein-Hilbert action and a 4D EH action for the pull-back of the metric.

Finally, there was Paul Langacker who explained why it is hard to get seesaw type neutrinos from heterotic Z_3 orbifolds. As everybody knows, in the usual scenario neutrino masses arise from physics around some high energy (GUT, Planck?) scale. Therefore neutrino physics might be the most direct source of ultra high energy physics information and one should seriously try to obtain it from string constructions. According to Langacker, this has so far not been possible (intersecting brane worlds typically preserve lepton number and are thus incompatible with Majorana masses, and he showed that none of the models in the class he studied had a usable neutrino sector).

Thursday, June 16, 2005
More Phenomenology

Now we are at day three of the string phenomenology conference and it gets better by the day: Yesterday, the overall theme was flux vacua and brane constructions. These classes of models have the great advantage over heterotic constructions, for example, that they are much more concrete (fewer spectral sequences involved) and thus a simple mind like myself has fewer problems understanding them. Unfortunately, at the same rate as the talks become more interesting (at least to me; I have to admit that I do not get too excited when people present the 100th semi-realistic model that might even have fewer phenomenological shortcomings than the 99th that was presented at last year's conference) the internet connectivity gets worse and worse: In principle, there is a WLAN in the lecture hall and the lobby and it is protected by a VPN. However, the signal strength is so low that the connection gets lost every other minute, resulting in the VPN client also losing its authentication. As a result, I now type this into my emacs and hope to later cut and paste it into the forms at

Today's session started with two presentations that I am sure many people are not completely convinced by, but they at least had great entertainment value: Mike Douglas reviewed his (and collaborators') counting of vacua and Dimopoulos presented Split Supersymmetry.
Split Supersymmetry is the idea that the scale of susy breaking is much higher than the weak scale (and the hierarchy is to be explained by some other mechanism) but the fermionic superpartners still have masses around (or slightly above) 100 GeV. This preserves the MSSM's good properties for gauge unification and provides dark matter candidates, but removes all possible problems coming with relatively light scalars (CP, FCNC, proton decay). However, it might also lack good motivation (Update: I was told that keeping this amount of susy prevents the Higgs mass from becoming too large. This is consistent with upper bounds coming from loop corrections etc.). But as I learned, at the weak scale there are only four coupling constants that all come from tan(beta), so they should run and unify at the susy scale. But the most spectacular prediction would be that the LHC would produce gluinos at a rate of about one per second, and as they decay through the heavy scalars they might well have a lifetime of several seconds. As they are colour octets they either bind to q q-bar or to qqq and thus form R-mesons and R-baryons. These (at least if charged, which a significant fraction would be) would get stuck inside the detector (for example in the muon chambers) and decay later into jets that would be easy to observe and do not come from the interaction area of the detector. So, stay tuned for a few more years.

Talking of interesting accelerator physics beyond the standard model, Gianguido Dall'Agata urges me to spread a rumour that some US accelerator (he doesn't remember which) sees evidence for a Z' that is a sign of another SU(2) group (coupling to right handed fermions?) that is broken at a much higher scale than the usual SU(2)-left. He doesn't remember any more details but he promised to dig them up. So again, stay tuned.

Finally, I come to the favourite topic of at least one reader at Columbia, The Landscape(tm). Mike gave a review talk that evolved from a talk that he has already given a number of times, so there was not much news. I haven't really followed this topic over the last couple of months so I was updated on a number of aspects, and one of them I find worth discussing. I have to admit it is not really new but at least to me it got a new twist. It is the question of which a priori assumptions you are willing to make. Obviously you want to exclude vacua with N=2 susy as they come with exact moduli spaces. That is, there is a continuum of such vacua and these would dominate any finite number, however large it (or better: its exponent) might be. Once you accept that you have to make some assumption to exclude some "unphysical" vacua, you are free to exclude further: It is common in this business to assume four non-compact dimensions and put an upper bound on the size of the compact ones (or a lower bound on KK masses) for empirical reasons. Furthermore, one could immediately exclude models that for example have unwanted ("exotic") chiral matter. To me (being no expert in these counting matters), intuition from intersecting branes and their T-duals, magnetized branes, suggests that this restriction would help to get rid of really many vacua and in the end you might end up with a relatively small number of remaining ones.

Philosophically speaking, by accepting a priori assumptions (aka empirical observations) one gives up the idea of a theory of everything, a theory that predicts every observation you make.
Be it the amount of susy, the number of generations, the mass of the electron (in Planck units), the spectrum of the CMB, the number of planets in the solar system, the colour of my car. But (as I have argued earlier) a hope for such a TOE would have been very optimistic anyway. This would be a theory that has only one single solution to its equations of motion (if that classical concept applies). Obviously, this is a much stricter requirement than to ask for a theory without parameters (a property I would expect from a more realistic TOE). All numerical parameters would actually be vevs of some scalar fields that are determined by the dynamics and might even be changing, or at least varying between different solutions.

So, we will have to make a priori assumptions. Does this render the theory unpredictive? Of course not! At least not if we can make more observations than the data we had to assume. For example, we could ask for all string vacua with standard model gauge group, four large dimensions, susy breaking at around 1 TeV and maybe an electron mass of 511 keV and some weak coupling constant. Then maybe we end up with an ensemble of N vacua (hopefully a small number). Then we could go ahead (if we were really good calculators) and check which of these is realized, and from that moment on we would make predictions. So it would be a predictive theory, even if the number of vacua were infinite once we dropped any of our a priori assumptions. Still, for the obvious reasons, we would never be able to prove that we have the correct theory and that there could not be any other, but this is just because physics is an empirical science and not math.

I think that so far it is hard to disagree with what I have said (although you might not share some of my hopes/assumptions). It becomes really controversial if one starts to draw statistical conclusions from the distribution of vacua, as in the end we only live in a single one. This becomes especially dangerous when combined with the a priori assumptions: These are of course most effective when they go against the statistics, as then they rule out a larger fraction of vacua. It is tempting to promote any statement which goes against the statistics into an a priori assumption and celebrate any statement that is in line with the weight of the distribution. Try for yourself with the statement "SUSY is broken at a low scale". This all leaves aside the problem that so far nobody has had a divine message about the probability distribution between the 10^300 vacua and why it should be
A Superior Alternative to Rote Learning

When I was taught soccer as a kid, there was one big mantra: repetition, repetition, repetition. We learned to pass by standing in front of each other and passing the ball between us for 20 minutes. We did this almost every training session. We learned headers the same way. We learned shooting by shooting at the goal for half an hour at the end of every training session. It wasn't fun, but it worked. After several years of weekly practice, I'm quite good at soccer.

When I was a bit older I learned to play trumpet and the mantra was again: repetition, repetition, repetition. I had to repeat certain songs until I was able to play them perfectly. I'm sure this method would have worked again if I hadn't given up after 2 years or so.

The same teaching method was used to teach me mathematics, Latin etc. in school. I learned to solve equations by solving hundreds of them. I learned to integrate by integrating hundreds of integrals. I learned Latin vocabulary by repeating it over and over again.

The story continued when I learned physics at university. To pass exams I had to know the exercise sheets by heart. Thus I calculated them over and over again until I had memorized every step. Again it wasn't fun, but it worked.

Rote learning is certainly a valid approach, but is it really the best we can do? It turns out there is another teaching method that is not only much more fun but also far more effective. It's called differential learning. Currently, this approach is only somewhat widespread in sports, but I'm convinced that it's applicable almost everywhere.

Introducing: Differential Learning

The basic idea is this: Instead of letting someone repeat the correct way to do something over and over again, you actively encourage him/her to do it wrong.

For instance, if I want to teach soccer to kids, I don't let them repeat the correct passing technique over and over again. Instead, I tell them to pass the ball in every correct and incorrect way possible. A good way to pass a ball is to use the inside of the foot. I let them do this, but also tell them to do it in every other way possible. They have to use the outer part of their foot. They have to use the back of their foot. They have to use the bottom of their foot. They even have to pass the ball with their shin. This way they learn to control the ball and pass it cleanly much more quickly. They are immediately exposed to the differences between correct and inferior techniques. That's why it's called differential learning. The kids learn to adapt and find their own style. Most importantly, the brain doesn't get bored and keeps learning and learning.

This method is surprisingly new. It was first put forward in 1999 by the German sports scientist Wolfgang Schöllhorn. However, it became popular quickly, at least in the soccer world. For example, the former coach of Borussia Dortmund, Thomas Tuchel, used it with great success. In addition to such anecdotal evidence there is serious research going on and so far, the data looks convincing.

So is differential learning limited to sports? Absolutely not. It's easy to imagine how the same basic idea could be applied in other fields. However, I don't know of any examples where differential learning is currently used outside of the soccer world. This means we need to get creative. My field is physics, so I will use it as an example. Let's say we want to teach quantum mechanics.
The thing is, if you pick up any textbook on quantum mechanics, all you find is the standard story, repeated over and over again. I recently helped a friend who was preparing for her final exam and was shocked when I saw again how similar all the textbooks are. What you'll never find in these textbooks is disagreement or discussions of alternatives. However, this would be exactly what we need to make differential learning of quantum mechanics possible.

So what could differential learning of quantum mechanics look like in practice? First, let's remind ourselves how differential learning of soccer works. Afterward, we can try to map the essential steps to quantum mechanics. To teach kids soccer, we need to identify the fundamentals: passing, shooting, headers, tackles, stopping, etc. Then we let them execute these fundamentals, but make sure that they do it in every wrong and right way possible. The goal is that the kids learn to control the ball in all kinds of situations and are able to move the ball wherever they want it to be on the pitch.

So what are the fundamentals of quantum mechanics and what do we want our students to be able to do? Our goal is that students are able to describe the behavior of elementary particles in all kinds of situations:

• when they are alone and moving freely,
• when they are confined in a box,
• when they are bound to another particle,
• when they scatter off a wall,
• when they are shot onto a wall with slits in it,
• when they move in a magnetic field, etc.

The differential way to teach this would be to give the students the task of describing particles in these situations, together with the experimental data that tells them what actually happens. We don't force the correct way to do it onto them. Instead, we encourage them to try it in every wrong way possible. This way we can avoid students simply memorizing the usual quantum algorithm* without understanding anything. This is exactly what goes wrong in the standard approach. Like the kids learning soccer by repeating the "correct way" to do something over and over again, students of quantum mechanics usually only learn to apply the standard quantum algorithm again and again. Instead, through differential learning, they would not only be able to describe what the particles do in all these situations but actually understand why the description works.

That's just one example, but it's easy to apply the principles of "differential learning" to any other topic. I would love to see people implement it in all kinds of fields. So, if you know of any existing course that makes use of "differential learning" or have any ideas of how and where it could be used, please let me know.

*The algorithm is so simple that it is easily possible to apply it without any deeper understanding: Write down the Hamiltonian for the system in question, put it into the Schrödinger equation, solve it and while doing so take care of the boundary conditions. The solution is a function of space $x$ and the square of the absolute value of the solution gives you the correct probability to find the particle at any place you want to know about. You can simply memorize it, together with the Schrödinger equation, and you'll be able to solve almost any problem your professor throws at you in an exam.

PS: There are, of course, still lots of details missing in the alternative quantum mechanics course outlined above.
However, it's on my to-do list for next year to fill in the gaps and develop a fully-fledged quantum mechanics mini-course that applies the principles of "differential learning".

Anyone can Contribute

There is a problem in the tech community called "Devsplaining". This notion is used to describe when "experts" condescendingly explain to others the "proper way to code". The thing is that there really is no proper way to code. Usually, the "experts" unnecessarily complicate stuff. As a result, most beginners think they need to study coding for years and know everything about the latest technology before they are capable of contributing anything valuable. With this kind of mindset, most never share anything they create. In tech, this means that people never launch their project or never even start because they think they aren't ready.

Exactly the same problem exists in physics, mathematics and probably most other scientific fields. The problem is so widespread that there isn't even a name for it. It's just so normal. As a result, many people feel they are not good enough to contribute. This is a problem because people who overcomplicate things exclude many brilliant minds who really could make a difference. It's nonsense that you need to spend your best years doing complicated calculations to prove that you are good enough. It's nonsense that you must be capable of doing the most complicated calculations before you can add something. It's nonsense that you need to master every mathematical aspect before you can contribute anything significant to the field.

Novel deep insights can originate from everywhere. Possibly from a rigorous proof. Possibly from a long and complicated calculation. Possibly from a simple thought experiment. It's not too hard to find examples for each of them. Moreover, making huge discoveries is not the only possibility to help physics move forward. To quote Sir William Lawrence Bragg:

There is a famous cartoon (based on a quote, often attributed to Einstein) that summarizes the problem nicely: If you now roll your eyes because you have seen this cartoon already 100 times, please let me explain. The thing is that while in exams the task is usually comparable to the "climb the tree" task here, the overall "task" in physics, for example, is much broader. Maybe more like "understand the tree". Climbing the tree is a viable way to understand some aspects of the tree, but certainly not an exhaustive one. The thing is that "Devsplainers" try to convince everyone that the only thing worth doing is to climb the tree and that you have to do it in a very specific way. However, with a broader goal like "understand the tree" in mind, it's easy to imagine how each of the animals in the cartoon could contribute. The fish and bird could together figure out where the water comes from that is crucial for the tree. The elephant could use his strength to open up a "window" to look inside… You get the idea.

As an aside: From this perspective, it's clear how problematic it is that students usually are only asked to "climb the tree" in exams. Many students who could contribute in other ways are filtered out. However, this is a quite different story I don't want to dive into here.

So how could the situation be improved? I have two concrete ideas.

1. I'm currently building a "Travel Guide to Physics". The goal here is to show that nothing is really complicated. There are lots of different ways to tackle a subject and not every explanation is suited for everyone.
Instead, everyone needs to find explanations that speak a language he/she understands, and that’s what the Physics Travel Guide helps with. 2. I’m trying to get more people to share what they learn. The thing is, as I like to emphasize, beginners need explanations from beginners to understand something, and not polished stuff from “Devsplainers” who overcomplicate everything. If more people shared what they learn, everyone could learn more easily. Currently, we “are an army of wheel-reinventors” and I would like to help change that. Moreover, sharing notes while learning is an example of how students can start contributing early on. Now, why aren’t more students doing this? There are two major obstacles: 1. many don’t know they are “allowed” to do this, or that others would care about their notes, and 2. many don’t know how to publish their notes online. That’s why I created Physicsnotes.org. My goals with this project are 1. to motivate people to share their notes and show them that they don’t need permission, and 2. to give them the tools and knowledge to do so. (I really regret that I didn’t publish my notes while I was a student. While some of them are now published as a book, the majority of them no longer exist.) So these are just my ideas. I’m certain there are more and I would love to hear them. Moreover, I think a crucial first step would be to give the problem a name, like the tech people did with “Devsplaining”. With a fitting name, people could start to call others out for overcomplicating things and frightening beginners. So if you have an idea, please let me know.

Unfortunately, repetition is a convincing argument.

I recently wrote about the question “When do you understand?”. In this post, I outlined a pattern I have observed in how I end up with a deep understanding of a given topic. However, there is also a second path that I totally missed in this post. The path to understanding that I outlined requires massive efforts to get to the bottom of things. I argued that you only understand something when you are able to explain it in simple terms. The second path that I missed in my post doesn’t really lead to understanding. Yet, the end result is quite similar. Oftentimes, it’s not easy to tell if someone got to his understanding via path 1 or path 2. Even worse, oftentimes you can’t tell if you got to your own understanding via path 1 or path 2. So what is this second path? It consists of reading something so often that you start to accept it as a fact. The second path makes use of repetition as a strong argument. Once you know this, it is shocking to observe how easily one gets convinced by mere repetition. However, this isn’t as bad as it may sound. If dozens of experts that you respect repeat something, it is a relatively safe bet to believe them. This isn’t a bad strategy. At least, not always. Especially when you are starting out, you need orientation. If you want to move forward quickly, you can’t get to the bottom of every argument. Still, there are instances where this second path is especially harmful. Fundamental physics is definitely one of them. If we want to expand our understanding of nature at the most fundamental level, we need to constantly ask ourselves: Do we really understand this? Or have we simply accepted it because it got repeated often enough? The thing is that physics is not based on axioms.
Even if you could manage to condense our current state of knowledge into a set of axioms, it would be a safe bet that at least one of them will be dropped in the next century. Here’s an example.

Hawking and the Expanding Universe

In 1983 Stephen Hawking gave a lecture about cosmology, in which he explained: “The De Sitter example was useful because it showed how one could solve the Wheeler-DeWitt equation and apply the boundary conditions in a simple case. […] However, if there are two facts about our universe which we are reasonably certain, one is that it is not exponentially expanding and the other is that it contains matter.” Only 15 years later, physicists were no longer “reasonably certain” that the universe isn’t exponentially expanding. On the contrary, we are now reasonably certain of the exact opposite. By observing the most distant supernovae, two experimental groups established the accelerating expansion as an experimental fact. This was a big surprise for everyone and rightfully led to a Nobel prize for its discoverers. The moral of this example isn’t, of course, that Hawking is stupid. He only summarized what everyone at the time believed they knew. This example shows how quickly our most basic assumptions can change. Although most experts were certain that the expansion of the universe isn’t accelerating, they were all wrong.

Theorems in Physics and the Assumptions Behind Them

If you want further examples, just have a look at almost any theorem that is commonly cited in physics. Usually, the short final message of the theorem is repeated over and over. However, you almost never hear about the assumptions that are absolutely crucial for the proof. This is especially harmful because, as the example above demonstrated, our understanding of nature constantly changes. Physics is never as definitive as mathematics. Even theorems aren’t bulletproof in physics, because the assumptions can turn out to be wrong with new experimental findings. What we currently think to be true about physics will be completely obsolete in 100 years. That’s what history teaches us. An example, closely related to the accelerating universe example from above, is the Coleman-Mandula theorem. There is probably no theorem that is cited more often. Most talks related to supersymmetry mention it at some point. It is no exaggeration when I say that I have heard at least 100 talks that mentioned the final message of the proof: “space-time and internal symmetries cannot be combined in any but a trivial way”. Yet, so far I’ve found no one who was able to discuss the assumptions of the theorem. The theorem got repeated so often in the last decades that it is almost universally accepted to be true. And yes, the proof is, of course, correct. However, what if one of the assumptions that go into the proof isn’t valid? Let’s have a look. An important condition, already mentioned in the abstract of the original paper, is Poincare symmetry. The original paper was published in 1967, and at the time it seemed reasonably certain that we live in a universe with Poincare symmetry. However, as already mentioned above, we have known since 1998 that this isn’t correct. The expansion of the universe is accelerating. This means the cosmological constant is nonzero. The correct symmetry group that preserves the constant speed of light and the value of a nonzero cosmological constant is the De Sitter group and not the Poincare group. In the limit of a vanishing cosmological constant, the De Sitter group contracts to the Poincare group.
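For readers who want to see that last statement spelled out, here is a schematic sketch of the standard İnönü–Wigner contraction (normalization conventions chosen purely for illustration): write the de Sitter generators as $M_{AB}$ with $A,B = 0,\ldots,4$, define the would-be translations as rescaled generators, and let the de Sitter radius $R$ (with $\Lambda \propto 1/R^2$) go to infinity:

$$ P_\mu \equiv \frac{M_{\mu 4}}{R}, \qquad [P_\mu, P_\nu] = \frac{1}{R^2}\,[M_{\mu 4}, M_{\nu 4}] \;\propto\; \frac{1}{R^2}\, M_{\mu\nu} \;\longrightarrow\; 0 \quad \text{as } R \to \infty. $$

Only in this limit do the translations commute and split off as a separate inhomogeneous piece; at finite $R$, that is, at nonzero cosmological constant, there is no such invariant translation subgroup.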
The cosmological constant is indeed tiny, and therefore we aren’t too wrong if we use the Poincare group instead of the De Sitter group. Yet, for a mathematical proof like the one proposed by Coleman and Mandula, whether we use De Sitter symmetry or Poincare symmetry makes all the difference in the world. The Poincare group is a quite ugly group and consists of Lorentz transformations and translations: $ \mathbb{R}(3,1) \rtimes SL(2,\mathbb{C}) .$ The Coleman-Mandula proof makes crucial use of the inhomogeneous translation part of this group, $\mathbb{R}(3,1)$. In contrast, the De Sitter group is a simple group. There is no inhomogeneous part. As far as I know, there is no Coleman-Mandula theorem if we replace the assumption “Poincare symmetry” with “De Sitter symmetry”. This is an example where repetition is the strongest argument. The final message of the Coleman-Mandula theorem is universally accepted as a fact. Yet, almost no one has had a look at the original paper and its assumptions. The strongest argument for the Coleman-Mandula theorem seems to be that it was repeated so often in the last decades. Maybe you think: what’s the big deal? Well, if the Coleman-Mandula no-go theorem is no longer valid because we live in a universe with De Sitter symmetry, a whole new world would open up in theoretical physics. We could start thinking about how spacetime symmetries and internal symmetries fit together.

The QCD Vacuum

Here is another example of something people take for granted only because it was repeated often enough: the structure of the QCD vacuum. I’ve written about this at great length here. I talked to several Ph.D. students who work on problems related to the strong CP problem and the vacuum in quantum field theory. Few knew the assumptions that are necessary to arrive at the standard interpretation of the QCD vacuum. No one knew where the assumptions actually come from and whether they are really justified. The thing is that when you dig deep enough, you’ll notice that the restriction to gauge transformations that satisfy $U \to 1$ at infinity is not based on anything bulletproof, but is simply an assumption. This is a crucial difference, and if you want to think about the QCD vacuum and the strong CP problem you should know this. However, most people take this restriction for granted, because it has been repeated often enough.

Progress in Theoretical Physics without Experimental Guidance

The longer I study physics, the more I become convinced that people should be more careful about what they think is definitely correct. Actually, there are very few things we know for certain, and it never hurts to ask: what if this assumption everyone uses is actually wrong? For a long time, physics was strongly guided by experimental findings. From what I’ve read, these must have been amazingly exciting times. There was tremendous progress after each experimental finding. However, in the last decades, there have been no experimental results that have helped us understand nature better at a fundamental level. (I’ve written about the status of particle physics here.) So currently a lot of people are asking: How can there be progress without experimental results that excite us? I think a good idea would be to take a step back and talk openly, clearly and precisely about what we know and understand and what we don’t.
Already in 1996, Nobel Prize winner Sheldon Lee Glashow noted:

The first step in this direction would be for more people to be aware that, while repetition is a strong argument, it is not a good one when we try to make progress. The examples above hopefully made clear that just because many people state that something is correct does not mean that it actually is correct. The message of a theorem can be invalid, although the proof is correct, simply because the assumptions are no longer up to date. This is what science is all about. We should always question what we take for granted. As for many things, Feynman said it best:
Thursday, April 19, 2018

What is good science?

When your understanding of physics establishes that empirical infinity is a large number, and that the inverse is a small number establishing the scaling system of the universe, it soon becomes impossible to securely observe what must be the foundation details. It has been possible to image a neutron, sort of. Yet that neutron could be constructed from 600 dark matter elements or so. Each dark matter element is additionally constructed from likely 1200 additional components, all while dropping the implied scale by a thousand and then a million. So far there is no plausible way to use our clumsy hardware to see any of this, and we may never be able to. Good science starts with learning to collect observations, such as they are, to establish a phenomenon. It continues through learning to study observers and to find many of them. From that it is possible to enhance your potential conjecture to something you can trust to test. If it becomes impossible to collect data, then you must contrive blank-sheet conjectures that you then learn to bound and test. This is what we really have with quantum theory, and it has been a fruitful approach to the physical problem of seeing the physical at the scales involved. My Cloud cosmology is orthogonal to the quantum approach and thus allows me to start with creation itself and self-assemble the universe to the point at which our observations become resolved. Both are good science, as they attack the observations in two separate directions of inquiry. Bad science takes the form of manipulating data to produce desired conclusions, or of outright ignoring the phenomena and bad-mouthing them.

What is good science?

Adam Becker is a writer and astrophysicist. He is currently a visiting scholar at the Office for History of Science and Technology at the University of California, Berkeley. His writing has appeared in New Scientist and on the BBC, among others. He is the author of What is Real? The Unfinished Quest for the Meaning of Quantum Physics (2018). He lives in Oakland, California.

But for a theoretical physicist, designing sky-castles is just part of the job. Spinning new ideas about how the world could be – or in some cases, how the world definitely isn’t – is central to their work. Some structures might be built up with great care over many years, and end up with peculiar names such as inflationary multiverse or superstring theory. Others are fabricated and dismissed casually over the course of a single afternoon, found and lost again by a lone adventurer in the troposphere of thought. That doesn’t mean it’s just freestyle sky-castle architecture out there at the frontier. The goal of scientific theory-building is to understand the nature of the world with increasing accuracy over time. All that creative energy has to hook back onto reality at some point. But turning ingenuity into fact is much more nuanced than simply announcing that all ideas must meet the inflexible standards of falsifiability and observability. These are not measures of the quality of a scientific theory. They might be neat guidelines or heuristics, but as is usually the case with simple answers, they’re also wrong, or at least only half-right. Falsifiability doesn’t work as a blanket restriction in science for the simple reason that there are no genuinely falsifiable scientific theories.
I can come up with a theory that makes a prediction that looks falsifiable, but when the data tell me it’s wrong, I can conjure some fresh ideas to plug the hole and save the theory. The history of science is full of examples of this ex post facto intellectual engineering. In 1781, William and Caroline Herschel discovered the planet Uranus. Physicists of the time promptly set about predicting its orbit using Sir Isaac Newton’s law of universal gravitation. But in the following decades, as astronomers followed Uranus’s motion in its slow 84-year orbit around the Sun, they noticed that something was wrong. Uranus didn’t quite move as it should. Puzzled, they refined their measurements, took more and more careful observations, but the anomaly didn’t go away. Newton’s physics simply didn’t predict the location of Uranus over time. But astronomers of the day didn’t claim that the unexpected data falsified Newtonian gravity. Instead, they proposed another explanation for the strange motion of Uranus: something large and unseen was tugging on the planet. Calculations showed that it would have to be another planet, as large as Uranus and even farther from the Sun. In 1846, the French astrophysicist Urbain Le Verrier predicted the location of this hypothetical planet. Unable to get any French observatories interested in the hunt, he sent the details of his prediction to colleagues in Germany. That night, they pointed their telescopes where Le Verrier had told them to look, and within half an hour they spotted the planet Neptune. Newtonian physics, rather than being falsified, had been fabulously vindicated – it had successfully predicted the exact location of an entire unseen planet. Flush with success, Le Verrier went after another planetary puzzle. Several years after his discovery of Neptune, it became clear to him and other astronomers that Mercury wasn’t moving as it was supposed to, either. The point in its orbit where it made its closest approach to the Sun, known as the perihelion, shifted a little more than Newton’s gravity said it should each Mercurial year, adding up to 43 extra arcseconds (a unit of angular measurement) over the course of a century. This is a tiny amount – less than one-30,000th of a full orbit around the Sun – but just as with Uranus before, the anomaly didn’t go away with persistent observation. It stubbornly remained, defying the ghost of Newton. Once again, Newtonian gravity was not thrown out as falsified – at least, not immediately. Instead, Le Verrier tried the same trick again: pinning the anomaly on an unseen planet, a tiny rock so close to the Sun that it had been missed by all other astronomers throughout human history. He called the planet Vulcan, after the Roman god of the forge. Le Verrier and others sought Vulcan for years, lugging powerful telescopes to solar eclipses in an attempt to catch a glimpse of the unseen planet in the brief minutes of totality while the Sun was blocked by the Earth’s moon. Le Verrier never found Vulcan. After his death in 1877, the astronomy community gave up the search, concluding that Vulcan simply wasn’t there. But even so, Newton’s gravity wasn’t discarded. Instead, astronomers of the time collectively shrugged and moved on. For years, the mystery of Mercury’s perihelion was unsolved, without any serious suggestion that Newton was wrong. Falsification was simply not on the menu.
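A quick sanity check on those numbers, assuming textbook values for Mercury’s orbit and the standard first-order formula for the relativistic perihelion shift (which enters the story below):

import math

# The observed anomaly: ~43 arcseconds per century.
arcsec_per_century = 43.0
arcsec_per_revolution = 360 * 3600
print(arcsec_per_revolution / arcsec_per_century)   # ~30,000: "one-30,000th of a full orbit"

# First-order general-relativistic prediction per orbit (radians):
#   delta_phi = 6 * pi * G * M_sun / (c^2 * a * (1 - e^2))
GM_sun = 1.327e20        # m^3 / s^2
c = 2.998e8              # m / s
a = 5.79e10              # Mercury's semi-major axis, m
e = 0.2056               # Mercury's orbital eccentricity
delta_phi = 6 * math.pi * GM_sun / (c**2 * a * (1 - e**2))

orbits_per_century = 36525.0 / 88.0                  # Mercury's year is about 88 days
print(delta_phi * orbits_per_century * (180 / math.pi) * 3600)   # ~43 arcseconds

Both figures come out as stated: the observed excess is roughly one part in 30,000 of a revolution per century, and the relativistic formula reproduces it almost exactly.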
Finally, in 1915, Albert Einstein used his brand-new theory of general relativity to show that he could succeed where Le Verrier had failed. General relativity was a new account of how gravity worked, superseding Newtonian physics – and it perfectly predicted the shift in the perihelion of Mercury. Einstein said he was ‘beside himself with joy’ when he realised that his theory could correctly solve this longstanding puzzle. Four years later, the British astronomer Arthur Eddington and his team took their powerful telescopes to an eclipse, not to hunt for Vulcan, but to confirm that starlight bent around the Sun as Einstein’s theory had predicted. They found that general relativity was right (though later investigations suggested that their results were marred by errors, despite reaching the correct conclusion); Einstein was instantly rocketed to fame as the man who had shown Newton wrong. So Newtonian gravity was ultimately thrown out, but not merely in the face of data that threatened it. That wasn’t enough. It wasn’t until a viable alternative theory arrived, in the form of Einstein’s general relativity, that the scientific community entertained the notion that Newton might have missed a trick. But what if Einstein had never shown up, or had been incorrect? Could astronomers have found another way to account for the anomaly in Mercury’s motion? Certainly – they could have said that Vulcan was there after all, and was merely invisible to telescopes in some way. This might sound somewhat far-fetched, but again, the history of science demonstrates that this kind of thing actually happens, and it sometimes works – as Pauli found out in 1930. At the time, new experiments threatened one of the core principles of physics, known as the conservation of energy. The data showed that in a certain kind of radioactive decay, electrons could fly out of an atomic nucleus with a range of speeds (and attendant energies) – even though the total amount of energy in the reaction should have been the same each time. That meant energy sometimes went missing from these reactions, and it wasn’t clear what was happening to it. The Danish physicist Niels Bohr was willing to give up energy conservation. But Pauli wasn’t ready to concede the idea was dead. Instead, he came up with his outlandish particle. ‘I have hit upon a desperate remedy to save … the energy theorem,’ he wrote. The new particle could account for the loss of energy, despite having almost no mass and no electric charge. But particle detectors at the time had no way of seeing a chargeless particle, so Pauli’s proposed solution was invisible. Nonetheless, rather than agreeing with Bohr that energy conservation had been falsified, the physics community embraced Pauli’s hypothetical particle: what came to be known as a ‘neutrino’ (the little neutral one), once the Italian physicist Enrico Fermi refined the theory a few years later. The happy epilogue was that neutrinos were finally observed in 1956, with technology that had been totally unforeseen a quarter-century earlier: a new kind of particle detector deployed in conjunction with a nuclear reactor. Pauli’s ghostly particles were real; in fact, later work revealed that trillions of neutrinos from the Sun pass through our body every second, totally unnoticed and unobserved. So invoking the invisible to save a theory from falsification is sometimes the right scientific move. Yet Pauli certainly didn’t believe that his particle could never be observed. 
He hoped that it could be seen eventually, and he was right. Similarly, Einstein’s general relativity was vindicated through observation. Falsification just can’t be the answer, or at least not the whole answer, to the question of what makes a good theory. What about observability? It’s certainly true that observation plays a crucial role in science. But this doesn’t mean that scientific theories have to deal exclusively in observable things. For one, the line between the observable and unobservable is blurry – what was once ‘unobservable’ can become ‘observable’, as the neutrino shows. Sometimes, a theory that postulates the imperceptible has proven to be the right theory, and is accepted as correct long before anyone devises a way to see those things. Take the debate within physics in the second half of the 1800s about atoms. Some scientists believed that they existed, but others were deeply skeptical. Physicists such as Ludwig Boltzmann in Austria, James Clerk Maxwell in the United Kingdom and Rudolf Clausius in Germany were convinced by the chemical and physical evidence that atomic theory was correct. Others, such as the Austrian physicist Ernst Mach, were unimpressed. To Mach, atoms were a wholly unnecessary hypothesis. After all, anything that wasn’t observable couldn’t be considered a part of a good scientific theory – in fact, such things couldn’t even be considered real. To him, the archetype for a perfect scientific theory was thermodynamics, the study of heat. This was a set of empirical laws relating directly observable quantities such as the temperature, pressure and volume of a gas. The theory was complete and perfect as it was, and made no reference to anything unobservable at all. But Boltzmann, Maxwell and Clausius had worked hard to show that Mach’s beloved thermodynamics was far from complete. Over the course of the rest of the 19th century, they and others, such as the American scientist Josiah Willard Gibbs, proved that the entirety of thermodynamics – and then some – could be re-derived from the simple assumption that atoms were real, and that all objects in everyday life were composed of a phenomenal number of them. While it was impossible in practice to predict the behaviour of every individual atom, in aggregate their behaviour obeyed regular patterns – and because there are so many atoms in everyday objects (way more than 100 billion billion of them in a thimbleful of air), those patterns were never visibly broken, even though they were the result only of statistical tendencies, not ironclad laws. The idea of demoting the laws of thermodynamics to mere patterns was repugnant to Mach; invoking things too small to be seen was even worse. ‘I don’t believe that atoms exist!’ he blurted out during a talk by Boltzmann in Vienna. Atoms were too small to see even with the most powerful microscope that could possibly be built at the time. Indeed, according to calculations carried out by Maxwell and the Austrian scientist Josef Loschmidt, atoms were hundreds of times smaller than the wavelength of visible light – and would thus be forever hidden from view of any microscope relying on light waves. Atoms were unobservable. Thus Mach condemned them as unreal and unnecessary, extraneous to the practice of science. Mach’s views were enormously influential in his native Austria and elsewhere in central Europe.
His ideas led his compatriot Boltzmann to despair of convincing the rest of the physics community that atoms were real; this might have contributed to Boltzmann’s suicide in 1906. Yet physicists who did subscribe to Mach’s ideas often found themselves stymied in their work. Walter Kaufmann, a talented German experimental physicist, found in 1897 that cathode rays (the kind of rays used inside old TVs and computer monitors) had a constant ratio of charge to mass. But rather than accepting that cathode rays might consist of small particles with a fixed charge and mass, he heeded Mach’s warning not to postulate anything unobservable, and remained silent on the subject. Months later, the English physicist JJ Thomson found the same curious fact about cathode rays. But Mach’s views were less popular in England, and Thomson was comfortable suggesting the existence of a tiny particle that comprised cathode rays. He called it the electron, and won the Nobel Prize for its discovery in 1906 (as well as an eternal place in all introductory physics and chemistry textbooks). Mach’s ideas certainly weren’t all bad; his writing inspired the young Einstein in his early work on relativity. Mach’s influence also extended to his godson, Pauli, the child of two fellow intellectuals in Vienna. Mach’s ideas played a major role in Pauli’s early intellectual development, and the words of his godfather were probably ringing in Pauli’s ears when he first suggested the idea of the neutrino. Unlike Pauli, Einstein was not afraid of suggesting unobservable things. In 1905, the same year he published his theory of special relativity, he proposed the existence of the photon, the particle of light, to an unbelieving world. (He was not proven right about photons for nearly 20 years.) Mach’s ideas also inspired a vital movement in philosophy a generation later, known as logical positivism – broadly speaking, the idea that the only meaningful statements about the world were ones that could be directly verified through observation. Positivism originated in Vienna and elsewhere in the 1920s, and the brilliant ideas of the positivists played a major role in shaping philosophy from that time to the present day. But what makes something ‘observable’? Are things that can be seen only with specialised implements observable? Some of the positivists said the answer was no, only the unvarnished data of our senses would suffice – so things seen in microscopes were therefore not truly real. But in that case, ‘we cannot observe physical things through opera glasses, or even through ordinary spectacles, and one begins to wonder about the status of what we see through an ordinary windowpane,’ the philosopher Grover Maxwell wrote in 1962. Furthermore, Maxwell pointed out that the definition of what was ‘unobservable in principle’ depends on our best scientific theories and full understanding of the world, and so moves over time. Before the invention of the telescope, for example, the idea of an instrument that could make distant objects appear closer seemed impossible; consequently, a planet too faint to be seen with the naked eye, such as Neptune, would have been deemed ‘unobservable in principle’. Yet Neptune is undoubtedly there – and we’ve not only seen it, we sent Voyager 2 there in 1989. Similarly, what we consider unobservable in principle today might become observable in the future with the advent of new physical theories and observational technologies. 
‘It is theory, and thus science itself, which tells us what is or is not … observable,’ Maxwell wrote. ‘There are no a priori or philosophical criteria for separating the observable from the unobservable.’ Even where theories propose identical observable outcomes, some are provisionally accepted while others are flatly rejected. Say I publish a theory stating that there are invisible microscopic unicorns with flowing hair, spiralled horns and a taste for partial differential equations; these unicorns are responsible for the randomness of the quantum world, pushing and pulling subatomic particles to ensure that they obey the Schrödinger equation, simply because they like that equation more than any other. This theory is, by its nature, totally observationally identical with quantum mechanics. But it is a profoundly silly theory, and would (I hope) be rejected by all physicists were someone to publish it. Putting aside this glib example, the choices we make between observationally identical theories have a big impact upon the practice of science. The American physicist Richard Feynman pointed out that two wildly different theories that have identical observational consequences can still give you different perspectives on problems, and lead you to different answers and different experiments to conduct in order to discover the next theory. So it’s not just the observable content of our scientific theories that matters. We use all of it, the observable and the unobservable, when we do science. Certainly, we are more wary about our belief in the existence of invisible entities, but we don’t deny that the unobservable things exist, or at least that their existence is plausible. Some of the most interesting scientific work gets done when scientists develop bizarre theories in the face of something new or unexplained. Madcap ideas must find a way of relating to the world – but demanding falsifiability or observability, without any sort of subtlety, will hold science back. It’s impossible to develop successful new theories under such rigid restrictions. As Pauli said when he first came up with the neutrino, despite his own misgivings: ‘Only those who wager can win.’

Henry said... I'm thinking that at least half this article could be mooted by clarifying the distinction made in science between a theory and a hypothesis. In English, the term theory has a vaguer meaning and can promote exactly this confusion. Yes, we need wild ideas in science, but as hypotheses, not as theories. Theories are what hypotheses become once they have sufficient evidentiary support to be considered proven.

Bob Podolsky said... This article is a good illustration of the fact that new science is not created within the structure of science, but instead must involve ideas "outside the box" that science represents. My father was Boris Podolsky, who predicted the discovery of "Quantum Entanglement" ("spooky action at a distance") in 1935 in a landmark paper with Einstein and Rosen. It was some 30 years before experimental technology caught up with theory and made the phenomenon "observable". My father's explanation was that new science has to be "sufficiently crazy", by the standards of existing science, in order to have any chance of being a valuable addition to current scientific lore. Bob Podolsky
February 4th, 2019 I’ve of course been following the recent public debate about whether to build a circular collider to succeed the LHC—notably including Sabine Hossenfelder’s New York Times column arguing that we shouldn’t.  (See also the responses by Jeremy Bernstein and Lisa Randall, and the discussion on Peter Woit’s blog, and Daniel Harlow’s Facebook thread, and this Vox piece by Kelsey Piper.)  Let me blog about this as a way of cracking my knuckles or tuning my violin, just getting back into blog-shape after a long hiatus for travel and family and the beginning of the semester. Regardless of whether this opinion is widely shared among my colleagues, I like Sabine.  I’ve often found her blogging funny and insightful, and I wish more non-Lubos physicists would articulate their thoughts for the public the way she does, rather than just standing on the sidelines and criticizing the ones who do. I find it unfortunate that some of the replies to Sabine’s arguments dwelled on her competence and “standing” in physics (even if we set aside—as we should—Lubos’s misogynistic rants, whose predictability could be used to calibrate atomic clocks). It’s like this: if high-energy physics had reached a pathological state of building bigger and bigger colliders for no good reason, then we’d expect that it would take a semi-outsider to say so in public, so then it wouldn’t be a further surprise to find precisely such a person doing it. Not for the first time, though, I find myself coming down on the opposite side as Sabine. Basically, if civilization could get its act together and find the money, I think it would be pretty awesome to build a new collider to push forward the energy frontier in our understanding of the universe. Note that I’m not making the much stronger claim that this is the best possible use of $20 billion for science. Plausibly a thousand $20-million projects could be found that would advance our understanding of reality by more than a new collider would. But it’s also important to realize that that’s not the question at stake here. When, for example, the US Congress cancelled the Superconducting Supercollider midway through construction—partly, it’s believed, on the basis of opposition from eminent physicists in other subfields, who argued that they could do equally important science for much cheaper—none of the SSC budget, as in 0% of it, ever did end up redirected to those other subfields. In practice, then, the question of “whether a new collider is worth it” is probably best considered in absolute terms, rather than relative to other science projects. What I found most puzzling, in Sabine’s writings on this subject, was the leap in logic from 1. many theorists expected that superpartners, or other new particles besides the Higgs boson, had a good chance of being discovered at the LHC, based on statistical arguments about “natural” parameter values, and 2. the basic soundness of naturalness arguments was always open to doubt, and indeed the LHC results to date offer zero support for them, and 3. many of the same theorists now want an even bigger collider, and continue to expect new particles to be found, and haven’t sufficiently reckoned with their previous failed predictions, to … 4. therefore we shouldn’t build the bigger collider. How do we get from 1-3 to 4: is the idea that we should punish the errant theorists, by withholding an experiment that they want, in order to deter future wrong predictions? 
After step 3, it seems to me that Sabine could equally well have gone to: and therefore it’s all the more important that we do build a new collider, in order to establish all the more conclusively that there’s just an energy desert up there—and that I, Sabine, was right to emphasize that possibility, and those other theorists were wrong to downplay it! Like, I gather that there are independently motivated scenarios where there would be only the Higgs at the LHC scale, and then new stuff at the next energy scale beyond it. And as an unqualified outsider who enjoys talking to friends in particle physics and binge-reading about it, I’d find it hard to assign the totality of those scenarios less than ~20% credence or more than ~80%—certainly if the actual experts don’t either. And crucially, it’s not as if raising the collision energy is just one arbitrary direction in which to look for new fundamental physics, among a hundred a-priori equally promising directions. Basically, there’s raising the collision energy and then there’s everything else. By raising the energy, you’re not testing one specific idea for physics beyond Standard Model, but a hundred or a thousand ideas in one swoop. The situation reminds me a little of the quantum computing skeptics who say: scalable QC can never work, in practice and probably even in principle; the mainstream physics community only thinks it can work because of groupthink and hype; therefore, we shouldn’t waste more funds trying to make it work. With the sole, very interesting exception of Gil Kalai, none of the skeptics ever seem to draw what strikes me as an equally logical conclusion: whoa, let’s go full speed ahead with trying to build a scalable QC, because there’s an epochal revolution in physics to be had here—once the experimenters finally see that I was right and the mainstream was wrong, and they start to unravel the reasons why! Of course, $20 billion is a significant chunk of change, by the standards of science even if not by the standards of random government wastages (like our recent $11 billion shutdown). And ultimately, decisions do need to be made about which experiments are most interesting to pursue with limited resources. And if a future circular collider were built, and if it indeed just found a desert, I think the balance would tilt pretty strongly toward Sabine’s position—that is, toward declining to build an even bigger and more expensive collider after that. If the Patriots drearily won every Superbowl 13-3, year after year after year, eventually no one would watch anymore and the Superbowl would get cancelled (well, maybe that will happen for other reasons…). But it’s worth remembering that—correct me if I’m wrong—so far there have been no cases in the history of particle physics of massively expanding the energy frontier and finding absolutely nothing new there (i.e., nothing that at least conveyed multiple bits of information, as the Higgs mass did). And while my opinion should count for less than a neutrino mass, just thinking it over a-priori, I keep coming back to the question: before we close the energy frontier for good, shouldn’t there have been at least one unmitigated null result, rather than zero? The Winding Road to Quantum Supremacy January 15th, 2019 Greetings from QIP’2019 in Boulder, Colorado! 
Obvious highlights of the conference include Urmila Mahadev’s opening plenary talk on her verification protocol for quantum computation (which I blogged about here), and Avishay Tal’s upcoming plenary on his and Ran Raz’s oracle separation between BQP and PH (which I blogged about here). If you care, here are the slides for the talk I just gave, on the paper “Online Learning of Quantum States” by me, Xinyi Chen, Elad Hazan, Satyen Kale, and Ashwin Nayak. Feel free to ask in the comments about what else is going on. I returned a few days ago from my whirlwind Australia tour, which included Melbourne and Sydney; a Persian wedding that happened to be held next to a pirate ship (the Steve Irwin, used to harass whalers and adorned with a huge Jolly Roger); meetings and lectures graciously arranged by friends at UTS; a quantum computing lab tour personally conducted by 2018 “Australian of the Year” Michelle Simmons; three meetups with readers of this blog (or more often, readers of the other Scott A’s blog who graciously settled for the discount Scott A); and an excursion to Grampians National Park to see wild kangaroos, wallabies, koalas, and emus. But the thing that happened in Australia that provided the actual occasion for this post is this: I was interviewed by Adam Ford in Carlton Gardens in Melbourne, about quantum supremacy, AI risk, Integrated Information Theory, whether the universe is discrete or continuous, and to be honest I don’t remember what else. You can watch the first segment, the one about the prospects for quantum supremacy, here on YouTube. My only complaint is that Adam’s video camera somehow made me look like an out-of-shape slob who needs to hit the gym or something. Update (Jan. 16): Adam has now posted a second video on YouTube, wherein I talk about my “Ghost in the Quantum Turing Machine” paper, my critique of Integrated Information Theory, and more. And now Adam has posted yet a third segment, in which I talk about small, lighthearted things like existential threats to civilization and the prospects for superintelligent AI. And a fourth, in which I talk about whether reality is discrete or continuous. Related to the “free will / consciousness” segment of the interview: the biologist Jerry Coyne, whose blog “Why Evolution Is True” I’ve intermittently enjoyed over the years, yesterday announced my existence to his readers, with a post that mostly criticizes my views about free will and predictability, as I expressed them years ago in a clip that’s on YouTube (at the time, Coyne hadn’t seen GIQTM or my other writings on the subject). Coyne also took the opportunity to poke fun at this weird character he just came across whose “life is devoted to computing” and who even mistakes tips for change at airport smoothie stands. Some friends here at QIP had a good laugh over the fact that, for the world beyond theoretical computer science and quantum information, this is what 23 years of research, teaching, and writing apparently boil down to: an 8.5-minute video clip where I spouted about free will, and also my having been arrested once in a comic mix-up at Philadelphia airport. Anyway, since then I had a very pleasant email exchange with Coyne—someone with whom I find myself in agreement much more often than not, and who I’d love to have an extended conversation with sometime despite the odd way our interaction started.
Incompleteness ex machina December 30th, 2018 I have a treat with which to impress your friends at New Year’s Eve parties tomorrow night: a rollicking essay graciously contributed by a reader named Sebastian Oberhoff, about a unified and simplified way to prove all of Gödel’s Incompleteness Theorems, as well as Rosser’s Theorem, directly in terms of computer programs. In particular, this improves over my treatments in Quantum Computing Since Democritus and my Rosser’s Theorem via Turing machines post. While there won’t be anything new here for the experts, I loved the style—indeed, it brings back wistful memories of how I used to write, before I accumulated too many imaginary (and non-imaginary) readers tut-tutting at crass jokes over my shoulder. May 2019 bring us all the time and the courage to express ourselves authentically, even in ways that might be sneered at as incomplete, inconsistent, or unsound. December 27th, 2018 I’m planning to be in Australia soon—in Melbourne January 4-10 for a friend’s wedding, then in Sydney January 10-11 to meet colleagues and give a talk. It will be my first trip down under for 12 years (and Dana’s first ever). If there’s interest, I might be able to do a Shtetl-Optimized meetup in Melbourne the evening of Friday the 4th (or the morning of Saturday the 5th), and/or another one in Sydney the evening of Thursday the 10th. Email me if you’d go, and then we’ll figure out details. The National Quantum Initiative Act is now law. Seeing the photos of Trump signing it, I felt … well, whatever emotions you might imagine I felt. Frank Verstraete asked me to announce that the University of Vienna is seeking a full professor in quantum algorithms; see here for details. Why are amplitudes complex? December 17th, 2018 [By prior agreement, this post will be cross-posted on Microsoft’s Q# blog, even though it has nothing to do with the Q# programming language.  It does, however, contain many examples that might be fun to implement in Q#!] Why should Nature have been quantum-mechanical?  It’s totally unclear what would count as an answer to such a question, and also totally clear that people will never stop asking it. Short of an ultimate answer, we can at least try to explain why, if you want this or that piece of quantum mechanics, then the rest of the structure is inevitable: why quantum mechanics is an “island in theoryspace,” as I put it in 2003. In this post, I’d like to focus on a question that any “explanation” for QM at some point needs to address, in a non-question-begging way: why should amplitudes have been complex numbers?  When I was a grad student, it was his relentless focus on that question, and on others in its vicinity, that made me a lifelong fan of Chris Fuchs (see for example his samizdat), despite my philosophical differences with him. It’s not that complex numbers are a bad choice for the foundation of the deepest known description of the physical universe—far from it!  (They’re a field, they’re algebraically closed, they’ve got a norm, how much more could you want?)  It’s just that they seem like a specific choice, and not the only possible one.  There are also the real numbers, for starters, and in the other direction, the quaternions. Quantum mechanics over the reals or the quaternions still has constructive and destructive interference among amplitudes, and unitary transformations, and probabilities that are absolute squares of amplitudes.  
Moreover, these variants turn out to lead to precisely the same power for quantum computers—namely, the class BQP—as “standard” quantum mechanics, the one over the complex numbers. So none of those are relevant differences. Indeed, having just finished teaching an undergrad Intro to Quantum Information course, I can attest that the complex nature of amplitudes is needed only rarely—shockingly rarely, one might say—in quantum computing and information. Real amplitudes typically suffice. Teleportation, superdense coding, the Bell inequality, quantum money, quantum key distribution, the Deutsch-Jozsa and Bernstein-Vazirani and Simon and Grover algorithms, quantum error-correction: all of those and more can be fully explained without using a single i that’s not a summation index. (Shor’s factoring algorithm is an exception; it’s much more natural with complex amplitudes. But as the previous paragraph implied, their use is removable even there.) It’s true that, if you look at even the simplest “real” examples of quantum systems—or as a software engineer might put it, at the application layers built on top of the quantum OS—then complex numbers are everywhere, in a way that seems impossible to remove. The Schrödinger equation, energy eigenstates, the position/momentum commutation relation, the state space of a spin-1/2 particle in 3-dimensional space: none of these make much sense without complex numbers (though it can be fun to try). But from a sufficiently Olympian remove, it feels circular to use any of this as a “reason” for why quantum mechanics should’ve involved complex amplitudes in the first place. It’s like, once your OS provides a certain core functionality (in this case, complex numbers), it’d be surprising if the application layer didn’t exploit that functionality to the hilt—especially if we’re talking about fundamental physics, where we’d like to imagine that nothing is wasted or superfluous (hence Rabi’s famous question about the muon: “who ordered that?”). But why should the quantum OS have provided complex-number functionality at all? Is it possible to answer that question purely in terms of the OS’s internal logic (i.e., abstract quantum information), making minimal reference to how the OS will eventually get used? Maybe not—but if so, then that itself would seem worthwhile to know. If we stick to abstract quantum information language, then the most “obvious, elementary” argument for why amplitudes should be complex numbers is one that I spelled out in Quantum Computing Since Democritus, as well as my Is quantum mechanics an island in theoryspace? paper. Namely, it seems desirable to be able to implement a “fraction” of any unitary operation U: for example, some $V$ such that $V^2=U$, or $V^3=U$. With complex numbers, this is trivial: we can simply diagonalize U, or use the Hamiltonian picture (i.e., take $e^{-iH/2}$ where $U=e^{-iH}$), both of which ultimately depend on the complex numbers being algebraically closed. Over the reals, by contrast, a 2×2 orthogonal matrix like $$ U = \left(\begin{array}[c]{cc}1 & 0\\0 & -1\end{array}\right)$$ has no 2×2 orthogonal square root, as follows immediately from its determinant being -1.
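As a small numerical illustration of the diagonalization route just mentioned: over the complex numbers the reflection $U$ above has an easy square root, while any real $V$ with $V^2=U$ would need $\det(V)^2 = -1$, which is impossible.

import numpy as np

U = np.diag([1.0, -1.0])                      # the 2x2 reflection from the text

# Diagonalize U and take square roots of the eigenvalues (complex is fine).
w, P = np.linalg.eig(U)
V = P @ np.diag(np.sqrt(w.astype(complex))) @ np.linalg.inv(P)

print(V)                                       # diag(1, i), up to floating-point noise
print(np.allclose(V @ V, U))                   # True: V^2 = U
print(np.allclose(V.conj().T @ V, np.eye(2)))  # True: V is unitary
print(np.linalg.det(U))                        # -1, the obstruction over the reals

(The three-dimensional real escape hatch described next can be checked the same way.)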
If we want a square root of U (or rather, of something that acts like U on a subspace) while sticking to real numbers only, then we need to add another dimension, like so: $$ \left(\begin{array}[c]{ccc}1 & 0 & 0\\0 & -1 & 0\\0 & 0 & -1\end{array}\right)=\left(\begin{array}[c]{ccc}1 & 0 & 0\\0 & 0 & 1\\0 & -1 & 0\end{array}\right) ^{2} $$ This is directly related to the fact that there’s no way for a Flatlander to “reflect herself” (i.e., switch her left and right sides while leaving everything else unchanged) by any continuous motion, unless she can lift off the plane and rotate herself through the third dimension. Similarly, for us to reflect ourselves would require rotating through a fourth dimension. One could reasonably ask: is that it? Aren’t there any “deeper” reasons in quantum information for why amplitudes should be complex numbers? Indeed, there are certain phenomena in quantum information that, slightly mysteriously, work out more elegantly if amplitudes are complex than if they’re real. (By “mysteriously,” I mean not that these phenomena can’t be 100% verified by explicit calculations, but simply that I don’t know of any deep principle by which the results of those calculations could’ve been predicted in advance.) One famous example of such a phenomenon is due to Bill Wootters: if you take a uniformly random pure state in d dimensions, and then you measure it in an orthonormal basis, what will the probability distribution $(p_1,\ldots,p_d)$ over the d possible measurement outcomes look like? The answer, amazingly, is that you’ll get a uniformly random probability distribution: that is, a uniformly random point on the simplex defined by $p_i \geq 0$ and $p_1+\ldots+p_d=1$. This fact, which I’ve used in several papers, is closely related to Archimedes’ Hat-Box Theorem, beloved by friend-of-the-blog Greg Kuperberg. But here’s the kicker: it only works if amplitudes are complex numbers. If amplitudes are real, then the resulting distribution over distributions will be too bunched up near the corners of the probability simplex; if they’re quaternions, it will be too bunched up near the middle. There’s an even more famous example of such a Goldilocks coincidence—one that’s been elevated, over the past two decades, to exalted titles like “the Axiom of Local Tomography.” Namely: suppose we have an unknown finite-dimensional mixed state ρ, shared by two players Alice and Bob. For example, ρ might be an EPR pair, or a correlated classical bit, or simply two qubits both in the state |0⟩. We imagine that Alice and Bob share many identical copies of ρ, so that they can learn more and more about it by measuring this copy in this basis, that copy in that basis, and so on. We then ask: can ρ be fully determined from the joint statistics of product measurements—that is, measurements that Alice and Bob can apply separately and locally to their respective subsystems, with no communication between them needed? A good example here would be the set of measurements that arise in a Bell experiment—measurements that, despite being local, certify that Alice and Bob must share an entangled state. If we asked the analogous question for classical probability distributions, the answer is clearly “yes.” That is, once you’ve specified the individual marginals, and you’ve also specified all the possible correlations among the players, you’ve fixed your distribution; there’s nothing further to specify. For quantum mixed states, the answer again turns out to be yes, but only because amplitudes are complex numbers!
In quantum mechanics over the reals, you could have a 2-qubit state like $$ \rho=\frac{1}{4}\left(\begin{array}[c]{cccc}1 & 0 & 0 & -1\\0 & 1 & 1 & 0\\0 & 1 & 1 & 0\\-1 & 0 & 0 & 1\end{array}\right) ,$$ which clearly isn’t the maximally mixed state, yet which is indistinguishable from the maximally mixed state by any local measurement that can be specified using real numbers only. (Proof: exercise!) In quantum mechanics over the quaternions, something even “worse” happens: namely, the tensor product of two Hermitian matrices need not be Hermitian. Alice’s measurement results might be described by the 2×2 quaternionic density matrix $$ \rho_{A}=\frac{1}{2}\left(\begin{array}[c]{cc}1 & -i\\i & 1\end{array}\right), $$ and Bob’s results might be described by the 2×2 quaternionic density matrix $$ \rho_{B}=\frac{1}{2}\left(\begin{array}[c]{cc}1 & -j\\j & 1\end{array}\right), $$ and yet there might not be (and in this case, isn’t) any 4×4 quaternionic density matrix corresponding to $\rho_A \otimes \rho_B$, which would explain both results separately. What’s going on here? Why do the local measurement statistics underdetermine the global quantum state with real amplitudes, and overdetermine it with quaternionic amplitudes, being in one-to-one correspondence with it only when amplitudes are complex? We can get some insight by looking at the number of independent real parameters needed to specify a d-dimensional Hermitian matrix. Over the complex numbers, the number is exactly $d^2$: we need 1 parameter for each of the d diagonal entries, and 2 (a real part and an imaginary part) for each of the d(d-1)/2 upper off-diagonal entries (the lower off-diagonal entries being determined by the upper ones). Over the real numbers, by contrast, “Hermitian matrices” are just real symmetric matrices, so the number of independent real parameters is only d(d+1)/2. And over the quaternions, the number is $d+4[d(d-1)/2] = d(2d-1)$. Now, it turns out that the Goldilocks phenomenon that we saw above—with local measurement statistics determining a unique global quantum state when and only when amplitudes are complex numbers—ultimately boils down to the simple fact that $$ (d_A d_B)^2 = d_A^2 d_B^2, $$ but $$\frac{d_A d_B (d_A d_B + 1)}{2} > \frac{d_A (d_A + 1)}{2} \cdot \frac{d_B (d_B + 1)}{2},$$ and conversely $$ d_A d_B (2 d_A d_B - 1) < d_A (2 d_A - 1) \cdot d_B (2 d_B - 1).$$ In other words, only with complex numbers does the number of real parameters needed to specify a “global” Hermitian operator exactly match the product of the number of parameters needed to specify an operator on Alice’s subsystem, and the number of parameters needed to specify an operator on Bob’s. With real numbers it overcounts, and with quaternions it undercounts. A major research goal in quantum foundations, since at least the early 2000s, has been to “derive” the formalism of QM purely from “intuitive-sounding, information-theoretic” postulates—analogous to how, in 1905, some guy whose name I forget derived the otherwise strange-looking Lorentz transformations purely from the assumption that the laws of physics (including a fixed, finite value for the speed of light) take the same form in every inertial frame. There have been some nontrivial successes of this program: most notably, the “axiomatic derivations” of QM due to Lucien Hardy and (more recently) Chiribella et al.
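Before moving on, here is one way to do the “exercise” above numerically; the sketch uses the fact that the displayed ρ equals $(I \otimes I + Y \otimes Y)/4$, and that $\mathrm{Tr}(AY) = 0$ for every real symmetric A:

import numpy as np

Y = np.array([[0, -1j], [1j, 0]])
I2 = np.eye(2)

rho = (np.kron(I2, I2) + np.kron(Y, Y)).real / 4    # the matrix displayed above
mixed = np.eye(4) / 4

# rho is a legitimate state (eigenvalues 0 and 1/2, trace 1), but it is not I/4.
print(np.allclose(rho, mixed))                       # False

rng = np.random.default_rng(0)
for _ in range(1000):
    A = rng.normal(size=(2, 2)); A = A + A.T         # random real symmetric observable for Alice
    B = rng.normal(size=(2, 2)); B = B + B.T         # ... and for Bob
    assert abs(np.trace(np.kron(A, B) @ rho)
               - np.trace(np.kron(A, B) @ mixed)) < 1e-12   # identical local statistics

print("no real product measurement can tell them apart")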
Starting from axioms that sound suitably general and nontechnical (if sometimes unmotivated and weird), these derivations perform the impressive magic trick of deriving the full mathematical structure of QM: complex amplitudes, unitary transformations, tensor products, the Born rule, everything. However, in every such derivation that I know of, some axiom needs to get introduced to capture “local tomography”: i.e., the “principle” that composite systems must be uniquely determined by the statistics of local measurements.  And while this principle might sound vague and unobjectionable, to those in the business, it’s obvious what it’s going to be used for the second it’s introduced.  Namely, it’s going to be used to rule out quantum mechanics over the real numbers, which would otherwise be a model for the axioms, and thus to “explain” why amplitudes have to be complex. I confess that I was always dissatisfied with this.  For I kept asking myself: would I have ever formulated the “Principle of Local Tomography” in the first place—or if someone else had proposed it, would I have ever accepted it as intuitive or natural—if I didn’t already know that QM over the complex numbers just happens to satisfy it?  And I could never honestly answer “yes.”  It always felt to me like a textbook example of drawing the target around where the arrow landed—i.e., of handpicking your axioms so that they yield a predetermined conclusion, which is then no more “explained” than it was at the beginning. Two months ago, something changed for me: namely, I smacked into the “Principle of Local Tomography,” and its reliance on complex numbers, in my own research, when I hadn’t in any sense set out to look for it.  This still doesn’t convince me that the principle is any sort of a-priori necessity.  But it at least convinces me that it’s, you know, the sort of thing you can smack into when you’re not looking for it. The aforementioned smacking occurred while I was writing up a small part of a huge paper with Guy Rothblum, about a new connection between so-called “gentle measurements” of quantum states (that is, measurements that don’t damage the states much), and the subfield of classical CS called differential privacy.  That connection is a story in itself; let me know if you’d like me to blog about it separately.  Our paper should be on the arXiv any day now; in the meantime, here are some PowerPoint slides. Anyway, for the paper with Guy, it was of interest to know the following: suppose we have a two-outcome measurement E (let’s say, on n qubits), and suppose it accepts every product state with the same probability p.  Must E then accept every entangled state with probability p as well?  Or, a closely-related question: suppose we know E’s acceptance probabilities on every product state.  Is that enough to determine its acceptance probabilities on all n-qubit states? I’m embarrassed to admit that I dithered around with these questions, finding complicated proofs for special cases, before I finally stumbled on the one-paragraph, obvious-in-retrospect “Proof from the Book” that slays them in complete generality. Here it is: if E accepts every product state with probability p, then clearly it accepts every separable mixed state (i.e., every convex combination of product states) with the same probability p.  Now, a well-known result of Braunstein et al., from 1998, states that (surprisingly enough) the separable mixed states have nonzero density within the set of all mixed states, in any given finite dimension.  
Also, the probability that E accepts ρ can be written as f(ρ)=Tr(Eρ), which is linear in the entries of ρ.  OK, but a linear function that’s determined on a subset of nonzero density is determined everywhere.  And in particular, if f is constant on that subset then it’s constant everywhere, QED.

But what does any of this have to do with why amplitudes are complex numbers?  Well, it turns out that the 1998 Braunstein et al. result, which was the linchpin of the above argument, only works in complex QM, not in real QM.  We can see its failure in real QM by simply counting parameters, similarly to what we did before.  An n-qubit density matrix requires 4ⁿ real parameters to specify (OK, 4ⁿ−1, if we demand that the trace is 1).  Even if we restrict to n-qubit density matrices with real entries only, we still need 2ⁿ(2ⁿ+1)/2 parameters.  By contrast, it’s not hard to show that an n-qubit real separable density matrix can be specified using only 3ⁿ real parameters—and indeed, that any such density matrix lies in a 3ⁿ-dimensional subspace of the full 2ⁿ(2ⁿ+1)/2-dimensional space of 2ⁿ×2ⁿ symmetric matrices.  (This is simply the subspace spanned by all possible tensor products of n Pauli I, X, and Z matrices—excluding the Y matrix, which is the one that involves imaginary numbers.)

But it’s not only the Braunstein et al. result that fails in real QM: the fact that I wanted for my paper with Guy fails as well.  As a counterexample, consider the 2-qubit measurement that accepts the state ρ with probability Tr(Eρ), where $$ E=\frac{1}{2}\left(\begin{array}[c]{cccc}1 & 0 & 0 & -1\\0 & 1 & 1 & 0\\0 & 1 & 1 & 0\\-1 & 0 & 0 & 1\end{array}\right).$$ I invite you to check that this measurement, which we specified using a real matrix, accepts every product state (a|0⟩+b|1⟩)(c|0⟩+d|1⟩), where a,b,c,d are real, with the same probability, namely 1/2—just like the “measurement” that simply returns a coin flip without even looking at the state at all.  And yet the measurement can clearly be nontrivial on entangled states: for example, it always rejects $$\frac{\left|00\right\rangle+\left|11\right\rangle}{\sqrt{2}},$$ and it always accepts $$ \frac{\left|00\right\rangle-\left|11\right\rangle}{\sqrt{2}}.$$

Is it a coincidence that we used exactly the same 4×4 matrix (up to scaling) to produce a counterexample to the real-QM version of Local Tomography, and also to the real-QM version of the property I wanted for the paper with Guy?  Is anything ever a coincidence in this sort of discussion?

I claim that, looked at the right way, Local Tomography and the property I wanted are the same property, their truth in complex QM is the same truth, and their falsehood in real QM is the same falsehood.  Why?  Simply because Tr(Eρ), the probability that the measurement E accepts the mixed state ρ, is a function of two Hermitian matrices E and ρ (both of which can be either “product” or “entangled”), and—crucially—is symmetric under the interchange of E and ρ.

Now it’s time for another confession.  We’ve identified an elegant property of quantum mechanics that’s true but only because amplitudes are complex numbers: namely, if you know the probability that your quantum circuit accepts every product state, then you also know the probability that it accepts an arbitrary state.  Yet, despite its elegance, this property turns out to be nearly useless for “real-world applications” in quantum information and computing.
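Before getting to why this elegant property is nevertheless nearly useless in practice, here’s a quick numerical check of the real-QM counterexample above (an illustrative sketch of mine, not anything from the paper with Guy):

```python
# E accepts every REAL product state with probability exactly 1/2, yet it acts
# nontrivially on the two Bell states.
import numpy as np

E = 0.5 * np.array([[ 1, 0, 0, -1],
                    [ 0, 1, 1,  0],
                    [ 0, 1, 1,  0],
                    [-1, 0, 0,  1]], dtype=float)

rng = np.random.default_rng(1)
for _ in range(1000):
    v1 = rng.normal(size=2); v1 /= np.linalg.norm(v1)   # real qubit a|0> + b|1>
    v2 = rng.normal(size=2); v2 /= np.linalg.norm(v2)   # real qubit c|0> + d|1>
    psi = np.kron(v1, v2)                               # real product state
    assert abs(psi @ E @ psi - 0.5) < 1e-12

bell_plus  = np.array([1, 0, 0,  1]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
bell_minus = np.array([1, 0, 0, -1]) / np.sqrt(2)   # (|00> - |11>)/sqrt(2)
print(bell_plus @ E @ bell_plus, bell_minus @ E @ bell_minus)   # 0.0 and 1.0
```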
The reason for the uselessness is that, for the property to kick in, you really do need to know the probabilities on product states almost exactly—meaning (say) to 1/exp(n) accuracy for an n-qubit state. Once again a simple example illustrates the point.  Suppose n is even, and suppose our measurement simply projects the n-qubit state onto a tensor product of n/2 Bell pairs.  Clearly, this measurement accepts every n-qubit product state with exponentially small probability, even as it accepts the entangled state  $$\left(\frac{\left|00\right\rangle+\left|11\right\rangle}{\sqrt{2}}\right)^{\otimes n/2}$$ with probability 1.  But this implies that noticing the nontriviality on entangled states, would require knowing the acceptance probabilities on product states to exponential accuracy. In a sense, then, I come back full circle to my original puzzlement: why should Local Tomography, or (alternatively) the-determination-of-a-circuit’s-behavior-on-arbitrary-states-from-its-behavior-on-product-states, have been important principles for Nature’s laws to satisfy?  Especially given that, in practice, the exponential accuracy required makes it difficult or impossible to exploit these principles anyway?  How could we have known a-priori that these principles would be important—if indeed they are important, and are not just mathematical spandrels? But, while I remain less than 100% satisfied about “why the complex numbers? why not just the reals?,” there’s one conclusion that my recent circling-back to these questions has made me fully confident about.  Namely: quantum mechanics over the quaternions is a flaming garbage fire, which would’ve been rejected at an extremely early stage of God and the angels’ deliberations about how to construct our universe. In the literature, when the question of “why not quaternionic amplitudes?” is discussed at all, you’ll typically read things about how the parameter-counting doesn’t quite work out (just like it doesn’t for real QM), or how the tensor product of quaternionic Hermitian matrices need not be Hermitian.  In this paper by McKague, you’ll read that the CHSH game is winnable with probability 1 in quaternionic QM, while in this paper by Fernandez and Schneeberger, you’ll read that the non-commutativity of the quaternions introduces an order-dependence even for spacelike-separated operations. But none of that does justice to the enormity of the problem.  To put it bluntly: unless something clever is done to fix it, quaternionic QM allows superluminal signaling.  This is easy to demonstrate: suppose Alice holds a qubit in the state |1⟩, while Bob holds a qubit in the state |+⟩ (yes, this will work even for unentangled states!)  Also, let $$U=\left(\begin{array}[c]{cc}1 & 0\\0 & j\end{array}\right) ,~~~V=\left(\begin{array}[c]{cc}1 & 0\\0& i\end{array}\right).$$ We can calculate that, if Alice applies U to her qubit and then Bob applies V to his qubit, Bob will be left with the state $$ \frac{j \left|0\right\rangle + k \left|1\right\rangle}{\sqrt{2}}.$$ By contrast, if Alice decided to apply U only after Bob applied V, Bob would be left with the state  $$ \frac{j \left|0\right\rangle – k \left|1\right\rangle}{\sqrt{2}}.$$ But Bob can distinguish these two states with certainty, for example by applying the unitary $$ \frac{1}{\sqrt{2}}\left(\begin{array}[c]{cc}j & k\\k & j\end{array}\right). $$ Therefore Alice communicated a bit to Bob. I’m aware that there’s a whole literature on quaternionic QM, including for example a book by Adler.  
Would anyone who knows that literature be kind enough to enlighten us on how it proposes to escape the signaling problem?  Regardless of the answer, though, it seems worth knowing that the “naïve” version of quaternionic QM—i.e., the version that gets invoked in quantum information discussions like the ones I mentioned above—is just immediately blasted to smithereens by the signaling problem, without the need for any subtle considerations like the ones that differentiate real from complex QM. Update (Dec. 20): In response to this post, Stephen Adler was kind enough to email me with further details about his quaternionic QM proposal, and to allow me to share them here. Briefly, Adler completely agrees that quaternionic QM inevitably leads to superluminal signaling—but in his proposal, the surprising and nontrivial part is that quaternionic QM would reduce to standard, complex QM at large distances. In particular, the strength of a superluminal signal would fall off exponentially with distance, quickly becoming negligible beyond the Planck or grand unification scales. Despite this, Adler says that he eventually abandoned his proposal for quaternionic QM, since he was unable to make specific particle physics ideas work out (but the quaternionic QM proposal then influenced his later work). Unrelated Update (Dec. 18): Probably many of you have already seen it, and/or already know what it covers, but the NYT profile of Donald Knuth (entitled “The Yoda of Silicon Valley”) is enjoyable and nicely written. The NP genie December 11th, 2018 Hi from the Q2B conference! Every nerd has surely considered the scenario where an all-knowing genie—or an enlightened guru, or a superintelligent AI, or God—appears and offers to answer any question of your choice.  (Possibly subject to restrictions on the length or complexity of the question, to prevent glomming together every imaginable question.)  What do you ask? (Standard joke: “What question should I ask, oh wise master, and what is its answer?”  “The question you should ask me is the one you just asked, and its answer is the one I am giving.”) The other day, it occurred to me that theoretical computer science offers a systematic way to generate interesting variations on the genie scenario, which have been contemplated less—variations where the genie is no longer omniscient, but “merely” more scient than any entity that humankind has ever seen.  One simple example, which I gather is often discussed in the AI-risk and rationality communities, is an oracle for the halting problem: what computer program can you write, such that knowing whether it halts would provide the most useful information to civilization?  Can you solve global warming with such an oracle?  Cure cancer? But there are many other examples.  Here’s one: suppose what pops out of your lamp is a genie for NP questions.  Here I don’t mean NP in the technical sense (that would just be a pared-down version of the halting genie discussed above), but in the human sense.  The genie can only answer questions by pointing you to ordinary evidence that, once you know where to find it, makes the answer to the question clear to every competent person who examines the evidence, with no further need to trust the genie.  Or, of course, the genie could fail to provide such evidence, which itself provides the valuable information that there’s no such evidence out there. 
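One more footnote on the signaling example above (my own illustrative check, not taken from Adler or the other papers mentioned): the entire effect comes down to the fact that quaternions don’t commute, which a few lines of code make explicit.

```python
# Hamilton product for quaternions written as (w, x, y, z) = w + xi + yj + zk.
def qmul(p, q):
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
print(qmul(i, j))   # (0, 0, 0, 1)  = +k
print(qmul(j, i))   # (0, 0, 0, -1) = -k

# So Bob's two possible states, (j|0> + k|1>)/sqrt(2) and (j|0> - k|1>)/sqrt(2),
# differ only in that sign.  Their quaternionic inner product,
# conj(j)*j + conj(k)*(-k) = 1 - 1 = 0, vanishes: the states are orthogonal,
# hence perfectly distinguishable, which is exactly what the signaling example exploits.
```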
More-or-less equivalently (because of binary search), the genie could do what my parents used to do when my brother and I searched the house for Hanukkah presents, and give us “hotter” or “colder” hints as we searched for the evidence ourselves. To make things concrete, let’s assume that the NP genie will only provide answers of 1000 characters or fewer, in plain English text with no fancy encodings.  Here are the candidates for NP questions that I came up with after about 20 seconds of contemplation: • Which pieces of physics beyond the Standard Model and general relativity can be experimentally confirmed with the technology of 2018? What are the experiments we need to do? • What’s the current location of the Ark of the Covenant, or its remains, if any still exist?  (Similar: where can we dig to find physical records, if any exist, pertaining to the Exodus from Egypt, or to Jesus of Nazareth?) • What’s a sketch of a resolution of P vs. NP, from which experts would stand a good chance of filling in the details?  (Similar for other any famous unsolved math problem.) • Where, if anywhere, can we point radio telescopes to get irrefutable evidence for the existence of extraterrestrial life? • What happened to Malaysia Flight 370, and where are the remains by which it could be verified?  (Similar for Amelia Earhart.) • Where, if anywhere, can we find intact DNA of non-avian dinosaurs? Which NP questions would you ask the genie?  And what other complexity-theoretic genies would be interesting to consider?  (I thought briefly about a ⊕P genie, but I’m guessing that the yearning to know whether the number of sand grains in the Sahara is even or odd is limited.) Update: I just read Lenny Susskind’s Y Combinator interview, and found it delightful—pure Lenny, and covering tons of ground that should interest anyone who reads this blog. Airport idiocy November 28th, 2018 On Sunday, I returned to Austin with Dana and the kids from Thanksgiving in Pennsylvania.  The good news is that I didn’t get arrested this time, didn’t mistake any tips for change, and didn’t even miss the flight!  But I did experience two airports that changed decisively for the worse. In Newark Terminal C—i.e., one of the most important terminals of one of the most important airports in the world—there’s now a gigantic wing without a single restaurant or concession stand that, quickly and for a sane price, serves the sort of food that a child (say) might plausibly want to eat.  No fast food, not even an Asian place with rice and teriyaki to go.  Just one upscale eatery after the next, with complicated artisanal foods at brain-exploding prices, and—crucially—“servers” who won’t even acknowledge or make eye contact with the customers, because you have to do everything through a digital ordering system that gives you no idea how long the food might take to be ready, and whether your flight is going to board first.  The experience was like waking up in some sci-fi dystopia, where all the people have been removed from a familiar environment and replaced with glassy-eyed cyborgs.  And had we not thought to pack a few snacks with us, our kids would’ve starved. 
Based on this and other recent experiences, I propose the following principle: if a customer’s digitally-mediated order to your company is eventually going to need to get processed by a human being anyhow—a fallible human who could screw things up—and if you’re less competent at designing user interfaces than Amazon (which means: anyone other than Amazon), then you must make it easy for the customer to talk to one of the humans behind the curtain.  Besides making the customer happy, such a policy is good business, since when you do screw things up due to miscommunications caused by poor user interfaces—and you will—it will be on you to fix things anyway, which will eat into your profit margin.  To take another example, besides Newark Terminal C, all these comments apply with 3000% force to the delivery service DoorDash. Returning to airports, though: whichever geniuses ruined Terminal C at Newark are amateurs compared to those in my adopted home city of Austin.  Austin-Bergstrom International Airport (ABIA) chose Thanksgiving break—i.e., the busiest travel time of the year—to roll out a universally despised redesign where you now need to journey for an extra 5-10 minutes (or 15 with screaming kids in tow), up and down elevators and across three parking lots, to reach the place where taxis and Ubers are.  The previous system was that you simply walked out of the terminal, crossed one street, and the line of taxis was there. Supposedly this is to “reduce congestion” … except that, compared to other airports, ABIA never had any significant congestion caused by taxis.  I’d typically be the only person walking to them at a given time, or I’d join a line of just 3 or 4 people.  Nor does this do anything for the environment, since the city of Austin has no magical alternative, no subway or monorail to whisk you from the airport to downtown.  Just as many people will need a taxi or Uber as before; the only difference is that they’ll need to go ten times further out of their way as they’d need to go at a ten times busier airport.  For new visitors, this means their first experience of Austin will be one of confusion and anger; for Austin residents who fly a few times per month, it means that days or weeks have been erased from their lives.  From the conversations I’ve had so far, it appears that every single passenger of ABIA, and every single taxi and Uber driver, is livid about the change.  With one boneheaded decision, ABIA singlehandedly made Austin a less attractive place to live and work. Postscript I.  But if you’re a prospective grad student, postdoc, or faculty member, you should still come to UT!  The death of reason, and the triumph of the blank-faced bureaucrats, is a worldwide problem, not something in any way unique to Austin. Postscript II.  No, I don’t harbor any illusions that posts like this, or anything else I can realistically say or do, will change anything for the better, at my local airport let alone in the wider world.  Indeed, I sometimes wonder whether, for the bureaucrats, the point of ruining facilities and services that thousands rely on is precisely to grind down people’s sense of autonomy, to make them realize the futility of argument and protest.  Even so, if someone responsible for the doofus decisions in question happened to come across this post, and if they felt even the tiniest twinge of fear or guilt, felt like their victory over common sense wouldn’t be quite as easy or painless as they’d hoped—well, that would be reason enough for the post. 
November 22nd, 2018
Happy Thanksgiving!
Can we/should we teach Quantum Theory in Junior High?
by Terry Rudolph
Should we? Reasons which suggest the answer is “yes” include:
Can we? A pedagogical method covering nontrivial quantum theory using only basic arithmetic:
• No math more than basic arithmetic and distribution across brackets.
• Be interpretationally neutral.
Ten updates
November 7th, 2018
October 2nd, 2018
A visualisation of a solution to the two-dimensional heat equation with temperature represented by the third dimension

In mathematics, a partial differential equation (PDE) is a differential equation that contains unknown multivariable functions and their partial derivatives. PDEs are used to formulate problems involving functions of several variables, and are either solved by hand, or used to create a computer model. A special case is ordinary differential equations (ODEs), which deal with functions of a single variable and their derivatives.

PDEs can be used to describe a wide variety of phenomena such as sound, heat, diffusion, electrostatics, electrodynamics, fluid dynamics, elasticity, or quantum mechanics. These seemingly distinct physical phenomena can be formalised similarly in terms of PDEs. Just as ordinary differential equations often model one-dimensional dynamical systems, partial differential equations often model multidimensional systems. PDEs find their generalisation in stochastic partial differential equations.

Partial differential equations (PDEs) are equations that involve rates of change with respect to continuous variables. For example, the position of a rigid body is specified by six parameters,[1] but the configuration of a fluid is given by the continuous distribution of several parameters, such as the temperature, pressure, and so forth. The dynamics for the rigid body take place in a finite-dimensional configuration space; the dynamics for the fluid occur in an infinite-dimensional configuration space. This distinction usually makes PDEs much harder to solve than ordinary differential equations (ODEs), but here again, there will be simple solutions for linear problems. Classic domains where PDEs are used include acoustics, fluid dynamics, electrodynamics, and heat transfer.

A partial differential equation (PDE) for the function u(x_1, …, x_n) is an equation of the form

f(x_1, …, x_n; u, ∂u/∂x_1, …, ∂u/∂x_n; ∂²u/∂x_1∂x_1, …) = 0.

If f is a linear function of u and its derivatives, then the PDE is called linear. Common examples of linear PDEs include the heat equation, the wave equation, Laplace's equation, Helmholtz equation, Klein–Gordon equation, and Poisson's equation.

A relatively simple PDE is

∂u(x,y)/∂x = 0.

This relation implies that the function u(x,y) is independent of x. However, the equation gives no information on the function's dependence on the variable y. Hence the general solution of this equation is

u(x,y) = f(y),

where f is an arbitrary function of y. The analogous ordinary differential equation is

du(x)/dx = 0,

which has the solution

u(x) = c,

where c is any constant value. These two examples illustrate that general solutions of ordinary differential equations (ODEs) involve arbitrary constants, but solutions of PDEs involve arbitrary functions. A solution of a PDE is generally not unique; additional conditions must generally be specified on the boundary of the region where the solution is defined. For instance, in the simple example above, the function f(y) can be determined if u is specified on the line x = 0.

Existence and uniqueness

Although the issue of existence and uniqueness of solutions of ordinary differential equations has a very satisfactory answer with the Picard–Lindelöf theorem, that is far from the case for partial differential equations. The Cauchy–Kowalevski theorem states that the Cauchy problem for any partial differential equation whose coefficients are analytic in the unknown function and its derivatives has a locally unique analytic solution.
Although this result might appear to settle the existence and uniqueness of solutions, there are examples of linear partial differential equations whose coefficients have derivatives of all orders (which are nevertheless not analytic) but which have no solutions at all: see Lewy (1957). Even if the solution of a partial differential equation exists and is unique, it may nevertheless have undesirable properties. The mathematical study of these questions is usually in the more powerful context of weak solutions.

An example of pathological behavior is the sequence (depending upon n) of Cauchy problems for the Laplace equation

∂²u/∂x² + ∂²u/∂y² = 0,

with boundary conditions

u(x, 0) = 0,   ∂u/∂y (x, 0) = sin(nx)/n,

where n is an integer. The derivative of u with respect to y approaches zero uniformly in x as n increases, but the solution is

u(x, y) = sinh(ny) sin(nx)/n².

This solution approaches infinity if nx is not an integer multiple of π for any non-zero value of y. The Cauchy problem for the Laplace equation is called ill-posed or not well-posed, since the solution does not continuously depend on the data of the problem. Such ill-posed problems are not usually satisfactory for physical applications. The existence of solutions for the Navier–Stokes equations, a partial differential equation, is part of one of the Millennium Prize Problems.

In PDEs, it is common to denote partial derivatives using subscripts. That is:

u_x = ∂u/∂x,   u_xx = ∂²u/∂x²,   u_xy = ∂²u/∂y∂x.

Especially in physics, del or nabla (∇) is often used to denote spatial derivatives, and u̇, ü for time derivatives. For example, the wave equation (described below) can be written as

ü = c²Δu,

where Δ is the Laplace operator.

Some linear, second-order partial differential equations can be classified as parabolic, hyperbolic and elliptic. Others, such as the Euler–Tricomi equation, have different types in different regions. The classification provides a guide to appropriate initial and boundary conditions and to the smoothness of the solutions.

Equations of first order

Linear equations of second order

Assuming u_xy = u_yx, the general linear second-order PDE in two independent variables has the form

A u_xx + 2B u_xy + C u_yy + (lower-order terms) = 0,

where the coefficients A, B, C… may depend upon x and y. If A² + B² + C² > 0 over a region of the xy-plane, the PDE is second-order in that region. This form is analogous to the equation for a conic section:

Ax² + 2Bxy + Cy² + ⋯ = 0.

More precisely, replacing ∂/∂x by X, and likewise for other variables (formally this is done by a Fourier transform), converts a constant-coefficient PDE into a polynomial of the same degree, with the top degree (a homogeneous polynomial, here a quadratic form) being most significant for the classification.

Just as one classifies conic sections and quadratic forms into parabolic, hyperbolic, and elliptic based on the discriminant B² − 4AC, the same can be done for a second-order PDE at a given point. However, the discriminant in a PDE is given by B² − AC due to the convention of the xy term being 2B rather than B; formally, the discriminant (of the associated quadratic form) is (2B)² − 4AC = 4(B² − AC), with the factor of 4 dropped for simplicity.

1. B² − AC < 0 (elliptic partial differential equation): Solutions of elliptic PDEs are as smooth as the coefficients allow, within the interior of the region where the equation and solutions are defined. For example, solutions of Laplace's equation are analytic within the domain where they are defined, but solutions may assume boundary values that are not smooth. The motion of a fluid at subsonic speeds can be approximated with elliptic PDEs, and the Euler–Tricomi equation is elliptic where x < 0.
2. B² − AC = 0 (parabolic partial differential equation): Equations that are parabolic at every point can be transformed into a form analogous to the heat equation by a change of independent variables. Solutions smooth out as the transformed time variable increases. The Euler–Tricomi equation has parabolic type on the line where x = 0.

3. B² − AC > 0 (hyperbolic partial differential equation): hyperbolic equations retain any discontinuities of functions or derivatives in the initial data. An example is the wave equation. The motion of a fluid at supersonic speeds can be approximated with hyperbolic PDEs, and the Euler–Tricomi equation is hyperbolic where x > 0.

If there are n independent variables x_1, x_2, …, x_n, a general linear partial differential equation of second order has the form

Lu = Σ_{i,j} a_{i,j} ∂²u/∂x_i∂x_j + (lower-order terms) = 0.

The classification depends upon the signature of the eigenvalues of the coefficient matrix a_{i,j}.

1. Elliptic: the eigenvalues are all positive or all negative.
2. Parabolic: the eigenvalues are all positive or all negative, save one that is zero.
3. Hyperbolic: there is only one negative eigenvalue and all the rest are positive, or there is only one positive eigenvalue and all the rest are negative.
4. Ultrahyperbolic: there is more than one positive eigenvalue and more than one negative eigenvalue, and there are no zero eigenvalues. There is only a limited theory for ultrahyperbolic equations (Courant and Hilbert, 1962).

Systems of first-order equations and characteristic surfaces

The classification of partial differential equations can be extended to systems of first-order equations, where the unknown u is now a vector with m components, and the coefficient matrices A_ν are m by m matrices for ν = 1, 2, …, n. The partial differential equation takes the form

Lu = Σ_ν A_ν ∂u/∂x_ν + B = 0,

where the coefficient matrices A_ν and the vector B may depend upon x and u. If a hypersurface S is given in the implicit form

φ(x_1, x_2, …, x_n) = 0,

where φ has a non-zero gradient, then S is a characteristic surface for the operator L at a given point if the characteristic form vanishes:

Q(∂φ/∂x_1, …, ∂φ/∂x_n) = det[ Σ_ν A_ν ∂φ/∂x_ν ] = 0.

The geometric interpretation of this condition is as follows: if data for u are prescribed on the surface S, then it may be possible to determine the normal derivative of u on S from the differential equation. If the data on S and the differential equation determine the normal derivative of u on S, then S is non-characteristic. If the data on S and the differential equation do not determine the normal derivative of u on S, then the surface is characteristic, and the differential equation restricts the data on S: the differential equation is internal to S.

1. A first-order system Lu = 0 is elliptic if no surface is characteristic for L: the values of u on S and the differential equation always determine the normal derivative of u on S.
2. A first-order system is hyperbolic at a point if there is a spacelike surface S with normal ξ at that point. This means that, given any non-trivial vector η orthogonal to ξ, and a scalar multiplier λ, the equation Q(λξ + η) = 0 has m real roots λ_1, λ_2, …, λ_m. The system is strictly hyperbolic if these roots are always distinct.

The geometrical interpretation of this condition is as follows: the characteristic form Q(ζ) = 0 defines a cone (the normal cone) with homogeneous coordinates ζ. In the hyperbolic case, this cone has m sheets, and the axis ζ = λξ runs inside these sheets: it does not intersect any of them. But when displaced from the origin by η, this axis intersects every sheet. In the elliptic case, the normal cone has no real sheets.
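As a concrete illustration of the two-variable classification described above (a sketch added here for illustration; it is not part of the original article), the discriminant test B² − AC can be written directly as code:

```python
# Classify A u_xx + 2B u_xy + C u_yy + (lower-order terms) = 0 at a point
# from the sign of the discriminant B^2 - AC.
def classify(A, B, C):
    disc = B * B - A * C
    if disc < 0:
        return "elliptic"
    if disc == 0:
        return "parabolic"
    return "hyperbolic"

print(classify(1, 0, 1))     # Laplace equation u_xx + u_yy = 0            -> elliptic
print(classify(1, 0, 0))     # heat equation u_t = u_xx (2nd-order part)   -> parabolic
print(classify(1, 0, -1))    # wave equation u_tt - u_xx = 0 (with c = 1)  -> hyperbolic
print(classify(1, 0, -0.5))  # Euler-Tricomi u_xx = x u_yy at x = 0.5      -> hyperbolic
print(classify(1, 0, 0.5))   # Euler-Tricomi u_xx = x u_yy at x = -0.5     -> elliptic
```

The same idea extends to the n-variable case by examining the signs of the eigenvalues of the coefficient matrix a_{i,j}, as in the list above.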
Equations of mixed type

If a PDE has coefficients that are not constant, it is possible that it will not belong to any of these categories but rather be of mixed type. A simple but important example is the Euler–Tricomi equation

u_xx = x u_yy,

which is called elliptic-hyperbolic because it is elliptic in the region x < 0, hyperbolic in the region x > 0, and degenerate parabolic on the line x = 0.

Infinite-order PDEs in quantum mechanics

In the phase space formulation of quantum mechanics, one may consider the quantum Hamilton's equations for trajectories of quantum particles. These equations are infinite-order PDEs. However, in the semiclassical expansion, one has a finite system of ODEs at any fixed order of ħ. The evolution equation of the Wigner function is also an infinite-order PDE. The quantum trajectories are quantum characteristics, with the use of which one could calculate the evolution of the Wigner function.

Analytical solutions

Separation of variables

Linear PDEs can be reduced to systems of ordinary differential equations by the important technique of separation of variables. This technique rests on a characteristic of solutions to differential equations: if one can find any solution that solves the equation and satisfies the boundary conditions, then it is the solution (this also applies to ODEs). We assume as an ansatz that the dependence of a solution on the parameters space and time can be written as a product of terms that each depend on a single parameter, and then see if this can be made to solve the problem.[2]

In the method of separation of variables, one reduces a PDE to a PDE in fewer variables, which is an ordinary differential equation if in one variable – these are in turn easier to solve. This is possible for simple PDEs, which are called separable partial differential equations, and the domain is generally a rectangle (a product of intervals). Separable PDEs correspond to diagonal matrices – thinking of "the value for fixed x" as a coordinate, each coordinate can be understood separately. This generalizes to the method of characteristics, and is also used in integral transforms.

Method of characteristics

In special cases, one can find characteristic curves on which the equation reduces to an ODE – changing coordinates in the domain to straighten these curves allows separation of variables, and is called the method of characteristics. More generally, one may find characteristic surfaces.

Integral transform

An integral transform may transform the PDE to a simpler one, in particular, a separable PDE. This corresponds to diagonalizing an operator. An important example of this is Fourier analysis, which diagonalizes the heat equation using the eigenbasis of sinusoidal waves. If the domain is finite or periodic, an infinite sum of solutions such as a Fourier series is appropriate, but an integral of solutions such as a Fourier integral is generally required for infinite domains. The solution for a point source for the heat equation given above is an example of the use of a Fourier integral.

Change of variables

Often a PDE can be reduced to a simpler form with a known solution by a suitable change of variables.
For example, the Black–Scholes PDE is reducible to the heat equation by a change of variables (for complete details see Solution of the Black Scholes Equation at the Wayback Machine (archived April 11, 2008)).

Fundamental solution

Inhomogeneous equations can often be solved (for constant coefficient PDEs, always be solved) by finding the fundamental solution (the solution for a point source), then taking the convolution with the boundary conditions to get the solution. This is analogous in signal processing to understanding a filter by its impulse response.

Superposition principle

The superposition principle applies to any linear system, including linear systems of PDEs. A common visualization of this concept is the interaction of two waves in phase being combined to result in a greater amplitude, for example sin x + sin x = 2 sin x. The same principle can be observed in PDEs where the solutions may be real or complex and additive. Superposition: if u_1 and u_2 are solutions of a linear PDE in some function space R, then u = c_1 u_1 + c_2 u_2, with any constants c_1 and c_2, is also a solution of that PDE in the same function space.

Methods for non-linear equations

There are no generally applicable methods to solve nonlinear PDEs. Still, existence and uniqueness results (such as the Cauchy–Kowalevski theorem) are often possible, as are proofs of important qualitative and quantitative properties of solutions (getting these results is a major part of analysis). Computational solutions to nonlinear PDEs, such as the split-step method, exist for specific equations like the nonlinear Schrödinger equation.

Nevertheless, some techniques can be used for several types of equations. The h-principle is the most powerful method to solve underdetermined equations. The Riquier–Janet theory is an effective method for obtaining information about many analytic overdetermined systems.

The method of characteristics (similarity transformation method) can be used in some very special cases to solve partial differential equations. In some cases, a PDE can be solved via perturbation analysis in which the solution is considered to be a correction to an equation with a known solution. Alternatives are numerical analysis techniques from simple finite difference schemes to the more mature multigrid and finite element methods. Many interesting problems in science and engineering are solved in this way using computers, sometimes high performance supercomputers.

Lie group method

From 1870 Sophus Lie's work put the theory of differential equations on a more satisfactory foundation. He showed that the integration theories of the older mathematicians can, by the introduction of what are now called Lie groups, be referred to a common source; and that ordinary differential equations which admit the same infinitesimal transformations present comparable difficulties of integration. He also emphasized the subject of transformations of contact.

A general approach to solving PDEs uses the symmetry property of differential equations, the continuous infinitesimal transformations of solutions to solutions (Lie theory). Continuous group theory, Lie algebras and differential geometry are used to understand the structure of linear and nonlinear partial differential equations for generating integrable equations, to find their Lax pairs, recursion operators, and Bäcklund transforms, and finally to find exact analytic solutions to the PDE.
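Before continuing with symmetry methods, here is a short symbolic check of the superposition principle stated above (an illustrative sketch, not part of the original article); the two building-block solutions are exactly the product-form solutions that separation of variables yields for the heat equation u_t = u_xx:

```python
# Two separated solutions of u_t = u_xx, and any linear combination of them,
# all satisfy the equation symbolically.
import sympy as sp

x, t, c1, c2 = sp.symbols("x t c1 c2")
u1 = sp.exp(-t) * sp.sin(x)          # separated solution X(x)T(t)
u2 = sp.exp(-4 * t) * sp.sin(2 * x)  # another separated solution
for u in (u1, u2, c1 * u1 + c2 * u2):
    residual = sp.diff(u, t) - sp.diff(u, x, 2)
    print(sp.simplify(residual))     # prints 0 in each case
```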
Symmetry methods have been applied to study differential equations arising in mathematics, physics, engineering, and many other disciplines.

Semianalytical methods

The Adomian decomposition method, the Lyapunov artificial small parameter method, and He's homotopy perturbation method are all special cases of the more general homotopy analysis method. These are series expansion methods, and except for the Lyapunov method, are independent of small physical parameters as compared to the well known perturbation theory, thus giving these methods greater flexibility and solution generality.

Numerical solutions

The three most widely used numerical methods to solve PDEs are the finite element method (FEM), finite volume methods (FVM) and finite difference methods (FDM), as well as other kinds of methods called meshfree methods, which were developed to solve problems for which the aforementioned methods are limited. The FEM has a prominent position among these methods and especially its exceptionally efficient higher-order version hp-FEM. Other hybrid versions of FEM and meshfree methods include the generalized finite element method (GFEM), extended finite element method (XFEM), spectral finite element method (SFEM), meshfree finite element method, discontinuous Galerkin finite element method (DGFEM), Element-Free Galerkin Method (EFGM), Interpolating Element-Free Galerkin Method (IEFGM), etc.

Finite element method

The finite element method (FEM) (its practical application often known as finite element analysis (FEA)) is a numerical technique for finding approximate solutions of partial differential equations (PDE) as well as of integral equations. The solution approach is based either on eliminating the differential equation completely (steady state problems), or rendering the PDE into an approximating system of ordinary differential equations, which are then numerically integrated using standard techniques such as Euler's method, Runge–Kutta, etc.

Finite difference method

Finite-difference methods are numerical methods for approximating the solutions to differential equations using finite difference equations to approximate derivatives.

Finite volume method

Similar to the finite difference method or finite element method, values are calculated at discrete places on a meshed geometry. "Finite volume" refers to the small volume surrounding each node point on a mesh. In the finite volume method, surface integrals in a partial differential equation that contain a divergence term are converted to volume integrals, using the divergence theorem. These terms are then evaluated as fluxes at the surfaces of each finite volume. Because the flux entering a given volume is identical to that leaving the adjacent volume, these methods conserve mass by design.

See also

References

1. ^ Sciavicco, Lorenzo; Siciliano, Bruno (2001-02-19). Modelling and Control of Robot Manipulators. Springer Science & Business Media. ISBN 9781852332211.
2. ^ Gershenfeld, Neil (2000). The Nature of Mathematical Modeling (Reprinted (with corr.) ed.). Cambridge: Cambridge Univ. Press. p. 27. ISBN 0521570956.
• Adomian, G. (1994), Solving Frontier Problems of Physics: The Decomposition Method, Kluwer Academic Publishers.
• Courant, R. & Hilbert, D. (1962), Methods of Mathematical Physics, II, New York: Wiley-Interscience.
• Evans, L. C. (1998), Partial Differential Equations, Providence: American Mathematical Society, ISBN 0-8218-0772-2.
• Drábek, Pavel & Holubová, Gabriela (2007), Elements of Partial Differential Equations ([Online-Ausg.] ed.), Berlin: de Gruyter, ISBN 9783110191240.
• Ibragimov, Nail H. (1993), CRC Handbook of Lie Group Analysis of Differential Equations, Vol. 1–3, Providence: CRC Press, ISBN 0-8493-4488-3.
• John, F. (1982), Partial Differential Equations (4th ed.), New York: Springer-Verlag, ISBN 0-387-90609-6.
• Jost, J. (2002), Partial Differential Equations, New York: Springer-Verlag, ISBN 0-387-95428-7.
• Lewy, Hans (1957), "An example of a smooth linear partial differential equation without solution", Annals of Mathematics, Second Series, 66 (1): 155–158, doi:10.2307/1970121.
• Liao, S.J. (2003), Beyond Perturbation: Introduction to the Homotopy Analysis Method, Boca Raton: Chapman & Hall/CRC Press, ISBN 1-58488-407-X.
• Olver, P.J. (1995), Equivalence, Invariants and Symmetry, Cambridge Press.
• Petrovskii, I. G. (1967), Partial Differential Equations, Philadelphia: W. B. Saunders Co.
• Pinchover, Y. & Rubinstein, J. (2005), An Introduction to Partial Differential Equations, New York: Cambridge University Press, ISBN 0-521-84886-5.
• Polyanin, A. D. (2002), Handbook of Linear Partial Differential Equations for Engineers and Scientists, Boca Raton: Chapman & Hall/CRC Press, ISBN 1-58488-299-9.
• Polyanin, A. D. & Zaitsev, V. F. (2004), Handbook of Nonlinear Partial Differential Equations, Boca Raton: Chapman & Hall/CRC Press, ISBN 1-58488-355-3.
• Polyanin, A. D.; Zaitsev, V. F. & Moussiaux, A. (2002), Handbook of First Order Partial Differential Equations, London: Taylor & Francis, ISBN 0-415-27267-X.
• Solin, P. (2005), Partial Differential Equations and the Finite Element Method, Hoboken, NJ: J. Wiley & Sons, ISBN 0-471-72070-4.
• Solin, P.; Segeth, K. & Dolezel, I. (2003), Higher-Order Finite Element Methods, Boca Raton: Chapman & Hall/CRC Press, ISBN 1-58488-438-X.
• Stephani, H. (1989), Differential Equations: Their Solution Using Symmetries, edited by M. MacCallum, Cambridge University Press.
• Wazwaz, Abdul-Majid (2009), Partial Differential Equations and Solitary Waves Theory, Higher Education Press, ISBN 978-3-642-00251-9.
• Wazwaz, Abdul-Majid (2002), Partial Differential Equations Methods and Applications, A.A. Balkema, ISBN 90-5809-369-7.
• Zwillinger, D. (1997), Handbook of Differential Equations (3rd ed.), Boston: Academic Press, ISBN 0-12-784395-7.
• Gershenfeld, N. (1999), The Nature of Mathematical Modeling (1st ed.), New York: Cambridge University Press, ISBN 0-521-57095-6.
• Krasil'shchik, I.S. & Vinogradov, A.M., Eds. (1999), Symmetries and Conservation Laws for Differential Equations of Mathematical Physics, American Mathematical Society, Providence, Rhode Island, USA, ISBN 0-8218-0958-X.
• Krasil'shchik, I.S.; Lychagin, V.V. & Vinogradov, A.M. (1986), Geometry of Jet Spaces and Nonlinear Partial Differential Equations, Gordon and Breach Science Publishers, New York, London, Paris, Montreux, Tokyo, ISBN 2-88124-051-8.
• Vinogradov, A.M. (2001), Cohomological Analysis of Partial Differential Equations and Secondary Calculus, American Mathematical Society, Providence, Rhode Island, USA, ISBN 0-8218-2922-X.

Further reading

External links
söndag 2 maj 2010 TGD and D.Rakovic. The quantum field body, a comprision. The problem of superpositions and quantum field body. A comparision between two models. Both TGD and the Rakovic model are based on p-V pair corresponding to the geometric, classic existence and is replaced with generalized force-generalized coordinate pairs in quantum fluctuating degrees of freedom, the same as in Popps model. Quark - N pair corresponds to 'objective existence' defined by quantum histories and N is generalized to a number of particle like excitations in the photon number state resulting in the state preparation. Rakovic talks of biomolecules where Pitkänen talks of particles and dark matter. The difference between the models is in the consciousness mechanism, mainly. In TGD cognition and intention are linked to p-adic primes; Rakovic see no need to talk of dark matter and primes, only complex valued quantum field holography. How that holography arise remain unclear to me. "I suggest you to read my attached paper on integrative medicine to see my viewpoint on the origin of quantum-coherent states in acupuncture system/consciousness", said Dejan Rakovic in a mail, regarding the superconduction in biology. Quantum-informational medicine and quantum-holographic informatics, with special reference to numerous psychosomatic-cognitive implications, is the approach 'in developed countries', he says. "Psychosomatic diseases indicates the necessity of application of holistic methods, implying their macroscopic quantum origin. The most intelligent way is the treatment of man as a whole and not diseases as symptoms of disorders of this wholeness. The focus of these quantum-holistic methods are the body’s acupuncture system and consciousness, incl. quantum-informational medicine with quantum-holographic informatics - with surprisingly significant psychosomatic and cognitive implications." Rakovic call his model the biophysical quantum-holographic/quantum-relativistic model of consciousness. TGD involves a quite far-reaching generalization of the space-time concept and, apart from the notion of quantum jump, reduces quantum theory to in finite-dimensional geometry, which is highly unique from the mere requirement that it exists. Quantum TGD requires the introduction of several new mathematical tools and concepts. Dark matter hierarchy with p-adic primes in a scaled up topological way. TGD is also a classic physical theory based on 'the wholy trinity' from the elemental particle physics. The emergence of the notions of zero energy ontology and hierarchy of Planck constants together with the increased understanding of the special features of number theoretical universality have led to a considerable deepening of the understanding during last half decade. Can this new physics give the answer? What are the similarities between TGD and Rakovic model? Non-locality (infiniteness) in both space and time is given in quantum TGD. It is the interaction between non-locality and locality that make up the interactions in the (quantal) magnetic body. This means also that the time concept will change, which is behind the consciousness. Quantum information and holography. Rakovic says: "The quantum-informational structure with memory attractors as a possible quantum-holographic informational basis of psychosomatic diseases." Then what is these structures with attractors? To get an answer on that we need to look at quantum non-material memory (from quantum computer science). Quantum memory. 
The first Bose-Einstein condensates were generated in 1995. A Bose–Einstein condensate represents the most 'classical' form of a matter wave, just as an optical laser emits the most classical form of an electromagnetic wave. Optical lattices are interference patterns comprised of laser beams, which are shone onto the atomic cloud and force their periodic structure onto it, with the creation of crystal-like formations, as results. Atoms and molecules move completely freely and randomly in gases unlike they do in solids. All vectors are free? The interesting aspect is that the movement of the atoms in an optical lattice within a quantum gas is similar to the behavior of electrons in solid bodies. Quantum gases are thus able to simulate the physical properties of solid bodies. After ultracold atoms are maneuvered into superpositions they are released to allow interference (noise-production) of each atom's two 'selves' (wave and matter). They are then illuminated with light, which casts a shadow, revealing a characteristic interference pattern, with red representing higher atom density. The variations in density are caused by the alternating constructive and destructive interference between the two 'parts' of each atom, magnified by thousands of atoms acting in unison. Each site is split into two wells, about 400 nanometers apart. Under the rules of the quantum world, the atom doesn't choose between the two sites but rather assumes a "superposition," located in both places simultaneously. From National Institute of Standards and Technology, NIST So, in quantum world are no choises (no 'environment' because of the non-locality?). The free will, the intention, where does these things emerge? This is true for matter, but certainly also for living matter. Quantum computing relies on the transmission of entangled qubits, or 'quantum bits', over distance in the form of photons. As photons travel along the lines they degrade steadily. Therefore, a device that can read and retransmit the signal would allow for the transmission of quantum data over any distance. This is what happens in the meridians in Popps model. The photons functions as solitons. The same function as in the nerve pulse of TGD and the Heimburg group. An optical lattice uses laser beams of different frequencies, shining in the same area, to create energy 'buckets' that hold atoms in place. The lattice can be tuned to place atoms where they are desired, and it can be used to extend the lifetime of quantum memory. The Georgia Tech team. The red spheres are atoms sitting in the energy 'buckets' created by interfering laser beams. Wikipedia. To encode information onto the atoms, a laser beam carrying a signal is shined on the array of atoms confined to the lattice. Each atom stores a portion of the quantum information, which is sensitive to the relative locations of the atoms. The lattice, therefore, allows the information to be stored longer by preserving the locations of the atoms more effectively. This memory is physical (atomic)? The quantum memory is degrees of freedom on an atom. These energy 'buckets' are the same looking as the cells cytoskeleton tensions inducing 'buckets' on the environment in the extracellular matrix (Ingber 1998). So memory can be physical, stored in tensions in the tissues? Physical + quantal memory. 
Cleland group do research on quantum integrated circuits, based on the Josephson phase qubit, a superconducting device whose operation relies on the physics of the Josephson junction (low electrical impedance and relatively large capacitance). Other groups have used spin etc. The qubit is coupled = entangled groups of degrees of freedoms. This was an extension, but I think it was necessary to understand the quantum memory, at least for me. Rakovic: "Peruš’s theoretical investigations show that any quantum system has formal mathematical structure of quantum-holographic Hopfield-like neural network . Then memory attractors of the acupuncture network can be treated as psychosomatic disorders representing EM MW/ULF-modulated (quantum)holistic records = gives a biophysical basis of (quantum)holistic local psychosomatics? (removed only holistically + extreme efficiency of the quantum-informational therapies, that consequently erase the very information of the psychosomatic disorders). And according to the Tibetan traditional medicine, an acupuncture procedure must be repeated every several months – presumably as a consequence of restituted patient’s mental loads from his mental-transpersonal-environment of closely related relatives/enemies/deceased, that remained non-reprogrammed on the level of quantum-holographic collective consciousness." So, if one changed, then also the other, but in an iniert way, as life itself is iniert. The memory is iniert. The mental aspect is most important. It is (quantum)holistic. Is memory an EM-system or even quantal? And a memory can be a collective/individual stress trapped in the tissue network. Pitkänen: "The identification of quantum jump between quantum histories as moment of consciousness defines microscopic theory of consciousness whereas the notions of self and self hierarchy allow to understand macroscopic aspects of consciousness absolutely essential for brain consciousness. Self is identified as a sub-system effectively behaving like its own subuniverse quantum mechanically, and physically self is a sub-system able to not generate bound state entanglement with environment during subsequent quantum jumps. Also self measurements are possible leading to a completely unentangled state." History requres memory and memory requires a self. And we have two self, one quantal and one physical. The quantal self has the memories? Also two consciousnesses. Rakovic: "For modeling most cognitive and especially psychosomatic functions the subtle biophysical acupuncture neural networks (modulated by UNF EM fields of brainwaves) are necessary as well, combined with quantum decoherence theory. It appeared, within the Feynman propagator version of the Schrödinger equation, that the quantum level is described by analogous mathematical formalism as Hopfield-like quantum-holographic associative neural network. How the quantum parallel processing level gives rise to classical parallel processing level, which is a general problem of the relationship between quantum and classical levels within the quantum decoherence theory. They demonstrate existence of two cognitive modes of consciousness (direct religious/creative one, characteristic of quantum-coherent transitional and altered states of consciousness, and indirect perceptively/rationally mediated one characteristic of classically-reduced normal states of consciousness) – together with conditions for their mutual transformations." Might this correspond to the different brain halfs? 
Pitkänen: The reduction of the standard quantum measurement theory to fundamental quantum physics is a triumph of TGD approach. Each quantum jump involves localization in the so called zero modes having interpretation as classical variables characterizing the observable geometric properties of the space-time surface, and thus of external macroscopic observer (magnetic body), together with an additional condition guaranteing that the density matrix characterizing the entanglement between quantum fluctuating degrees of freedom and zero modes is diagonal, implies standard quantum measurement theory. Rakovic: "Quantum neural holography combined with quantum decoherence might be very significant element of the feedback bioinformatics, from the level of cell to the level of organism. Whole psychosomatics is quantum hologram, both on the level of individual and collective consciousness, and that quantum-holographic hierarchical parts carry information on wholeness, enabling quantum-holographic fractal coupling of various hierarchical levels. Acupuncture-based-quantum-informational (un)intentional control of ontogenesis and morphogenesis; quantum-holographic language-influence on the genes expression, with implications of great psychosomatic significance of thought-emotional contents; and global fractal information coupling of various hierarchical levels in Nature with fundamental holistic implications on the origin of miraculous deep creativities and determinism of the History through the coupling with the existing evolving state of collective consciousness. This forecasts a great synthesis of two cognitive/consciousness modes. The personal role becomes indispensable due to the influence and care for collective mental environment, which is fundamental question of both mental health and civil decency, i.e. of both spiritual and civil morality. Some manifestations of consciousness must have deeper quantum origin – with significant psychosomatic and transpersonal implications." Pitkänen: The dynamics of self measurement is governed by Negentropy Maximization Principle (NMP), which speci es which subsystems are subject to quantum measurement in a given quantum jump. NMP can be regarded as a basic law for the dynamics of quantum jumps and states that the information content of the conscious experience is maximized. In p-adic context NMP dictates the dynamics of cognition. It is the quantal self that is repaired (by max. information, magnetic body, observer), and it direct our physical self which is hierarchial. The last stage is 'arousal' or awareness, attention etc. The energy grows with computation/entanglement. This is usually called the 'self' in biology. The archetypes in Jung, that is information from the past, but today is also told of an influence from the future (Holger Nielsen among others). There are a big confuseness on the terminology. I usually prefer Damasios terminology. Pitkänen: "TGD predicts two kinds of super-conformal symmetries, super-Kac Moody symmetries assignable to partonic 2-surfaces identi able as time = constant sections of light-like causal determinants (diamonds) of 4-surfaces correspond to the gauge symmetries of fundamental interactions. Super-symplectic symmetries act at imbedding space level, with some future/past light-cone of space. There is a fractal hierarchy of quantum holograms inside quantum holograms. 
Super-symplectic representations correspond to genuine quantum gravitational e effcts since wave functionals in the space of threesurfaces are involved: space-time ceases to be a passive arena of quantum dynamics. Super-symplectic degrees of freedom makes massless extremals ideal candidates for the correlates of higher level consciousness." The models shortly. Rakovic considers information processing within the central nervous system as occurring through hierarchically organized and interconnected neural networks. Hierarchical models of brain’s neural networks can be divided into: - Kohonen’s selforganized mapping feature mapping unidirectionally oriented multilayer neural networks - Hopfield’s associative or attractor , massively and bidirectionally connected neural networks, - Haken’s synergetic classical and - Peruš’s neuro-quantum multilayer neural networks. "It is firstly noted that models of brain’s hierarchical neural networks demonstrate encouraging advances in modeling cognitive functions – which is not surprising bearing in mind that information processing on the level of central nervous system is going via hierarchically organized and interconnected neural networks. It seems that this hierarchy of biological neural networks is going down to sub-cellular cytoskeleton mesoscopic level, being a kind of interface between neural and quantum levels." This sounds like the reductionistic emergent kind of consciousness, as a consequence of network complexity. But also Pitkänens self-entanglement is of the same kind. No essential difference. Pitkänen: "Self measurements are governed by Negentropy Maximization Principle, and give rise to quantum level self repair mechanism. In p-adic context NMP is the basic variational principle of cognition. The quantum jump at a given level of hierarchy corresponds to a sequence of quantum jumps at lower levels, which also contributes to the experience of the higher level self. The last stage gets the highest energy level (because these sub-selves entangle and give energy) and is the 'wake-up' self." "The proposal is that the dynamics of consciousness is governed by Negentropy Maximization Principle, which states the information content of conscious experience is maximal." "Negentropy Maximization Principle (NMP) codes for the dynamics of standard state function reduction and states that the state function reduction process following U-process gives rise to a maximal reduction of entanglement entropy at each step. In the generic case this implies at each step a decomposition of the system to unique unentangled subsystems and the process repeats itself for these subsystems. The process stops when the resulting subsystem cannot be decomposed to a pair of free systems since energy conservation makes the reduction of entanglement kinematically impossible in the case of bound states. The natural assumption is that self loses consciousness when it entangles via bound state entanglement." Arousal-stage where dopamine is important? In the model of Hopfield’s classical neural network, emergent collective computation is either regulated by global (variational) minimization of the Hamiltonian energy function or by local (interactional) network learning in discrete or continuous forms (incorporating spatio-temporal description of neuronal and synaptic activities). 
Haken has shown that introduction of biologically more plausible neuronal oscillatory activities gives richer dynamics of the neural network, with Hopfield’s classical neural net real-valued variables replaced by the complex-valued ones (similarly to quantum variables, although in contrast to thus conveniently modified classical formalism, the complex-valued quantum formalism is essential). A step further was done with quantum generalization of Hopfield’s classical neural network: Sutherland’s holographic neural network and, equivalent to it, Peruš’s model of Hopfield-like quantum associative neural network. Again, not so big difference. P-adics versus complex-valued quantum formalism are used. Kohonen maps:"Kohonen - a system model that is composed of at least two interacting subsystems of different nature. One of these subsystems is a competitive neural network that implements the winner-take-all function (digital), but there is also another subsystem that is controlled by the neural network and which modifies (modifying neurotransmitters) the local synaptic plasticity of the neurons in learning (LTP). The learning is restricted spatially to the local neighborhood of the most active neurons. The plasticity-control subsystem could be based on nonspecific neural interactions, but more probably it is a chemical control effect. Only by means of the separation of the neural signal transfer and the plasticity control has it become possible to implement an effective and robust self-organizing system. The SOM principle can also be expressed mathematically in a pure abstract form, without reference to any underlying neural or other components. The first application area of the SOM was speech recognition." The SOM is one of the most realistic models of the biological brain function, acc. to his latest book. Dynamic modeling using nonlinear state-space models, on his web page, with simulations. Also relational models, with many real-world sequences such as protein secondary structures or shell logs exhibit rich internal structures, logical hidden Markov models, etc. We combine two of these methods, relational Markov networks and hierarchical nonlinear factor analysis (HNFA), resulting in using nonlinear models in a structure determined by the relations in the data. This model is a structure and a field oscillating on the structure. Gao et al 2008: "Inference of latent chemical species in biochemical interaction networks is a key problem in estimation of the structure and parameters of the genetic, metabolic and protein interaction networks that underpin all biological processes. We present a framework for Bayesian marginalization of these latent chemical species through Gaussian process priors. A challenging problem for parameter estimation in ODE models occurs where one or more chemical species influencing the dynamics are controlled outside of the sub-system being modelled. For example, a signalling pathway can be triggered by a signal external to the pathway itself. In a regulatory sub-system, one or more transcription factors (TFs) may influence the expression of a set of target genes, but these TFs may not be regulated at the transcriptional level, instead being activated by another sub-system such as a signalling pathway. Similarly, in a metabolic pathway external metabolites and enzymes will influence the dynamics of the pathway. If these external chemical species have a constant influence, e.g. 
as in the case of steady state behaviour of a metabolic pathway, then they can simply be treated as additional parameters of the model and their effect can be estimated along with the other model parameters. However, more often these external factors are time-varying quantities." Hopfield-like neural networks have certain shortcomings, as described by Joya et al., 2002. They are widely used in neural computing. The complex neural fields can change both the amplitude and the phase, and their dynamics has a close analogy with the dynamics of self-oscillation generated in a phase-conjugate resonator. Torres et al. 2003, in 'Influence of topology on the performance of a neural network': "the capacity to store and retrieve binary patterns is higher for attractor neural network with scale–free topology than for highly random–diluted Hopfield networks with the same number of synapses." Topology is better? Pitkänen: In the standard physics context the existence of the required macroscopic quantum phases is not at all obvious, whereas the new physics implied by TGD predicts their existence. The point is that the Universe according to TGD is a quantum critical system. Quantum criticality is mathematically very similar to thermodynamical criticality and implies long range quantum correlations in all length scales. This in turn implies the existence of macroscopic quantum phases. The TGD Universe is also a quantum spin glass, with state degeneracy broken only by the classical gravitational energy of the space-time sheets having the same induced Kähler field. This degeneracy makes it possible to have quantum coherence over time periods longer than the CP2 time of order 10^-39 seconds characterizing the duration of a single quantum jump, so that biosystems can act as quantum computers in macroscopic time scales. Mitja Perus, 2000. NEURAL NETWORKS AS A BASIS FOR QUANTUM ASSOCIATIVE NETWORKS. "We have got a lot of experience with computer simulations of Hopfield's and holographic neural net models. Starting with these models, an analogous quantum information processing system, called quantum associative network, is presented in this article. It was obtained by translating an associative neural net model into the mathematical formalism of quantum theory in order to enable microphysical implementation of associative memory and pattern recognition. In a case of successful quantum implementation of the model, expected benefits would be significant increase in speed, in miniaturization, in efficiency of performance, and in memory capacity, mainly because of additionally exploiting quantum-phase encoding. We have earlier presented in detail how and where the mathematical formalism of associative neural network models by Hopfield and Haken is analogous to the mathematical formalism of quantum theory. By saying Hopfield-like we mean a system that is based on the Hopfield model of neural networks or spin glass systems (i.e., amorphous assemblies of spins or small 'magnets'), respectively. The results depend very much on the correlation structure of a specific set of input patterns. In other words, beside the 'hardware' (implementation) and the 'software' ('algorithm'), also the 'virtual software' (i.e., the input-data correlation structure) is essential." Small magnets are used also in the creation of the magnetic body (magnetic flux tubes). Baez says it in another way in What's the energy density of the vacuum?: You need to know not just the answer, but also the assumptions and reasoning that went into the answer.
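To make the Perus-style analogy a little more concrete, here is a small illustrative sketch of my own (not taken from his paper) of a holographic, Hopfield-like associative memory in which patterns are encoded as complex phase vectors and the memory is the sum of their outer products – the kind of "propagator" Rakovic refers to in the comments further down. Recall is simply matrix application followed by reading out the phases. All names, the encoding convention and the example patterns are assumptions made for the sketch.

```python
import numpy as np

# Illustrative complex-valued (holographic) associative memory in the spirit of
# Perus's quantum associative network; not the authors' code.

def encode(bits):
    """Encode a binary pattern as unit-modulus complex phases (0 -> +1, 1 -> -1)."""
    return np.exp(1j * np.pi * np.asarray(bits))

def build_memory(patterns):
    """'Propagator' G = (1/K) * sum_k |p_k><p_k| (sum of outer products)."""
    G = np.zeros((patterns.shape[1], patterns.shape[1]), dtype=complex)
    for p in patterns:
        G += np.outer(p, p.conj())
    return G / len(patterns)

def recall(G, probe):
    """Apply the memory to a probe and read the bits back from the output phases."""
    out = G @ probe
    return (np.abs(np.angle(out)) > np.pi / 2).astype(int)   # phase near pi -> bit 1

stored = np.array([encode([0, 1, 0, 1, 0, 1]), encode([0, 0, 0, 1, 1, 1])])
G = build_memory(stored)
probe = encode([0, 1, 0, 1, 0, 0])        # noisy version of the first pattern
print(recall(G, probe))                    # ideally [0 1 0 1 0 1]
```

The point of the sketch is only the structural analogy the text discusses: both the amplitude and the phase carry information, and retrieval amounts to projecting a probe state onto the stored "memory" operator.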
Silent knowledge is important. This is the quantal self. Pitkänen: "For algebraic entanglement p-adic Shannon entropies, obtained by replacing logarithms of probabilities with the logarithms of their p-adic norms, are well-defined, and there is a prime p for which the Shannon entropy is negative and minimum and identifiable as negentropy. The vision about the dark matter hierarchy leads to a more refined view about the self hierarchy and the hierarchy of moments of consciousness." Perus: "If we try to implement Shannon-type information processing in quantum networks, some essential quantum features, manifested in the complex-valued formalism, are unavoidably partly eliminated. However, nature has obviously always performed a sort of non-Shannon-type implicit-information processing (Bohm uses the term 'active information') where the 'users of information' are not human observers, but natural systems (e.g., immune systems) themselves. Neuro-quantum (classical-quantum) cooperation, as manifested in the collapse-readout, is thus essential for the brain, which combines Shannon-type cognitive information and non-Shannon-type consciousness phenomenalism." The quantal self is also the dark matter self, as a mirror-self of 'antimatter'? It is achieved by "the hierarchy of dark matter levels, labelled by the values of Planck constant having quantized but arbitrarily large values." This is the spooky action of quantum mechanics in a nutshell. Pitkänen: "The dark matter hierarchy leads to a quantitative model for high Tc superconductivity as a quantum critical phenomenon, predicting the basic length scales L(149) and L(151) associated with the lipid layers of the cell membrane and the cell membrane itself. Also cell size emerges naturally. At higher levels of the dark matter hierarchy scaled up versions of this basic structure appear. Cyclotron states at magnetic flux tubes are carriers of Bose-Einstein condensates of Cooper pairs and of bosonic ions. For instance, neurons correspond to the k(em) = 3 level of the dark matter hierarchy whereas ordinary EEG corresponds to k(em) = 4, with Josephson junctions as the (superconductive) mediating tool." Rakovic: "An additional support that the acupuncture system is really related to consciousness is provided by meridian (psycho)therapies (with very fast removing of traumas, phobias, allergies, posttraumatic stress, and other psychosomatic disorders – with simultaneous effects of visualization and tapping/touching of acupuncture points." This I have personal experience of. Sometimes you can only wonder at the miracle that happens even with very severe, lifelong traumas, as seen in many of my patients. - "A 'smearing' and associative integration of memory attractors of the psychosomatic disorders, through successively imposing new boundary conditions in the acupuncture energy-state space during visualizations of the psychosomatic problems." Pitkänen: "The connection between thermodynamics and qualia was the real breakthrough in the development of ideas. In some sense this finding is not news: the close connection between the pressure sense, the temperature sense and thermodynamics is a basic fact of psychophysics (also neural nets, my comment). In the TGD framework the contents of consciousness is determined as some kind of average over a sequence of a very large number of quantum jumps. Thus non-geometric qualia should allow a statistical description generalizing the ordinary thermodynamical ensemble to the ensemble formed by the prepared states in the sequence of quantum jumps that occurred after the last 'wake-up' of the self.
For instance, this picture allows one to see the ageing of the self with respect to subjective time as an approach to thermal equilibrium." The world of conscious experience is classical, which means that standard quantum measurement theory follows. Pitkänen: "There are two basic mechanisms generating sensory qualia. 1. A quantum phase transition, in which a single particle transition occurs coherently for some macroscopic quantum phase, produces qualia defined by the increments of quantum numbers in the transition. The magnetic quantum phase transitions at super-conducting magnetic flux tubes provide a basic example of this mechanism, and the quantum model of hearing relies on Z0 magnetic quantum phase transitions. 2. The flow of particles with fixed quantum numbers between "electrodes" of what might be called a quantum capacitor induces qualia defined by the quantum numbers of the particles involved. The "electrodes" carry opposite net quantum numbers. The second electrode corresponds to the sub-self defining the quale mental image. Obviously cell interior and exterior are excellent candidates for the electrodes of the quantum capacitor. Also the neuron and the postsynaptic neuron. In fact, living matter is full of electrets defining capacitor-like structures. The capacitor model applies to various chemical qualia and also to color vision, and predicts that also cells should have senses. The ensuing general model of how the cell membrane acts as a sensory receptor has unexpected implications for the entire TGD inspired view about biology." The 'Islands of Life'? Pitkänen: "The observation that Shannon entropy allows an infinite number of number theoretic variants, for which the entropy can be negative in the case that probabilities are algebraic numbers, leads to the idea that living matter in a well-defined sense corresponds to the intersection of real and p-adic worlds. This would mean that the mathematical expressions for the space-time surfaces (or at least 3-surfaces or partonic 2-surfaces and their 4-D tangent planes) make sense in both the real and the p-adic sense for some primes p. The same would apply to the expressions defining quantum states. In particular, entanglement probabilities would be rationals or algebraic numbers, so that entanglement can be negentropic and the formation of bound states in the intersection of real and p-adic worlds generates information and is thus favored by NMP. For n=2 it contains all rational multiples of integer-valued points defining Pythagorean triangles; for n=3,4,... only the origin is a rational point. The mappings of the real geometric structures to their p-adic counterparts, interpreted as cognitive mappings, also play a key role in the TGD inspired theory of consciousness." At the center of this intersection lie factors like protein transcription and the folding mechanism, DNA memory and coding, phase transitions, phosphorylation etc.; these are mechanisms that in fact code for life itself. Is that border between classical and quantum physics made of nets like the nerve nets and the acupuncture meridian nets? Do they do the 'dance' in a sensitive, collecting way? What is special is that the nerves can be seen as skin, invaginated into the body, as 'skin channels'. Skin and nerves belong to the same embryological tissue. Skin is extremely sensitive. Cognition and consciousness are hierarchic. Pitkänen: Cognition is of two types. "Quantum TGD requires the introduction of several new mathematical tools and concepts, in particular p-adic numbers.
p-Adic number fields Rp (one number field for each prime p = 2, 3, 5, ...) are analogous to real numbers but differ from them in that p-adic numbers are not well-ordered. p-Adic physics describes the physics of cognitive representations, and matter-mind decomposition at the space-time level corresponds to the decomposition of the space-time surface into real and p-adic regions. The higher the value of p, the better the resolution of cognitive experience is, so that p serves as a kind of intelligence quotient. The p-adic length scale hypothesis provides a quantitative realization for the hierarchy of space-time sheets and plays a key role in the TGD inspired theory of consciousness. The spin flipping transitions associated with the fermionic generators of the super-symplectic algebra might give rise to Boolean consciousness with intrinsic meaning ('This is true'). Quantum numbers at opposite light-like elementary particle horizons of a wormhole contact could correspond to Boolean cognition in which the presence/absence of a fermion represents 1/0. Boolean statements might be just bit sequences without consciously experienced intrinsic meaning (1/0 instead of true/false)." For some reason I think of the left versus the right brain half. And 'enlightenment'. Rakovic: "It should be especially pointed out that quantum decoherence might play a fundamental role in biological quantum-holographic neural networks, through adaptation of the energy-state hypersurface of the acupuncture system/consciousness (in contrast to low-temperature artificial qubit quantum processors where it must be avoided until the very read-out act of quantum computation! = the 'virus model' presented earlier) – which implies that Nature presumably has chosen an elegant room-temperature solution for biological quantum-holographic information processing, permanently fluctuating between quantum-coherent states and classically-reduced states of the acupuncture system/consciousness, through non-stationary interactions with the out-of-body farther environment and through decoherence by the bodily closer environment." Pitkänen: "Everything is conscious and consciousness can be only lost", with the additional statement "This happens when non-algebraic bound state entanglement is generated and the system does not remain in the intersection of real and p-adic worlds anymore". Is this the reason the Schrödinger cat hasn't been recognized? It is 'out-of-body experiences'; not dead or alive, although death is one form of this quantum state. It is dead and alive. Life itself is a paranormal phenomenon. Life is a critical phenomenon. The body is no border for our 'self'. There is a 'magnetic body' too. What is the purpose of this magnetic body, this oscillating wave-function? Is it the collection of qualia; the use of our classical, decohered body as a sensor and motor organ for our magnetic body, as Pitkänen suggests? He writes in General Theory of Qualia: "The idea is that EEG (or ZEG, WEG, or GEG) massless extremals can be assigned with the entanglement of a sub-self of the magnetic body with a sub-self of the biological body representing various mental images. That sub-selves can entangle, with the selves themselves remaining unentangled, is one aspect of the generalized notion of sub-system, inspired by the hierarchy of space-time sheets allowing to identify the space-time correlate for this kind of entanglement as join along boundaries bonds connecting space-time sheets representing the sub-systems of disjoint space-time sheets.
The entanglement in question could be in cyclotron degrees of freedom, charge entanglement, or color entanglement. Although EEG and its generalizations seem to serve communication and control purposes rather than representing qualia directly, the notion of a spectroscopy of consciousness still makes sense. Furthermore, the identification of the fractal hierarchy of EWEGs and GEGs means a dramatic generalization of this notion, making precise quantitative predictions in a huge range of frequency scales resulting by simple scaling from EEG. Cyclotron frequencies and Josephson frequencies can be assigned with the communication of sensory data to the magnetic body, and frequencies of both types with the quantum control performed by the magnetic body. For ordinary EEG the harmonics of the cyclotron frequencies of bosonic ions correspond to the alpha band and its harmonics, assignable to quantum control. Beta and theta bands and their analogs for the harmonics of the alpha band correspond to the communication of sensory and cognitive data to the magnetic body. The rough correlations of EEG with the state of consciousness can be understood." The hierarchies. Rakovic: "Lower hierarchical quantum-holographic macroscopic open quantum cellular enzyme-gene level, which might be also functioning on the level of permanent quantum-conformational quantum-holographic like biomolecular recognition – so that quantum neural holography combined with quantum decoherence might be a very significant element of the feedback bioinformatics, from the level of the cell to the level of the organism. This presumably represents a natural framework for the explanation of psychosomatic diseases related to somatization of environmentally-generated memory attractor's states of the open macroscopic quantum acupuncture system/consciousness – quantum-holographically projected upon the lower hierarchical cellular level, thus changing the expression of genes (so called macroscopic 'downward causation', as biofeedback control of microscopic 'upward causation' in ontogenesis and morphogenesis, starting from the first fertilised cell division which initialises differentiation of the acupuncture system of (electrical synaptic) 'gap-junctions'." This means that the lower levels get the conscious inputs from the surroundings and modify them. Eventually the genes can also be incorporated in this change (epigenetics), if necessary. The conscious part is also reflected to the genes, in analogy to how the sensory input into the nerves is reflected to the brain? And in fact the conscious part is diminished/computed and transformed/made negentropic in the process. The consciousness is computed in the networks in an upwards regulation, and the motor output is a result of that computation, seen in the readiness potential described by Libet. Perus: "It seems that, in the brain, neural networks trigger the collapse of the quantum wave-function and thus transform the quantum complex-valued, probabilistic dynamics into the neural (classical) real-valued, deterministic dynamics. So, neural nets serve as an interface between the classical environment of the organism and its quantum basis." Rakovic: "It seems that this hierarchy of biological neural networks goes down to the sub-cellular cytoskeleton mesoscopic level, being a kind of interface between neural and quantum levels. Quantum-holographic hierarchical parts carry information on wholeness, enabling quantum-holographic fractal coupling of various hierarchical levels."
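The cyclotron frequencies invoked above are ordinary physics and easy to check. The sketch below is my own, purely illustrative: it computes f = qB/(2πm) for a few biologically relevant ions. The field strength of 0.2 gauss is the "endogenous" value Pitkänen typically assumes, not a measured quantity, so the coincidence with EEG bands should be read as an assumption of the model rather than a derived fact.

```python
import math

# Illustrative ion cyclotron frequencies f = qB / (2*pi*m).
# B = 0.2 gauss (2e-5 T) is an assumed "endogenous" field, not a measurement.
e = 1.602176634e-19      # elementary charge, C
u = 1.66053906660e-27    # atomic mass unit, kg
B = 2e-5                 # tesla

ions = {                  # (charge in units of e, mass in u) - approximate
    "Ca2+": (2, 40.08),
    "Mg2+": (2, 24.31),
    "K+":   (1, 39.10),
    "Na+":  (1, 22.99),
}

for name, (z, m) in ions.items():
    f = z * e * B / (2 * math.pi * m * u)
    print(f"{name}: {f:.1f} Hz")
```

With this assumed field, Ca2+ lands near 15 Hz and K+ near 8 Hz, which is the kind of numerical coincidence with the alpha and theta bands that the quoted passage alludes to.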
Pitkänen: "Self hierarchy has as a geometric correlate, the hierarchy of space-time sheets. The scaling of hbar (ground state) by wavelength^k means also scaling for various basic quantum scales like Compton length and - time and de Broglie wave length. Compton length correspond to a size of space-time sheet associated with the particle. The energy of dark photons with frequency f is scaled up by wavelength^k. For instance, EEG photons at k(em) = 4 level of dark matter hierarchy have energy which is above thermal energy at room temperature: this is absolutely essential for understanding the role of EEG photons. The phase transition increasing k(em) by one unit means that the sizes of space-time sheets are scaled up by wavelength. If the density of particles is high enough, the overlap criterion for the formation of a macroscopic quantum phase is satisfied in the resulting dark phase and Bose-Einstein condensates become possible. Decoherence phase transition for dark matter particle means that the size of the space-time sheet is scaled down by 1/wavelength." Rakovic: The whole psychosomatics is quantum hologram, both on the level of individual and collective consciousness, and that quantum-holographic hierarchical parts carry information on wholeness, enabling quantum-holographic fractal coupling of various hierarchical levels. Phenomenologically approved quantum-holographic (fractal) coupling of various hierarchical quantum levels – from-biological cell-to-acupuncture system/consciousness-to-collective consciousness (quantum-holographic feedback on cells’ conformational protein changes and genes’ expression (so called macroscopic ’downward causation’), and not only reversed (microscopic ’upward causation’), with mutual quantum-informational control of ontogenesis/embryogenesis and morphogenesis, starting from the first division of the fertilized cell when differentiation of the acupuncture system begins – with significant psychosomatic and cognitive bioinformational implications). With implications of great psychosomatic significance of thought-emotional contents; and global fractal information coupling of various hierarchical levels in Nature with fundamental holistic implications on the origin of miraculous deep creativities and determinism of the History through the coupling with the existing evolving state of collective consciousness. Resonant Recognition Model There is significant correlation between spectra of the numerical presentation of constitutive elements of primary sequences (amino acids, nucleotides) and their biological activity or interaction in corresponding biomolecules (proteins, DNAs). The presence of peak with significant signal-to-noise ratio in a multiple cross-spectral function of a group of sequences with the same biological function means that all of the analysed sequences within the group have this single-electron RRM frequency component in common, with the following general conclusions: (1) such a peak exists only for the group of biomolecules with the same function; (2) no significant peak exists for biologically unrelated biomolecules; (3) peak frequencies are different for different biological function; (4) ligand-proteins and their biomolecular target-receptors have the same characteristic frequency in common but almost opposite phase – providing also novel theoretical possibilities for protein de novo design with desired functions! 
In the context of the RRM-model, the same characteristic single-electron RRM frequency, and almost opposite phase, presumably characterises not only biomolecular protein and target general function, but also their macroscopic quantum biomolecular recognition interaction on the level of biological cell – possibly by externally activated (compositionally/chemically, by averaged intermolecular approaching of proteins and targets necessary for non-vanishing overlap integrals of the corresponding electronic and vibrational wave functions, or thermally/optically, by supplying vibrational energy necessary for making conditions for electronic-vibrational nonradiative resonant transitions between two isomers, giving rise to dynamic modification of the many-electron hypersurface of the cell's protein macroscopic quantum system. Pitkänen talks much of resonances, cyclotronic and other. Psychosomatic implications. Three front lines of psychosomatic medicine do exist: 1. spirituality; prayer for others mentally/emotionally involved erases for good mutual memory attractors on the level of collective consciousness 2. traditional holistic Eastern medicine and deep psychotherapeutic techniques; temporary erase memory attractors on the level of acupuncture system/individual consciousness, and prevent or alleviate their somatization 3. modern symptomatic Western medicine, whose activities through immunology, pharmacology, biomedical diagnostics, and surgery, hinder or soothe somatic consequences of the carelessness on the first two levels. Pitkänen: The concept of many-sheeted space-time leads to fresh proposals for how biosystems manage to be macroscopic quantum systems. Examples of these mechanisms are so called wormhole superconductivity, electronic high Tc super-conductivity, neutrino super-conductivity, ionic and a mechanism for generating coherent light and gravitons. Rakovic: I am ascribing consciousness to the macroscopic quantum (electromagnetic) field of the acupuncture system (which is then generalized to quantum (electromagnetic, and probably even unified) field of any quantum (sub)system, living or nonliving… including Universe…), and not to molecules themselves… and I don’t find it necessary to look at dark matter to understand consciousness… And he wants to add, p. 89: So, our theoretical investigations imply real origin of religious and other transpersonal experiences of various traditions of East and West – and according to our elaborated theoretical relationship consciousness/acupuncture EM-ionic quantum-holographic Hopfield-like associative neural network, esoteric notions like astral body (manomaya, lingasarira, manovijnana, ka, psyche, subtle body, psychic body, soul...) and mental body (vijnanamaya, suksmasarira, manas, ba, thymos, noetic body, spiritual body, spirit...) might be biophysically related to out-of-body displaced part (connected with the body by miniature wormhole’ space-time tunnel) of the ionic acupuncture system, and with embedded EM component of ionic MW/ULF-modulated currents, respectively.” Rakovic, with some very warm words: So it seems that science is closing the circle, by re-discovering two cognitive modes of consciousness and at the same time by imposing its own epistemological limitations – as it was preserved for millennia in shamanistic tribal traditions, or as it was concisely described by Patanjali in Yoga Sutras, pointing out that mystical experience (samadhi) is ’filled with truth’ and that ’it goes beyond inference and scriptures’. 
Pitkänen: This encourages one to consider the possibility of replacing the idea of a fixed axiomatic system with a living and dynamically evolving system becoming conscious of new axioms from which new theorems can grow. The mathematician would no longer be an outsider but an active participant affecting the mathematical system he is studying. For instance, when a paradoxical statement represented symbolically becomes conscious in a quantum jump sequence, also the context in which it was originally stated changes. The ancient time ceases, the new magic time comes. This is of course only a small part of the extensive literature from both authors. The similarities are striking, although there are of course also big differences. I wanted to use citations so as not to distort the meanings too much, although this is like reading the Bible through small verses. Comparing these models may nevertheless lead quite a long way forward. Next I will compare with Popp's model. As a rule TGD goes much deeper into the explanations, and its strength is the coordination with theoretical physics, although the same features can be seen in Rakovic's works at a higher hierarchical level.
Amit D.: Modeling Brain Functions (The World of Attractor Neural Nets). Cambridge Univ. Press, Cambridge, 1989. (the Hopfield model of neural networks)
P. Gao, A. Honkela, M. Rattray, N. D. Lawrence (2008): Gaussian process modelling of latent chemical species: applications to inferring transcription factor activities. Bioinformatics 24(16), pp. i70-i75. In Proceedings of ECCB 2008. doi:10.1093/bioinformatics/btn278
Haken H.: Synergetic Computers and Cognition (A Top-Down Approach to Neural Nets). Springer, Berlin, 1991.
Mitja Perus, 2000: Neural networks as a basis for quantum associative networks. Neural Network World vol. 10 (2000), pp. 1001-1013. http://citeseerx.ist.psu.edu/viewdoc/download?doi= (computer simulations of Hopfield's and holographic neural net models)
Matti Pitkänen, 2010: General Theory of Qualia (updated version). http://tgd.wippiespace.com/public_html/pdfpool/qualia.pdf
- Matter, Mind, Quantum, http://tgd.wippiespace.com/public_html/pdfpool/conscic.pdf
- Negentropy Maximization Principle, http://tgd.wippiespace.com/public_html/tgdconsc/tgdconsc.html#nmpc
- Self and Binding, http://tgd.wippiespace.com/public_html/tgdconsc/tgdconsc.html#selfbindc
- Quantum Model for Sensory Representations, http://tgd.wippiespace.com/public_html/tgdconsc/tgdconsc.html#expc
- Quantum Theory of Self-Organization, http://tgd.wippiespace.com/public_html/bioselforg/bioselforg.html#selforgac
- Quantum Control and Coordination in Bio-Systems: Part I, http://tgd.wippiespace.com/public_html/bioselforg/bioselforg.html#qcococI
- Worm-Hole Magnetic Fields, http://tgd.wippiespace.com/public_html/bioware/bioware.html#wormc
- Time, Space-time, and Consciousness, http://tgd.wippiespace.com/public_html/hologram/hologram.html#time
- Bio-Systems as Conscious Holograms, http://tgd.wippiespace.com/public_html/hologram/hologram.html#hologram
- TGD Inspired Model for Nerve Pulse, http://tgd.wippiespace.com/public_html/tgdeeg/tgdeeg.html#pulse
D. Raković, "Integrative Biophysics, Quantum Medicine, and Quantum-Holographic Informatics: Psychosomatic-Cognitive Implications", IASC & IEPSP, Belgrade, 2009, http://www.dejanrakovicfound.org/knjige/2009-Integ-Biophys-Quant-Medic.pdf
- "Quantum medicine: Phenomenology and quantum-holographic implications", Med Data Rev, Vol. 1, No. 2, pp. 71-73 (2009).
http://www.dejanrakovicfound.org/papers/2009-MED-DATA-REV.pdf
- "Quantum-informational medicine and quantum-holographic informatics: Psychosomatic-cognitive implications", Speech and Language: Interdisciplinary Research III, S. Jovičić, M. Sovilj, eds., IEPSP, Belgrade (2009). http://www.dejanrakovicfound.org/papers/2009b-IEPSP.pdf
- "Thinking and language: EEG maturation and model of contextual language learning", Proc. "Speech and Language", Belgrade (2003). http://www.dejanrakovicfound.org/papers/2003-IEFPG.pdf
Donald E. Ingber, 1998: The Architecture of Life. A universal set of building rules seems to guide the design of organic structures — from simple carbon compounds to complex cells and tissues. Scientific American, January 1998, 48-57.
5 comments:
1. From http://matpitka.blogspot.com/2010/04/solution-to-dark-energy-problem.html#comments Matti Pitkanen said... The holographic view (I have met Mitja Perus a couple of times in Liege) also emerges in the TGD framework. The basic problem is what holography really means. Mathematics allows a huge number of options, but physics would suggest that a counterpart of laser light is involved. The problem of standard physics is that hbar is too small to allow coherence on the brain scale. Fractality would certainly be involved, since holograms are basic examples of information-theoretic fractals. Neurons should be able to provide representations about parts of the organism and even the entire organism. Bioholography indeed supports this view experimentally (for some reason these findings have not received the attention that they deserve). One view about holography in the TGD framework is based on the model for a fractal generalization of EEG based on dark Josephson photons. They are generated by cell membranes acting as Josephson junctions, and their energies are proportional to the membrane voltage, so that each type of cell has its own characteristic Josephson energy. These energies are in the range of visible and UV photon energies (one would expect .04-.08 eV energies characterizing IR light, but the realization that cell membranes as critical systems are most naturally almost vacuum extremals led to a modification of the model; as a consequence the energy range is scaled up). Why I take this modification seriously is that it predicts a large breaking of parity symmetry, which indeed characterizes living matter and is a complete mystery in the standard model. One particular, quantitative success is the correct prediction for the frequencies of peak sensitivity for photoreceptors. Josephson frequencies are proportional to 1/hbar and vary over a huge range, from about 10^14 Hz to the time scale of decades. The most amazing outcome is that EEG photons result from the decay of this Josephson radiation into bunches of ELF photons, and biophotons result when it transforms into a single ordinary photon. Two seemingly totally unrelated notions are unified, which is always a good sign. Josephson radiation is what makes the biosystem a hologram in a purely physical sense. The magnetic body is included, since Josephson radiation communicates sensory data to the magnetic body by inducing cyclotron transitions. Motor action proceeds in the reverse time direction by negative energy Josephson radiation. To be continued....
2. Continuation of the previous email. Josephson radiation can in principle regenerate basic qualia along the sensory pathway. Sensory receptors have specialized to experience and nothing else: they use most of their lipids for this purpose in cells along the sensory pathway.
Neurons become a kind of sensory homunculi, with every lipid of at least the axon serving as a pixel with colors defined by the qualia associated with it. This means fractality and a very concrete realization of holography. This could have been guessed: already genomes make cells informational fractals/holograms, so that only a generalization of this observation is in question. One can assign to each cell a value of Planck constant characterizing its evolutionary level. The corresponding Josephson frequency is inversely proportional to hbar and is the lower the higher the level is, since the time scale of planned action and memory becomes longer with increasing hbar. For instance, the neurons in the associative regions of the brain are rather intelligent and civilized, and their pixels carry quantum associations of various qualia like colors, tastes, etc... In the cerebellum the neurons are not so educated. The synchrony frequency is 160-200 Hz, whereas in the hippocampus it is around 5-12 Hz, and 40 Hz at primary sensory areas. There would also be cells producing kHz Josephson radiation, and ones with frequencies corresponding to time scales of years. Also the glia, in particular astrocytes, are in a fundamental role. In fact, they might correspond to the highest representations of data between the magnetic body and the sensory receptors. At this level the oscillations of the membrane potential, reflecting themselves as frequency modulations of the frequencies of the generalized EEG, would represent the sensory data. The brain would be like a flock of singing whales, or an orchestra playing instruments whose octaves would be characterized by Planck constant, and speech and singing would be one particular example of a completely general representation of information as frequency modulation of Josephson frequencies. Note that also ordinary speech relies on frequency modulation, as becomes clear by playing recorded speech very slowly, so that speech and singing are not so different basically. One new piece of theory is a detailed proposal for how the Mersenne primes and Gaussian Mersennes (as many as four of them in the crucial length scale range 10 nm-2580 nm, with big at both sides!) are involved. The proposal is that there is a hierarchy of copies of weak interactions and color interactions at these length scales, plus their dark scaled-up variants. The idea is that when the dark scaled-up variant of a Mersenne scale equals a second Mersenne scale, a resonance of the two physics occurs. Dark gauge bosons transform to their non-dark variants with much smaller mass. This would occur only near quantum criticality (near the vacuum extremal property for the cell membrane). The phases near vacuum extremals could provide a generic explanation for a very large class of anomalous luminous phenomena (sonofusion, cold fusion, tornadoes, anomalies of rotating magnetic systems, earth lights and UFOs identified as earth lights, ...). This leads to a rather detailed vision about evolution, allowing a large number of ideas to be put under a common conceptual umbrella, including the earlier observation that the model for dark nucleons predicts counterparts of RNA, DNA, tRNA, and amino acids as states of nucleons, and also the vertebrate genetic code emerges naturally. The genetic code would be realized already at the level of dark nuclear physics, in terms of nuclear strings. 3.
Dear Ulla, Please feel free to add my last clarification as a comment… I noticed that you had written: 'How that holography arises remains unclear to me.' However, this is a very simple consequence of Perus' discovery that any quantum system has the formal mathematical structure of a Hopfield-like quantum-holographic neural network. Its propagator (the quantum-holographic memory of the quantum system) enables, at the input of the Hopfield quantum-holographic neural network, a successive reconstruction of the wave functions of the memory states (complete, i.e. amplitudes and phases!) in recognizing the wave functions of the states shown at the input (which is the basis of any holography!), whereas everything is simplified as compared to standard laser holography (which requires coherent reference and object laser beams)! [see Ch. 2 of my monograph "Integrative Biophysics, Quantum Medicine, and Quantum-Holographic Informatics: Psychosomatic-Cognitive Implications", for details]. Best regards, Dejan Rakovic
4. I have added the comments that belong to the text. I thought the qubit was the answer. A qubit can be of different size depending on the function. It can be coupled (decoherence) or decoupled (quantum state). The dance happens between these two stages. The size of the qubit is not that important. The stressors come from the minimal side, that is, the quantal side. They are then emotions etc., electromagnetic disturbances.
5. Dear Ulla, Thank you for the comments you have added to the text. In quantum-holographic neural networks there are no qubits with two states 0 and 1 in quantum superpositions – but a generalization to multiple states 0, 1, 2… with multiple occupations n0, n1, n2… (as explained in the mentioned Fig. 3.2)… Best regards, Dejan Rakovic
• Heading towards a Schrödinger's Brexit! Above, the Schrödinger equation. Today we are deeply indebted to the learned Doctor Richard North, who has brilliantly coined the term "Schrödinger's Brexit" (one that cannot exist in the framework set by the real world) for the present mess the UK is in! Herewith below his blog post. GOTO: http://eureferendum.com/blogview.aspx?blogno=86863 For those unacquainted with quantum mechanics, herewith a brief explanation of Schrödinger's cat. A cat, a flask of poison, and a radioactive source are placed in a sealed box. If an internal monitor (e.g. a Geiger counter) detects radioactivity (i.e. a single atom decaying), the flask is shattered, releasing the poison, which kills the cat. The Copenhagen interpretation of quantum mechanics – applied to everyday objects such as a cat – implies that after a while, the cat is simultaneously alive and dead. Yet, when one looks in the box, one sees the cat either alive or dead, not both alive and dead. This poses the question of when exactly quantum superposition ends and reality collapses into one possibility or the other. Schrödinger coined the term Verschränkung (entanglement) to describe this situation. Applied to Brexit: for the cat, substitute the UK economy. For the internal monitor, replace a Geiger counter with the Confederation of British Industry. For radioactivity, replace a single atom decaying with transport disruption on the M2. For the flask which is shattered, replace with "Operation Stack". For the dead cat, replace with the death of the prospects for Madame Mayhem remaining in office for longer than a month. But that of course is one reality. The other reality is postulated below: GOTO: https://brexitcentral.com/dont-disheartened-brexiteers-heres-confident-upbeat/
Classical physics, the physics existing before quantum mechanics, describes nature at ordinary (macroscopic) scale. Most theories in classical physics can be derived from quantum mechanics as an approximation valid at large (macroscopic) scale.[3] Quantum mechanics differs from classical physics in that energy, momentum, angular momentum and other quantities of a bound system are restricted to discrete values (quantization); objects have characteristics of both particles and waves (wave-particle duality); and there are limits to the precision with which quantities can be measured (uncertainty principle).[note 1] Important applications of quantum theory[5] include quantum chemistry, quantum optics, quantum computing, superconducting magnets, light-emitting diodes, and the laser, the transistor and semiconductors such as the microprocessor, medical and research imaging such as magnetic resonance imaging and electron microscopy. Explanations for many biological and physical phenomena are rooted in the nature of the chemical bond, most notably the macro-molecule DNA.[6] Scientific inquiry into the wave nature of light began in the 17th and 18th centuries, when scientists such as Robert Hooke, Christiaan Huygens and Leonhard Euler proposed a wave theory of light based on experimental observations.[7] In 1803, Thomas Young, an English polymath, performed the famous double-slit experiment that he later described in a paper titled On the nature of light and colours. This experiment played a major role in the general acceptance of the wave theory of light. In 1838, Michael Faraday discovered cathode rays. These studies were followed by the 1859 statement of the black-body radiation problem by Gustav Kirchhoff, the 1877 suggestion by Ludwig Boltzmann that the energy states of a physical system can be discrete, and the 1900 quantum hypothesis of Max Planck.[8] Planck's hypothesis that energy is radiated and absorbed in discrete "quanta" (or energy packets) precisely matched the observed patterns of black-body radiation. Following Max Planck's solution in 1900 to the black-body radiation problem (reported 1859), Albert Einstein offered a quantum-based theory to explain the photoelectric effect (1905, reported 1887). Around 1900–1910, the atomic theory and the corpuscular theory of light[10] first came to be widely accepted as scientific fact; these latter theories can be viewed as quantum theories of matter and electromagnetic radiation, respectively. Among the first to study quantum phenomena in nature were Arthur Compton, C. V. Raman, and Pieter Zeeman, each of whom has a quantum effect named after him. Robert Andrews Millikan studied the photoelectric effect experimentally, and Albert Einstein developed a theory for it. At the same time, Ernest Rutherford experimentally discovered the nuclear model of the atom, for which Niels Bohr developed his theory of the atomic structure, which was later confirmed by the experiments of Henry Moseley. In 1913, Peter Debye extended Niels Bohr's theory of atomic structure, introducing elliptical orbits, a concept also introduced by Arnold Sommerfeld.[11] This phase is known as the old quantum theory. Max Planck is considered the father of the quantum theory. According to Planck, each energy element E is proportional to its frequency ν: E = hν, where h is Planck's constant.
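To make the quantum hypothesis concrete, here is a quick illustrative calculation (mine, not part of the original article) of the energy of a single quantum, E = hν, for green light of an arbitrarily chosen wavelength:

```python
# Energy of one quantum (photon), E = h * f, for ~550 nm green light (illustrative).
h = 6.62607015e-34       # Planck's constant, J*s
c = 2.99792458e8         # speed of light, m/s
wavelength = 550e-9      # metres, an arbitrary illustrative choice

f = c / wavelength
E_joule = h * f
E_eV = E_joule / 1.602176634e-19
print(f"f = {f:.3e} Hz, E = {E_joule:.3e} J = {E_eV:.2f} eV")   # about 2.3 eV
```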
Planck cautiously insisted that this was simply an aspect of the processes of absorption and emission of radiation and had nothing to do with the physical reality of the radiation itself.[12] In fact, he considered his quantum hypothesis a mathematical trick to get the right answer rather than a sizable discovery.[13] However, in 1905 Albert Einstein interpreted Planck's quantum hypothesis realistically and used it to explain the photoelectric effect, in which shining light on certain materials can eject electrons from the material. He won the 1921 Nobel Prize in Physics for this work. In the mid-1920s, developments in quantum mechanics led to its becoming the standard formulation for atomic physics. In the summer of 1925, Bohr and Heisenberg published results that closed the old quantum theory. Out of deference to their particle-like behavior in certain processes and measurements, light quanta came to be called photons (1926). In 1926 Erwin Schrödinger suggested a partial differential equation for the wave functions of particles like electrons. And when effectively restricted to a finite region, this equation allowed only certain modes, corresponding to discrete quantum states – whose properties turned out to be exactly the same as implied by matrix mechanics.[15] From Einstein's simple postulation was born a flurry of debating, theorizing, and testing. Thus, the entire field of quantum physics emerged, leading to its wider acceptance at the Fifth Solvay Conference in 1927.[citation needed] By 1930, quantum mechanics had been further unified and formalized by the work of David Hilbert, Paul Dirac and John von Neumann[16] with greater emphasis on measurement, the statistical nature of our knowledge of reality, and philosophical speculation about the 'observer'. It has since permeated many disciplines, including quantum chemistry, quantum electronics, quantum optics, and quantum information science. Its speculative modern developments include string theory and quantum gravity theories. It also provides a useful framework for many features of the modern periodic table of elements, and describes the behaviors of atoms during chemical bonding and the flow of electrons in computer semiconductors, and therefore plays a crucial role in many modern technologies.[citation needed] While quantum mechanics was constructed to describe the world of the very small, it is also needed to explain some macroscopic phenomena such as superconductors,[17] and superfluids.[18] The word quantum derives from the Latin, meaning "how great" or "how much".[19] In quantum mechanics, it refers to a discrete unit assigned to certain physical quantities such as the energy of an atom at rest (see Figure 1). The discovery that particles are discrete packets of energy with wave-like properties led to the branch of physics dealing with atomic and subatomic systems which is today called quantum mechanics. 
It underlies the mathematical framework of many fields of physics and chemistry, including condensed matter physics, solid-state physics, atomic physics, molecular physics, computational physics, computational chemistry, quantum chemistry, particle physics, nuclear chemistry, and nuclear physics.[20] Some fundamental aspects of the theory are still actively studied.[21] In October 2018, physicists reported that quantum behavior can be explained with classical physics for a single particle, but not for multiple particles, as in quantum entanglement and related nonlocality phenomena.[23][24]
Mathematical formulations
In the mathematically rigorous formulation of quantum mechanics developed by Paul Dirac,[25] David Hilbert,[26] John von Neumann,[27] and Hermann Weyl,[28] the possible states of a quantum mechanical system are symbolized[29] as unit vectors (called state vectors). Formally, these reside in a complex separable Hilbert space – variously called the state space or the associated Hilbert space of the system – that is well defined up to a complex number of norm 1 (the phase factor). In other words, the possible states are points in the projective space of a Hilbert space, usually called the complex projective space. The exact nature of this Hilbert space is dependent on the system – for example, the state space for position and momentum states is the space of square-integrable functions, while the state space for the spin of a single proton is just the product of two complex planes. Each observable is represented by a maximally Hermitian (precisely: by a self-adjoint) linear operator acting on the state space. Each eigenstate of an observable corresponds to an eigenvector of the operator, and the associated eigenvalue corresponds to the value of the observable in that eigenstate. If the operator's spectrum is discrete, the observable can attain only those discrete eigenvalues. In the formalism of quantum mechanics, the state of a system at a given time is described by a complex wave function, also referred to as a state vector in a complex vector space.[30] This abstract mathematical object allows for the calculation of probabilities of outcomes of concrete experiments. For example, it allows one to compute the probability of finding an electron in a particular region around the nucleus at a particular time. Contrary to classical mechanics, one can never make simultaneous predictions of conjugate variables, such as position and momentum, to arbitrary precision. For instance, electrons may be considered (to a certain probability) to be located somewhere within a given region of space, but with their exact positions unknown. Contours of constant probability density, often referred to as "clouds", may be drawn around the nucleus of an atom to conceptualize where the electron might be located with the most probability. Heisenberg's uncertainty principle quantifies the inability to precisely locate the particle given its conjugate momentum.[31] According to one interpretation, as the result of a measurement, the wave function containing the probability information for a system collapses from a given initial state to a particular eigenstate. The possible results of a measurement are the eigenvalues of the operator representing the observable – which explains the choice of Hermitian operators, for which all the eigenvalues are real.
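A small numerical illustration of these statements (my own sketch, not from the article): a Hermitian operator on a two-dimensional state space has real eigenvalues, and the probability of obtaining each eigenvalue in a given state is the squared magnitude of the overlap with the corresponding eigenvector. The particular operator and state below are arbitrary choices.

```python
import numpy as np

# Observable: an arbitrary Hermitian 2x2 operator (illustrative choice).
A = np.array([[1.0, 2.0 - 1.0j],
              [2.0 + 1.0j, -1.0]])
assert np.allclose(A, A.conj().T)             # Hermitian

eigvals, eigvecs = np.linalg.eigh(A)          # real eigenvalues, orthonormal eigenvectors
psi = np.array([1.0, 1.0j]) / np.sqrt(2)      # a normalized state vector

# Born rule: P(a_i) = |<a_i|psi>|^2 for each eigenvalue a_i.
probs = np.abs(eigvecs.conj().T @ psi) ** 2
print("eigenvalues:", eigvals)                # all real
print("probabilities:", probs, "sum =", probs.sum())
print("expectation value:", (psi.conj() @ A @ psi).real)
```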
The probability distribution of an observable in a given state can be found by computing the spectral decomposition of the corresponding operator. Heisenberg's uncertainty principle is represented by the statement that the operators corresponding to certain observables do not commute. The probabilistic nature of quantum mechanics thus stems from the act of measurement. This is one of the most difficult aspects of quantum systems to understand. It was the central topic in the famous Bohr–Einstein debates, in which the two scientists attempted to clarify these fundamental principles by way of thought experiments. In the decades after the formulation of quantum mechanics, the question of what constitutes a "measurement" has been extensively studied. Newer interpretations of quantum mechanics have been formulated that do away with the concept of "wave function collapse" (see, for example, the relative state interpretation). The basic idea is that when a quantum system interacts with a measuring apparatus, their respective wave functions become entangled, so that the original quantum system ceases to exist as an independent entity. For details, see the article on measurement in quantum mechanics.[32] Usually, a system will not be in an eigenstate of the observable (particle) we are interested in. However, if one measures the observable, the wave function will instantaneously be an eigenstate (or "generalized" eigenstate) of that observable. This process is known as wave function collapse, a controversial and much-debated process[36] that involves expanding the system under study to include the measurement device. If one knows the corresponding wave function at the instant before the measurement, one will be able to compute the probability of the wave function collapsing into each of the possible eigenstates. For example, the free particle in the previous example will usually have a wave function that is a wave packet centered around some mean position x0 (neither an eigenstate of position nor of momentum). When one measures the position of the particle, it is impossible to predict with certainty the result.[32] It is probable, but not certain, that it will be near x0, where the amplitude of the wave function is large. After the measurement is performed, having obtained some result x, the wave function collapses into a position eigenstate centered at x.[37] The time evolution of a quantum state is described by the Schrödinger equation, in which the Hamiltonian (the operator corresponding to the total energy of the system) generates the time evolution. The time evolution of wave functions is deterministic in the sense that – given a wave function at an initial time – it makes a definite prediction of what the wave function will be at any later time.[38] During a measurement, on the other hand, the change of the initial wave function into another, later wave function is not deterministic, it is unpredictable (i.e., random). A time-evolution simulation can be seen here.[39][40] Fig. 1: Probability densities corresponding to the wave functions of an electron in a hydrogen atom possessing definite energy levels (increasing from the top of the image to the bottom: n = 1, 2, 3, ...) and angular momenta (increasing across from left to right: s, p, d, ...). Denser areas correspond to higher probability density in a position measurement. 
Such wave functions are directly comparable to Chladni's figures of acoustic modes of vibration in classical physics, and are modes of oscillation as well, possessing a sharp energy and, thus, a definite frequency. The angular momentum and energy are quantized, and take only discrete values like those shown (as is the case for resonant frequencies in acoustics). Some wave functions produce probability distributions that are constant, or independent of time – such as when in a stationary state of constant energy, time vanishes in the absolute square of the wave function. Many systems that are treated dynamically in classical mechanics are described by such "static" wave functions. For example, a single electron in an unexcited atom is pictured classically as a particle moving in a circular trajectory around the atomic nucleus, whereas in quantum mechanics it is described by a static, spherically symmetric wave function surrounding the nucleus (Fig. 1) (note, however, that only the lowest angular momentum states, labeled s, are spherically symmetric).[42] The Schrödinger equation acts on the entire probability amplitude, not merely its absolute value. Whereas the absolute value of the probability amplitude encodes information about probabilities, its phase encodes information about the interference between quantum states. This gives rise to the "wave-like" behavior of quantum states. As it turns out, analytic solutions of the Schrödinger equation are available for only a very small number of relatively simple model Hamiltonians, of which the quantum harmonic oscillator, the particle in a box, the dihydrogen cation, and the hydrogen atom are the most important representatives. Even the helium atom – which contains just one more electron than does the hydrogen atom – has defied all attempts at a fully analytic treatment.
Mathematically equivalent formulations of quantum mechanics
There are numerous mathematically equivalent formulations of quantum mechanics. One of the oldest and most commonly used formulations is the "transformation theory" proposed by Paul Dirac, which unifies and generalizes the two earliest formulations of quantum mechanics – matrix mechanics (invented by Werner Heisenberg) and wave mechanics (invented by Erwin Schrödinger).[43]
Interactions with other scientific theories
The rules of quantum mechanics are fundamental. They assert that the state space of a system is a Hilbert space (crucially, that the space has an inner product) and that observables of that system are Hermitian operators acting on vectors in that space – although they do not tell us which Hilbert space or which operators. These can be chosen appropriately in order to obtain a quantitative description of a quantum system. An important guide for making these choices is the correspondence principle, which states that the predictions of quantum mechanics reduce to those of classical mechanics when a system moves to higher energies or, equivalently, larger quantum numbers, i.e. whereas a single particle exhibits a degree of randomness, in systems incorporating millions of particles averaging takes over and, at the high energy limit, the statistical probability of random behaviour approaches zero. In other words, classical mechanics is simply a quantum mechanics of large systems. This "high energy" limit is known as the classical or correspondence limit.
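The particle in a box mentioned above also gives a quick feel for the correspondence limit: its energy levels are E_n = n²h²/(8mL²), and the relative spacing between adjacent levels shrinks roughly as 2/n, so the spectrum looks effectively continuous at large quantum numbers. The sketch below is illustrative; the mass and box size are arbitrary choices.

```python
# Particle in a box: E_n = n^2 h^2 / (8 m L^2); relative level spacing -> 0 as n grows.
h = 6.62607015e-34        # J*s
m = 9.1093837015e-31      # electron mass, kg (illustrative)
L = 1e-9                  # 1 nm box (illustrative)

def E(n):
    return n**2 * h**2 / (8 * m * L**2)

for n in (1, 10, 1000, 10**6):
    rel_gap = (E(n + 1) - E(n)) / E(n)      # equals (2n + 1) / n^2, roughly 2/n
    print(f"n={n}: E={E(n):.3e} J, relative gap = {rel_gap:.3e}")
```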
One can even start from an established classical model of a particular system, then attempt to guess the underlying quantum model that would give rise to the classical model in the correspondence limit.   Unsolved problem in physics: In the correspondence limit of quantum mechanics: Is there a preferred interpretation of quantum mechanics? How does the quantum description of reality, which includes elements such as the "superposition of states" and "wave function collapse", give rise to the reality we perceive? (more unsolved problems in physics) Quantum mechanics and classical physicsEdit Predictions of quantum mechanics have been verified experimentally to an extremely high degree of accuracy.[48] According to the correspondence principle between classical and quantum mechanics, all objects obey the laws of quantum mechanics, and classical mechanics is just an approximation for large systems of objects (or a statistical quantum mechanics of a large collection of particles).[49] The laws of classical mechanics thus follow from the laws of quantum mechanics as a statistical average at the limit of large systems or large quantum numbers.[50] However, chaotic systems do not have good quantum numbers, and quantum chaos studies the relationship between classical and quantum descriptions in these systems. Quantum coherence is an essential difference between classical and quantum theories as illustrated by the Einstein–Podolsky–Rosen (EPR) paradox – an attack on a certain philosophical interpretation of quantum mechanics by an appeal to local realism.[51] Quantum interference involves adding together probability amplitudes, whereas classical "waves" infer that there is an adding together of intensities. For microscopic bodies, the extension of the system is much smaller than the coherence length, which gives rise to long-range entanglement and other nonlocal phenomena characteristic of quantum systems.[52] Quantum coherence is not typically evident at macroscopic scales, though an exception to this rule may occur at extremely low temperatures (i.e. approaching absolute zero) at which quantum behavior may manifest itself macroscopically.[53] This is in accordance with the following observations: Copenhagen interpretation of quantum versus classical kinematicsEdit A big difference between classical and quantum mechanics is that they use very different kinematic descriptions.[56] In Niels Bohr's mature view, quantum mechanical phenomena are required to be experiments, with complete descriptions of all the devices for the system, preparative, intermediary, and finally measuring. The descriptions are in macroscopic terms, expressed in ordinary language, supplemented with the concepts of classical mechanics.[57][58][59][60] The initial condition and the final condition of the system are respectively described by values in a configuration space, for example a position space, or some equivalent space such as a momentum space. Quantum mechanics does not admit a completely precise description, in terms of both position and momentum, of an initial condition or "state" (in the classical sense of the word) that would support a precisely deterministic and causal prediction of a final condition.[61][62] In this sense, advocated by Bohr in his mature writings, a quantum phenomenon is a process, a passage from initial to final condition, not an instantaneous "state" in the classical sense of that word.[63][64] Thus there are two kinds of processes in quantum mechanics: stationary and transitional. 
For a stationary process, the initial and final condition are the same. For a transition, they are different. Obviously by definition, if only the initial condition is given, the process is not determined.[61] Given its initial condition, prediction of its final condition is possible, causally but only probabilistically, because the Schrödinger equation is deterministic for wave function evolution, but the wave function describes the system only probabilistically.[65][66] General relativity and quantum mechanicsEdit Gravity is negligible in many areas of particle physics, so that unification between general relativity and quantum mechanics is not an urgent issue in those particular applications. However, the lack of a correct theory of quantum gravity is an important issue in physical cosmology and the search by physicists for an elegant "Theory of Everything" (TOE). Consequently, resolving the inconsistencies between both theories has been a major goal of 20th- and 21st-century physics. Many prominent physicists, including Stephen Hawking, have labored for many years in the attempt to discover a theory underlying everything. This TOE would combine not only the different models of subatomic physics, but also derive the four fundamental forces of nature – the strong force, electromagnetism, the weak force, and gravity – from a single force or phenomenon. While Stephen Hawking was initially a believer in the Theory of Everything, after considering Gödel's Incompleteness Theorem, he has concluded that one is not obtainable, and has stated so publicly in his lecture "Gödel and the End of Physics" (2002).[71] Attempts at a unified field theoryEdit The quest to unify the fundamental forces through quantum mechanics is still ongoing. Quantum electrodynamics (or "quantum electromagnetism"), which is currently (in the perturbative regime at least) the most accurately tested physical theory in competition with general relativity,[72][73] has been successfully merged with the weak nuclear force into the electroweak force and work is currently being done to merge the electroweak and strong force into the electrostrong force. Current predictions state that at around 1014 GeV the three aforementioned forces are fused into a single unified field.[74] Beyond this "grand unification", it is speculated that it may be possible to merge gravity with the other three gauge symmetries, expected to occur at roughly 1019 GeV. However  – and while special relativity is parsimoniously incorporated into quantum electrodynamics  – the expanded general relativity, currently the best theory describing the gravitation force, has not been fully incorporated into quantum theory. One of those searching for a coherent TOE is Edward Witten, a theoretical physicist who formulated the M-theory, which is an attempt at describing the supersymmetrical based string theory. M-theory posits that our apparent 4-dimensional spacetime is, in reality, actually an 11-dimensional spacetime containing 10 spatial dimensions and 1 time dimension, although 7 of the spatial dimensions are – at lower energies – completely "compactified" (or infinitely curved) and not readily amenable to measurement or probing. Philosophical implicationsEdit Albert Einstein, himself one of the founders of quantum theory, did not accept some of the more philosophical or metaphysical interpretations of quantum mechanics, such as rejection of determinism and of causality. 
He is famously quoted as saying, in response to this aspect, "God does not play with dice".[77] He rejected the concept that the state of a physical system depends on the experimental arrangement for its measurement. He held that a state of nature occurs in its own right, regardless of whether or how it might be observed. In that view, he is supported by the currently accepted definition of a quantum state, which remains invariant under arbitrary choice of configuration space for its representation, that is to say, manner of observation. He also held that underlying quantum mechanics there should be a theory that thoroughly and directly expresses the rule against action at a distance; in other words, he insisted on the principle of locality. He considered, but rejected on theoretical grounds, a particular proposal for hidden variables to obviate the indeterminism or acausality of quantum mechanical measurement. He considered that quantum mechanics was a currently valid but not a permanently definitive theory for quantum phenomena. He thought its future replacement would require profound conceptual advances, and would not come quickly or easily. The Bohr-Einstein debates provide a vibrant critique of the Copenhagen Interpretation from an epistemological point of view. In arguing for his views, he produced a series of objections, the most famous of which has become known as the Einstein–Podolsky–Rosen paradox. John Bell showed that this EPR paradox led to experimentally testable differences between quantum mechanics and theories that rely on added hidden variables. Experiments have been performed confirming the accuracy of quantum mechanics, thereby demonstrating that quantum mechanics cannot be improved upon by addition of hidden variables.[78] Alain Aspect's initial experiments in 1982, and many subsequent experiments since, have definitively verified quantum entanglement. By the early 1980s, experiments had shown that such inequalities were indeed violated in practice – so that there were in fact correlations of the kind suggested by quantum mechanics. At first these just seemed like isolated esoteric effects, but by the mid-1990s, they were being codified in the field of quantum information theory, and led to constructions with names like quantum cryptography and quantum teleportation.[79] Entanglement, as demonstrated in Bell-type experiments, does not, however, violate causality, since no transfer of information happens. Quantum entanglement forms the basis of quantum cryptography, which is proposed for use in high-security commercial applications in banking and government. The Everett many-worlds interpretation, formulated in 1956, holds that all the possibilities described by quantum theory simultaneously occur in a multiverse composed of mostly independent parallel universes.[80] This is not accomplished by introducing some "new axiom" to quantum mechanics, but on the contrary, by removing the axiom of the collapse of the wave packet. All of the possible consistent states of the measured system and the measuring apparatus (including the observer) are present in a real physical – not just formally mathematical, as in other interpretations – quantum superposition. Such a superposition of consistent state combinations of different systems is called an entangled state. 
While the multiverse is deterministic, we perceive non-deterministic behavior governed by probabilities, because we can only observe the universe (i.e., the consistent state contribution to the aforementioned superposition) that we, as observers, inhabit. Everett's interpretation is perfectly consistent with John Bell's experiments and makes them intuitively understandable. However, according to the theory of quantum decoherence, these "parallel universes" will never be accessible to us. The inaccessibility can be understood as follows: once a measurement is done, the measured system becomes entangled with both the physicist who measured it and a huge number of other particles, some of which are photons flying away at the speed of light towards the other end of the universe. In order to prove that the wave function did not collapse, one would have to bring all these particles back and measure them again, together with the system that was originally measured. Not only is this completely impractical, but even if one could theoretically do this, it would have to destroy any evidence that the original measurement took place (including the physicist's memory). In light of these Bell tests, Cramer (1986) formulated his transactional interpretation[81] which is unique in providing a physical explanation for the Born rule.[82] Relational quantum mechanics appeared in the late 1990s as the modern derivative of the Copenhagen Interpretation. Quantum mechanics has had enormous[83] success in explaining many of the features of our universe. Quantum mechanics is often the only theory that can reveal the individual behaviors of the subatomic particles that make up all forms of matter (electrons, protons, neutrons, photons, and others). Quantum mechanics has strongly influenced string theories, candidates for a Theory of Everything (see reductionism). Quantum mechanics is also critically important for understanding how individual atoms are joined by covalent bond to form molecules. The application of quantum mechanics to chemistry is known as quantum chemistry. Quantum mechanics can also provide quantitative insight into ionic and covalent bonding processes by explicitly showing which molecules are energetically favorable to which others and the magnitudes of the energies involved.[84] Furthermore, most of the calculations performed in modern computational chemistry rely on quantum mechanics. In many aspects modern technology operates at a scale where quantum effects are significant. Many modern electronic devices are designed using quantum mechanics. Examples include the laser, the transistor (and thus the microchip), the electron microscope, and magnetic resonance imaging (MRI). The study of semiconductors led to the invention of the diode and the transistor, which are indispensable parts of modern electronics systems, computer and telecommunication devices. Another application is for making laser diode and light emitting diode which are a high-efficiency source of light. A working mechanism of a resonant tunneling diode device, based on the phenomenon of quantum tunneling through potential barriers. (Left: band diagram; Center: transmission coefficient; Right: current-voltage characteristics) As shown in the band diagram(left), although there are two barriers, electrons still tunnel through via the confined states between two barriers(center), conducting current. Many electronic devices operate under effect of quantum tunneling. It even exists in the simple light switch. 
The switch would not work if electrons could not quantum tunnel through the layer of oxidation on the metal contact surfaces. Flash memory chips found in USB drives use quantum tunneling to erase their memory cells. Some negative differential resistance devices also utilize quantum tunneling effect, such as resonant tunneling diode. Unlike classical diodes, its current is carried by resonant tunneling through two or more potential barriers (see right figure). Its negative resistance behavior can only be understood with quantum mechanics: As the confined state moves close to Fermi level, tunnel current increases. As it moves away, current decreases. Quantum mechanics is necessary to understanding and designing such electronic devices. An inherent advantage yielded by quantum cryptography when compared to classical cryptography is the detection of passive eavesdropping. This is a natural result of the behavior of quantum bits; due to the observer effect, if a bit in a superposition state were to be observed, the superposition state would collapse into an eigenstate. Because the intended recipient was expecting to receive the bit in a superposition state, the intended recipient would know there was an attack, because the bit's state would no longer be in a superposition.[85] Quantum computingEdit Another goal is the development of quantum computers, which are expected to perform certain computational tasks exponentially faster than classical computers. Instead of using classical bits, quantum computers use qubits, which can be in superpositions of states. Quantum programmers are able to manipulate the superposition of qubits in order to solve problems that classical computing cannot do effectively, such as searching unsorted databases or integer factorization. IBM claims that the advent of quantum computing may progress the fields of medicine, logistics, financial services, artificial intelligence and cloud security.[86] Macroscale quantum effectsEdit While quantum mechanics primarily applies to the smaller atomic regimes of matter and energy, some systems exhibit quantum mechanical effects on a large scale. Superfluidity, the frictionless flow of a liquid at temperatures near absolute zero, is one well-known example. So is the closely related phenomenon of superconductivity, the frictionless flow of an electron gas in a conducting material (an electric current) at sufficiently low temperatures. The fractional quantum Hall effect is a topological ordered state which corresponds to patterns of long-range quantum entanglement.[87] States with different topological orders (or different patterns of long range entanglements) cannot change into each other without a phase transition. Quantum theoryEdit Free particleEdit For example, consider a free particle. In quantum mechanics, a free matter is described by a wave function. The particle properties of the matter become apparent when we measure its position and velocity. The wave properties of the matter become apparent when we measure its wave properties like interference. The wave–particle duality feature is incorporated in the relations of coordinates and operators in the formulation of quantum mechanics. Since the matter is free (not subject to any interactions), its quantum state can be represented as a wave of arbitrary shape and extending over space as a wave function. The position and momentum of the particle are observables. 
The Uncertainty Principle states that both the position and the momentum cannot simultaneously be measured with complete precision. However, one can measure the position (alone) of a moving free particle, creating an eigenstate of position with a wave function that is very large (a Dirac delta) at a particular position x, and zero everywhere else. If one performs a position measurement on such a wave function, the resultant x will be obtained with 100% probability (i.e., with full certainty, or complete precision). This is called an eigenstate of position – or, stated in mathematical terms, a generalized position eigenstate (eigendistribution). If the particle is in an eigenstate of position, then its momentum is completely unknown. On the other hand, if the particle is in an eigenstate of momentum, then its position is completely unknown.[90] In an eigenstate of momentum having a plane wave form, it can be shown that the wavelength is equal to h/p, where h is Planck's constant and p is the momentum of the eigenstate.[91]

Particle in a boxEdit

1-dimensional potential energy box (or infinite potential well)

The particle in a one-dimensional potential energy box is the most mathematically simple example where restraints lead to the quantization of energy levels. The box is defined as having zero potential energy everywhere inside a certain region, and therefore infinite potential energy everywhere outside that region. For the one-dimensional case in the x direction, the time-independent Schrödinger equation may be written[92]

−(ħ²/2m) d²ψ/dx² = E ψ.

With the differential operator defined by p̂_x = −iħ d/dx, the previous equation is evocative of the classic kinetic energy analogue E = p_x²/(2m), with state ψ in this case having energy E coincident with the kinetic energy of the particle. The general solutions of this equation are

ψ(x) = A e^{ikx} + B e^{−ikx}, with E = ħ²k²/(2m),

or, from Euler's formula,

ψ(x) = C sin(kx) + D cos(kx).

The infinite potential walls of the box require that ψ = 0 at x = 0 and x = L. At x = 0, ψ(0) = C sin(0) + D cos(0) = D, and so D = 0. At x = L, ψ(L) = C sin(kL) = 0, which for a non-trivial solution requires kL = nπ, n = 1, 2, 3, ..., and hence the quantized energies E_n = n²π²ħ²/(2mL²). The ground state energy of the particle is E_1, for n = 1. The energy of the particle in the nth state is E_n = n²E_1, n = 2, 3, 4, ...

Particle in a box with boundary condition V(x) = 0 for −a/2 < x < +a/2

A particle in a box with a small change in the boundary condition: the well now extends from −a/2 to +a/2. In this condition the general solution is the same, but there is a small change to the final result, since the boundary conditions are changed. At x = 0 the wave function is no longer zero for every value of n. From the variation of the wave function one finds that for n = 1, 3, 5, ... the wave function follows a cosine curve with x = 0 as origin, while for n = 2, 4, 6, ... it follows a sine curve with x = 0 as origin. From this observation we can conclude that the wave functions are alternately cosine and sine, so in this case the resultant wave functions are

ψ_n(x) = A cos(k_n x) for n = 1, 3, 5, ...
ψ_n(x) = B sin(k_n x) for n = 2, 4, 6, ...

Finite potential wellEdit

The finite potential well problem is mathematically more complicated than the infinite particle-in-a-box problem as the wave function is not pinned to zero at the walls of the well. Instead, the wave function must satisfy more complicated mathematical boundary conditions as it is nonzero in regions outside the well.

Rectangular potential barrierEdit

Harmonic oscillatorEdit

This problem can either be treated by directly solving the Schrödinger equation, which is not trivial, or by using the more elegant "ladder method" first proposed by Paul Dirac. The eigenstates are given by

ψ_n(x) = (mω/(πħ))^{1/4} · (1/√(2ⁿ n!)) · H_n(√(mω/ħ) x) · e^{−mωx²/(2ħ)}, n = 0, 1, 2, ...,

where H_n are the Hermite polynomials, and the corresponding energy levels are

E_n = ħω(n + 1/2).

This is another example illustrating the quantization of energy for bound states.
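The quantized spectra quoted above are easy to check numerically. The sketch below (illustrative, not from the article) diagonalizes a finite-difference approximation of the Hamiltonian H = −(ħ²/2m) d²/dx² + V(x) in units ħ = m = 1, recovering E_n = n²E_1 for the box and E_n = ħω(n + 1/2) for the harmonic oscillator, up to small grid errors.

```python
import numpy as np

# Finite-difference check of the bound-state energies discussed above (hbar = m = 1).
def bound_state_energies(x, V, n_levels=4):
    dx = x[1] - x[0]
    diag = 1.0 / dx ** 2 + V                    # hbar^2/(m dx^2) with hbar = m = 1
    off = np.full(len(x) - 1, -0.5 / dx ** 2)   # -hbar^2/(2 m dx^2)
    H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(H)[:n_levels]

# Particle in a box of width L (V = 0 inside, psi forced to zero at the walls).
L = 1.0
x_box = np.linspace(0.0, L, 1500)
E_box = bound_state_energies(x_box, np.zeros_like(x_box))
E1 = np.pi ** 2 / (2.0 * L ** 2)
print(E_box / E1)                                     # close to 1, 4, 9, 16

# Harmonic oscillator with omega = 1 (V = x^2/2): E_n = n + 1/2.
x_ho = np.linspace(-10.0, 10.0, 1500)
print(bound_state_energies(x_ho, 0.5 * x_ho ** 2))    # close to 0.5, 1.5, 2.5, 3.5
```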
Step potentialEdit

The potential in this case is given by:

V(x) = 0 for x < 0, and V(x) = V₀ for x ≥ 0.

The solutions in each region are superpositions of incident and reflected or transmitted plane waves, with coefficients A and B determined from the boundary conditions and by imposing a continuous derivative on the solution, and where the wave vectors are related to the energy via

k₁ = √(2mE)/ħ for x < 0 and k₂ = √(2m(E − V₀))/ħ for x > 0.

See alsoEdit

1. ^ Born, M. (1926). "Zur Quantenmechanik der Stoßvorgänge". Zeitschrift für Physik. 37 (12): 863–867. Bibcode:1926ZPhy...37..863B. doi:10.1007/BF01397477. Retrieved 16 December 2008. 2. ^ Feynman, Richard; Leighton, Robert; Sands, Matthew (1964). The Feynman Lectures on Physics, Vol. 3. California Institute of Technology. p. 1.1. ISBN 978-0201500646. 3. ^ Jaeger, Gregg (September 2014). "What in the (quantum) world is macroscopic?". American Journal of Physics. 82 (9): 896–905. Bibcode:2014AmJPh..82..896J. doi:10.1119/1.4878358. 4. ^ Section 3.2 of Ballentine, Leslie E. (1970), "The Statistical Interpretation of Quantum Mechanics", Reviews of Modern Physics, 42 (4): 358–381, Bibcode:1970RvMP...42..358B, doi:10.1103/RevModPhys.42.358. This fact is experimentally well-known for example in quantum optics (see e.g. chap. 2 and Fig. 2.1 Leonhardt, Ulf (1997), Measuring the Quantum State of Light, Cambridge: Cambridge University Press, ISBN 0 521 49730 2) 5. ^ Matson, John. "What Is Quantum Mechanics Good for?". Scientific American. Retrieved 18 May 2016. 6. ^ The Nobel laureates Watson and Crick cited Pauling, Linus (1939). The Nature of the Chemical Bond and the Structure of Molecules and Crystals. Cornell University Press. for chemical bond lengths, angles, and orientations. 8. ^ Mehra, J.; Rechenberg, H. (1982). The historical development of quantum theory. New York: Springer-Verlag. ISBN 978-0387906423. 9. ^ Kragh, Helge (2002). Quantum Generations: A History of Physics in the Twentieth Century. Princeton University Press. ISBN 978-0-691-09552-3. Extract of p. 58 10. ^ Ben-Menahem, Ari (2009). Historical Encyclopedia of Natural and Mathematical Sciences, Volume 1. Springer. ISBN 978-3540688310. Extract of p. 3678 11. ^ E Arunan (2010). "Peter Debye" (PDF). Resonance. 15 (12). 12. ^ Kuhn, T. S. (1978). Black-body theory and the quantum discontinuity 1894–1912. Oxford: Clarendon Press. ISBN 978-0195023831. 14. ^ Einstein, A. (1905). "Über einen die Erzeugung und Verwandlung des Lichtes betreffenden heuristischen Gesichtspunkt" [On a heuristic point of view concerning the production and transformation of light]. Annalen der Physik. 17 (6): 132–148. Bibcode:1905AnP...322..132E. doi:10.1002/andp.19053220607. Reprinted in The collected papers of Albert Einstein, John Stachel, editor, Princeton University Press, 1989, Vol. 2, pp. 149–166, in German; see also Einstein's early work on the quantum hypothesis, ibid. pp. 134–148. 15. ^ Wolfram, Stephen (2002). A New Kind of Science. Wolfram Media, Inc. p. 1056. ISBN 978-1-57955-008-0. 16. ^ van Hove, Leon (1958). "Von Neumann's contributions to quantum mechanics" (PDF). Bulletin of the American Mathematical Society. 64 (3): Part 2:95–99. doi:10.1090/s0002-9904-1958-10206-2. 17. ^ Feynman, Richard. "The Feynman Lectures on Physics III 21-4". California Institute of Technology. Retrieved 2015-11-24. "...it was long believed that the wave function of the Schrödinger equation would never have a macroscopic representation analogous to the macroscopic representation of the amplitude for photons. On the other hand, it is now realized that the phenomena of superconductivity presents us with just this situation." 18.
^ Richard Packard (2006) "Berkeley Experiments on Superfluid Macroscopic Quantum Effects" Archived November 25, 2015, at the Wayback Machine accessdate=2015-11-24 19. ^ "Quantum – Definition and More from the Free Merriam-Webster Dictionary". Merriam-webster.com. Retrieved 2012-08-18. 20. ^ Thall, Edwin. "Thall's History of Quantum Mechanics". Florida Community College at Jacksonville. Archived from the original on October 7, 2009. Retrieved May 23, 2009. 21. ^ "ysfine.com". Retrieved 11 September 2015. 22. ^ "Quantum Mechanics". geocities.com. 2009-10-26. Archived from the original on 2009-10-26. Retrieved 2016-06-13. 23. ^ Staff (11 October 2018). "Where is it, the foundation of quantum reality?". EurekAlert!. Retrieved 13 October 2018. 24. ^ Blasiak, Pawel (13 July 2018). "Local model of a qudit: Single particle in optical circuits". Physical Review. 98 (012118) (1): 012118. Bibcode:2018PhRvA..98a2118B. doi:10.1103/PhysRevA.98.012118. 26. ^ D. Hilbert Lectures on Quantum Theory, 1915–1927 29. ^ Dirac, P.A.M. (1958). The Principles of Quantum Mechanics, 4th edition, Oxford University Press, Oxford, p. ix: "For this reason I have chosen the symbolic method, introducing the representatives later merely as an aid to practical calculation." 30. ^ Greiner, Walter; Müller, Berndt (1994). Quantum Mechanics Symmetries, Second edition. Springer-Verlag. p. 52. ISBN 978-3-540-58080-5., Chapter 1, p. 52 31. ^ "Heisenberg – Quantum Mechanics, 1925–1927: The Uncertainty Relations". Aip.org. Retrieved 2012-08-18. 32. ^ a b Greenstein, George; Zajonc, Arthur (2006). The Quantum Challenge: Modern Research on the Foundations of Quantum Mechanics, Second edition. Jones and Bartlett Publishers, Inc. p. 215. ISBN 978-0-7637-2470-2., Chapter 8, p. 215 33. ^ Lodha, Suresh K.; Faaland, Nikolai M.; et al. (2002). "Visualization of Uncertain Particle Movement (Proceeding Computer Graphics and Imaging)" (PDF). Actapress.com. Archived (PDF) from the original on 2018-08-01. Retrieved 2018-08-01. 34. ^ Hirshleifer, Jack (2001). The Dark Side of the Force: Economic Foundations of Conflict Theory. Cambridge University Press. p. 265. ISBN 978-0-521-80412-7., Chapter, p. 35. ^ "dict.cc dictionary :: eigen :: German-English translation". dict.cc. Retrieved 11 September 2015. 39. ^ Michael Trott. "Time-Evolution of a Wavepacket in a Square Well – Wolfram Demonstrations Project". Demonstrations.wolfram.com. Retrieved 2010-10-15. 41. ^ Mathews, Piravonu Mathews; Venkatesan, K. (1976). A Textbook of Quantum Mechanics. Tata McGraw-Hill. p. 36. ISBN 978-0-07-096510-2., Chapter 2, p. 36 43. ^ Rechenberg, Helmut (1987). "Erwin Schrödinger and the creation of wave mechanics" (PDF). Acta Physica Polonica B. 19 (8): 683–695. Retrieved 13 June 2016. 44. ^ Nancy Thorndike Greenspan, "The End of the Certain World: The Life and Science of Max Born" (Basic Books, 2005), pp. 124–128, 285–826. 45. ^ "Archived copy" (PDF). Archived from the original (PDF) on 2011-07-19. Retrieved 2009-06-04.CS1 maint: Archived copy as title (link) 47. ^ Carl M. Bender; Daniel W. Hook; Karta Kooner (2009-12-31). "Complex Elliptic Pendulum". arXiv:1001.0131 [hep-th]. 49. ^ Tipler, Paul; Llewellyn, Ralph (2008). Modern Physics (5 ed.). W.H. Freeman and Company. pp. 160–161. ISBN 978-0-7167-7550-8. 51. ^ Einstein, A.; Podolsky, B.; Rosen, N. (1935). "Can quantum-mechanical description of physical reality be considered complete?". Phys. Rev. 47 (10): 777. Bibcode:1935PhRv...47..777E. doi:10.1103/physrev.47.777. 52. ^ N.P. Landsman (June 13, 2005). 
"Between classical and quantum" (PDF). Retrieved 2012-08-19. Handbook of the Philosophy of Science Vol. 2: Philosophy of Physics (eds. John Earman & Jeremy Butterfield). 54. ^ "Atomic Properties". Academic.brooklyn.cuny.edu. Retrieved 2012-08-18. 55. ^ http://assets.cambridge.org/97805218/29526/excerpt/9780521829526_excerpt.pdf 61. ^ a b Heisenberg, W. (1927). Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik, Z. Phys. 43: 172–198. Translation as 'The actual content of quantum theoretical kinematics and mechanics' here [1], "But in the rigorous formulation of the law of causality, – "If we know the present precisely, we can calculate the future" – it is not the conclusion that is faulty, but the premise." 67. ^ Bohr, N. (1928). "The Quantum postulate and the recent development of atomic theory". Nature. 121 (3050): 580–590. Bibcode:1928Natur.121..580B. doi:10.1038/121580a0. 70. ^ "There is as yet no logically consistent and complete relativistic quantum field theory.", p. 4.   – V.B. Berestetskii, E.M. Lifshitz, L.P. Pitaevskii (1971). J.B. Sykes, J.S. Bell (translators). Relativistic Quantum Theory 4, part I. Course of Theoretical Physics (Landau and Lifshitz) ISBN 0-08-016025-5 71. ^ "Stephen Hawking; Gödel and the end of physics". cam.ac.uk. Retrieved 11 September 2015. 72. ^ Hawking, Stephen; Penrose, Roger (2010). The Nature of Space and Time. ISBN 978-1400834747. Retrieved 11 September 2015. 73. ^ Tatsumi Aoyama; Masashi Hayakawa; Toichiro Kinoshita; Makiko Nio (2012). "Tenth-Order QED Contribution to the Electron g-2 and an Improved Value of the Fine Structure Constant". Physical Review Letters. 109 (11): 111807. arXiv:1205.5368v2. Bibcode:2012PhRvL.109k1807A. doi:10.1103/PhysRevLett.109.111807. PMID 23005618. 77. ^ Harrison, Edward (2000). Cosmology: The Science of the Universe. Cambridge University Press. p. 239. ISBN 978-0-521-66148-5. 79. ^ Wolfram, Stephen (2002). A New Kind of Science. Wolfram Media, Inc. p. 1058. ISBN 978-1-57955-008-0. 81. ^ The Transactional Interpretation of Quantum Mechanics by John Cramer Reviews of Modern Physics 58, 647–688, July (1986) 82. ^ The Transactional Interpretation of quantum mechanics. R.E. Kastner. Cambridge University Press. 2013. ISBN 978-0-521-76415-5. p. 35. 83. ^ See, for example, the Feynman Lectures on Physics for some of the technological applications which use quantum mechanics, e.g., transistors (vol III, pp. 14–11 ff), integrated circuits, which are follow-on technology in solid-state physics (vol II, pp. 8–6), and lasers (vol III, pp. 9–13). 84. ^ Pauling, Linus; Wilson, Edgar Bright (1985). Introduction to Quantum Mechanics with Applications to Chemistry. ISBN 9780486648712. Retrieved 2012-08-18. 85. ^ Schneier, Bruce (1993). Applied Cryptography (2nd ed.). Wiley. p. 554. ISBN 978-0471117094. 86. ^ "Applications of Quantum Computing". research.ibm.com. Retrieved 28 June 2017. 87. ^ Chen, Xie; Gu, Zheng-Cheng; Wen, Xiao-Gang (2010). "Local unitary transformation, long-range quantum entanglement, wave function renormalization, and topological order". Phys. Rev. B. 82 (15): 155138. arXiv:1004.3835. Bibcode:2010PhRvB..82o5138C. doi:10.1103/physrevb.82.155138. 88. ^ Anderson, Mark (2009-01-13). "Is Quantum Mechanics Controlling Your Thoughts? | Subatomic Particles". Discover Magazine. Retrieved 2012-08-18. 90. ^ Davies, P.C.W.; Betts, David S. (1984). Quantum Mechanics, Second edition. Chapman and Hall. ISBN 978-0-7487-4446-6., [https://books.google.com/books?id=XRyHCrGNstoC&pg=PA79 Chapter 6, p. 79 91. 
^ Baofu, Peter (2007). The Future of Complexity: Conceiving a Better Way to Understand Order and Chaos. Bibcode:2007fccb.book.....B. ISBN 9789812708991. Retrieved 2012-08-18. 92. ^ Derivation of particle in a box, chemistry.tidalswan.com 1. ^ N.B. on precision: If δx and δp are the precisions of position and momentum obtained in an individual measurement and σx, σp their standard deviations in an ensemble of individual measurements on similarly prepared systems, then "There are, in principle, no restrictions on the precisions of individual measurements δx and δp, but the standard deviations will always satisfy σx σp ≥ ħ/2".[4]
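The distinction drawn in this note between single-shot precisions and ensemble standard deviations can be made concrete numerically. The sketch below (illustrative only, with ħ = 1 and an arbitrary packet width) computes σx from |ψ(x)|² and σp from the Fourier transform of a Gaussian wave packet, and finds the product saturating the bound σx σp = ħ/2.

```python
import numpy as np

# Numerical check that a Gaussian wave packet saturates sigma_x * sigma_p = hbar/2
# (units with hbar = 1; grid and packet width are arbitrary choices).
N, L = 4096, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = L / N

a = 1.3                                              # packet width parameter
psi = (2 * np.pi * a ** 2) ** -0.25 * np.exp(-x ** 2 / (4 * a ** 2))

def std_dev(values, weights):
    w = weights / weights.sum()
    mean = np.sum(values * w)
    return np.sqrt(np.sum((values - mean) ** 2 * w))

sigma_x = std_dev(x, np.abs(psi) ** 2)

phi = np.fft.fft(psi)                                # momentum-space amplitude
p = 2 * np.pi * np.fft.fftfreq(N, d=dx)              # p = hbar k with hbar = 1
sigma_p = std_dev(p, np.abs(phi) ** 2)

print(sigma_x, sigma_p, sigma_x * sigma_p)           # product is approximately 0.5
```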
Quantum machine learning of quantum data with NISQ devices Introduction: the advent of quantum computers and machine learning For some problems in physical science, we need to take care of quantum information to compute or simulate target physics because it is described by quantum mechanics. In most cases, however, it is hard to handle such quantum information on classical computers. Given that, it is natural to think of processing quantum information as it is by a quantum computer, like Feynman’s famous quote, “Nature isn’t classical, dammit, and if you want to make a simulation of nature, you’d better make it quantum mechanical”[1]. Today, we are witnessing the realization of prototypical digital quantum computers; quantum devices consisting of hundreds to thousands of non-fault tolerant qubits (quantum bits), the so-called noisy intermediate-scale quantum (NISQ) devices, have been developed rapidly in the past few years. Some commercial companies start providing access to their early-stage NISQ devices to their customers, although the devices seem still too small and noisy to execute any industrially-competitive tasks. Meanwhile, the rapid growth of machine learning algorithms has been applied to various fields in physical science, allowing us to solve problems that have no analytical solution and require a huge amount of computational resources even with numerical methods. So, even though the allowed computations on the NISQ devices are quite limited, combining machine learning algorithms with the quantum devices may circumvent such computational burdens for solving quantum problems, and they may accelerate the applications of the NISQ devices to reach the industrial level. Background: solving quantum chemistry by a quantum machine Quantum chemistry, which studies properties of chemical materials based on quantum mechanics, is one of those fields with wide applications of machine learning. The main task of quantum chemistry is to solve the Schrödinger equation, a fundamental equation of quantum mechanics, to predict various chemical phenomena such as chemical reactions. Although many computational methods for solving the equation on classical computers have been developed for decades, they often require gigantic computational resources, which prevent us from investigating many interesting and industrially-important processes such as photo-excitation dynamics of molecules. One of the promising ways to make use of the NISQ devices for quantum chemistry is the so-called variational quantum eigensolver (VQE), which is a variational algorithm finding the ground state (the most stable state) of a given quantum system [2]. The key idea of the VQE is to combine a programmable and parametrized quantum circuit implemented on the NISQ devices with a classical optimization technique. By optimizing the quantum circuit classically, the VQE algorithm can find the ground state of the system with only a shallow quantum circuit that is presumably executable even on the NISQ devices. The ground state does not describe the whole physics of the quantum system, but the excited states are also essential to analyze it. Nonetheless, even though extensions of the VQE to find excited states have been proposed recently by several researchers [3], they usually require a larger number of runs of the quantum circuit than the VQE does, and therefore it is not so easy to run them on the real NISQ devices. 
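To illustrate the structure of a VQE loop, here is a deliberately tiny sketch (a one-qubit toy Hamiltonian and ansatz chosen purely for illustration, not the molecular VQE of Ref. [2]): a parametrized state is prepared, its energy expectation value is evaluated (on hardware this would be estimated from repeated measurements), and a classical optimizer updates the circuit parameter.

```python
import numpy as np
from scipy.optimize import minimize

# Toy one-qubit VQE: minimize <psi(theta)| H |psi(theta)> over the circuit parameter.
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
H = 0.5 * Z + 0.3 * X                          # illustrative Hamiltonian

def ansatz(theta):
    # State produced by a single Ry(theta) rotation acting on |0>.
    return np.array([np.cos(theta / 2.0), np.sin(theta / 2.0)])

def energy(params):
    psi = ansatz(params[0])
    return float(psi @ H @ psi)                # exact here; sampled from repeated
                                               # measurements on a real device

result = minimize(energy, x0=[0.1], method="COBYLA")
print("VQE energy:   ", result.fun)
print("exact ground: ", np.linalg.eigvalsh(H)[0])   # both about -0.583
```

The same structure, quantum expectation values evaluated inside a classical optimization loop, is what allows the quantum circuit itself to stay shallow.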
Our proposal: machine learning of "quantum" data To present another approach to find the excited states of quantum systems, Hiroki Kawai (a former intern at QunaSys Inc.) and Yuya O. Nakagawa (a lead researcher at QunaSys Inc.) proposed a simple quantum machine learning scheme, or a machine learning algorithm utilizing a quantum computer [4]. Their proposal aims to learn physical quantities from quantum data. Its capability was demonstrated to predict the properties of electronic excited states of small molecules. The model is trained with supervised machine learning from only its ground state wavefunction realized as a qubit state on the NISQ devices. It has the following distinct features: ・It learns wavefunctions of the target system that are "quantum" data. Most quantum machine learning algorithms treat classical data such as text, images, etc., and have the difficulty of encoding classical data in quantum states on quantum computers. Learning from quantum data is a natural setup of performing quantum machine learning, and there is no overhead in encoding the data into quantum states because the data are already quantum. ・Quantum information should give a more accurate description of the molecular system than classical information, which is so far used for classical machine learning methods applied to quantum chemistry. ・ It reduces the cost of finding the excited states because it only requires the quantum data of the ground state obtained by the usual VQE whose resource requirements are less than finding the excited states by the extensions of the VQE. ・It may be executable on the NISQ devices since the required circuit depth is shallow. Figure 1. Schematic diagram of our model. |ψ> is the input state (the ground state of a molecular Hamiltonian in our setup). U_ent is the quantum reservoir. The gauges indicate single-qubit measurements from which one obtains the classical vector x, and the vector is fed into the machine learning unit f with weights W. Let us describe the details of the algorithm. As well as the VQE, it has both the quantum and classical parts (Fig. 1). First, we input a quantum state that is the ground state of the target Hamiltonian, which is computed by the VQE beforehand. The quantum part consists of a random quantum circuit called quantum reservoir (specified as U_ent in Fig. 1) and the single-qubit Pauli measurements. The reservoir, which can be any kind of random quantum circuit and is fixed during the learning, increases the entanglement of the input quantum state so that one can obtain non-local information from only the measurements of the local, single-qubit Pauli operators (we note that quantum reservoir was introduced in [5]). The measurements in the quantum part of the algorithm give a 3N-dimensional classical real-valued vector where N is the number of qubits. Conducting only the single-qubit Pauli measurements makes it easy to implement the algorithm on the near-term NISQ devices. Moreover, as few as three different circuits are needed to obtain the vector x since the single-qubit operators acting on different qubits commute with each other, and hence the measurements of them can be performed simultaneously. After obtaining the classical vector, the algorithm proceeds to the classical part which is a simple classical machine learning unit, e.g. linear regression. The classical machine learning unit is trained to predict the excited-state properties of the target Hamiltonian from this vector x. More details can be found in the original paper [4]. 
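The pipeline in Fig. 1 can be mimicked end to end on a classical simulator. The sketch below is a rough illustration only (random input states and a made-up target property stand in for the VQE ground states and excited-state quantities of the paper): a fixed random unitary plays the role of U_ent, the 3N single-qubit Pauli expectation values form the feature vector x, and an ordinary linear regression is trained on it.

```python
import numpy as np
from scipy.stats import unitary_group
from sklearn.linear_model import LinearRegression

n_qubits = 3
dim = 2 ** n_qubits
paulis = [np.array([[0, 1], [1, 0]], dtype=complex),      # X
          np.array([[0, -1j], [1j, 0]]),                   # Y
          np.array([[1, 0], [0, -1]], dtype=complex)]      # Z

def single_qubit_op(P, site):
    # Tensor product I x ... x P x ... x I with P acting on qubit `site`.
    op = np.array([[1.0 + 0j]])
    for q in range(n_qubits):
        op = np.kron(op, P if q == site else np.eye(2))
    return op

U_ent = unitary_group.rvs(dim, random_state=0)             # fixed random "reservoir"

def features(psi):
    phi = U_ent @ psi                                      # state after the reservoir
    return np.array([np.real(phi.conj() @ single_qubit_op(P, q) @ phi)
                     for q in range(n_qubits) for P in paulis])

# Toy data set: random input states and a placeholder target property.
rng = np.random.default_rng(1)
raw = rng.normal(size=(60, dim)) + 1j * rng.normal(size=(60, dim))
states = raw / np.linalg.norm(raw, axis=1, keepdims=True)
X = np.array([features(s) for s in states])
y = np.array([np.abs(s[0]) ** 2 for s in states])          # placeholder "property";
                                                           # in the paper y would be an
                                                           # excited-state quantity

model = LinearRegression().fit(X[:50], y[:50])
print("held-out R^2:", model.score(X[50:], y[50:]))        # only approximate for this toy target
```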
Numerical Results

In [4], numerical simulations for small molecules (LiH and H_4) with various geometric structures were performed to demonstrate the applicability of the method to problems in quantum chemistry. The Hamiltonians for the electronic states of the two molecules are considered. The first and second excited energies of the Hamiltonian and the transition dipole moment from the ground state to the first excited state are chosen as the target excited-state properties that the proposed algorithm seeks to predict from the ground state wavefunctions. The authors numerically simulated two situations: one is the noiseless situation, where the ideal outputs from the quantum circuits are available, and the other is the noisy situation, where the quantum circuit has noise and the measurement results have statistical and systematic errors.

Figure 2. The prediction results by the trained model for LiH with various bond lengths for the noiseless simulations (top row) and the noisy simulations (bottom row). The left, center, and right columns present the results for the 1st excitation energy, the 2nd excitation energy, and the transition dipole moment, respectively. AE represents the absolute error between the predictions and the exact values, and MAE denotes the mean of the AE.

The prediction results for the LiH molecule as an example are shown in Fig. 2. We can see that the proposed algorithm predicts the exact values of the excited-state properties with high precision in the noiseless cases. Even in noisy cases resembling the actual NISQ devices, it still reproduces the values to a good approximation.

In conclusion, Kawai and Nakagawa introduced a new quantum machine learning scheme for predicting the excited-state properties of a molecular Hamiltonian from its ground state wavefunction. It consists of a random quantum circuit called a quantum reservoir and a simple classical machine learning unit. The numerical simulations demonstrated that it can accurately predict the target excited-state properties and has the potential to be implemented on a near-term NISQ device. Although the numerical demonstrations were performed only for small molecules, we expect that it is applicable to larger molecules appearing in more practical applications in quantum chemistry and material science in the future.

[1] Feynman R P, International Journal of Theoretical Physics 21 467–488 (1982)
[2] Peruzzo A, McClean J, Shadbolt P, Yung M H, Zhou X Q, Love P J, Aspuru-Guzik A and O’Brien J L, Nature Communications 5 4213 (2014)
[3] McClean J R, Kimchi-Schwartz M E, Carter J and de Jong W A, Phys. Rev. A 95, 042308 (2017), Colless J I, Ramasesh V V, Dahlen D, Blok M S, Kimchi-Schwartz M E, McClean J R, Carter J, de Jong W A and Siddiqi I, Phys. Rev. X 8, 011021 (2018), Nakanishi K M, Mitarai K and Fujii K, Phys. Rev. Research 1, 033062 (2019), see also this blog post, Parrish R M, Hohenstein E G, McMahon P L and Martínez T J, Phys. Rev. Lett. 122, 230401 (2019), Higgott O, Wang D and Brierley S, Quantum 3 156 (2019), Jones T, Endo S, McArdle S, Yuan X and Benjamin S C, Phys. Rev. A 99 062304 (2019), Ollitrault P J, Kandala A, Chen C F, Barkoutsos P K, Mezzacapo A, Pistoia M, Sheldon S, Woerner S, Gambetta J and Tavernelli I, Phys. Rev. Research 2, 043140 (2020), Tilly J, Jones G, Chen H, Wossnig L and Grant E, Phys. Rev. A 102, 062425 (2020)
[4] Kawai H and Nakagawa Y O, Mach. Learn.: Sci. Technol. 1, 045027 (2020)
[5] Fujii K and Nakajima K, Phys. Rev. Applied 8, 024030 (2017)

Written by Hiroki Kawai and Yuya O. Nakagawa.
QunaSys keeps developing efficient quantum algorithms to accelerate various applications of quantum computers. Our mission is to enthusiastically develop technologies that bring out the maximum potential of quantum computers and to continually deliver innovations to society.
Interpretations of Quantum Mechanics

Wikipedia has a most comprehensive page on the Interpretations of Quantum Mechanics. As with our analysis of positions on the free will problem, there are many interpretations, some very popular (with many adherents), some with only a few supporters. The popular views are defended in hundreds of journal articles and published books. Just as with philosophers, the supporters of an interpretation often have their own jargon, which sometimes makes communication between the different positions difficult.
The standard "orthodox" interpretation of quantum mechanics includes a projection postulate. This is the idea that once one of the possible locations for a particle becomes actual at one position, the probabilities for actualization at all other positions become instantly zero. This sudden disappearance of possibilities/probabilities at locations remote from where a particle is actually found is called nonlocality. It was first seen as early as 1905 by Albert Einstein. "Projection" or "reduction of the wave packet" is known as the "collapse of the wave function," although the wave function itself does not "collapse" in the sense of gathering itself together at the single point where a particle is found. All that changes is our knowledge about the particle, where it is actually found. What changes is only abstract immaterial information about the particle's location. In the two-slit experiment, for example, the wave function actually does not change at all, since it just depends on the boundary conditions in the experiment, which do not change because one particle has been found. Every future experiment with the same conditions has exactly the same wave function and thus the same probabilities for finding a particle. Unless, of course, we change from one slit open to both open, or vice versa. Another similar particle entering the same space, after the first particle has been detected and thus removed from the space, would have the same probability distribution, since the wave function is determined by the solution of the Schrödinger equation, given the boundary conditions for the space and the wavelength of the particle. The wave function is simply immaterial information. It remains a mystery how it controls (if it controls) the motions of individual particles so that the predicted probabilities agree perfectly with the statistics of large numbers of identical experiments. Today there appear to be about as many unorthodox interpretations, denying Paul Dirac's projection postulate, as there are more standard views.
No-Collapse Interpretations
Pilot-Wave Theory - deterministic, non-local, hidden variables, no observer, particles (de Broglie-Bohm, 1952)
Many-Worlds Interpretation - deterministic, local, hidden variables, no observer (Everett-De Witt, 1957)
Time-Symmetric Theory (Aharonov, 1964)
Decoherence - deterministic, local, no particles (Zeh-Zurek, 1970)
Consistent Histories - local (Griffiths-Omnès-Gell-Mann-Hartle, 1984)
Cosmological Interpretation (Aguirre and Tegmark, 2012)

Collapse Interpretations
Copenhagen Interpretation - indeterministic, non-local, observer (Bohr-Heisenberg-Born-Jordan, 1927)
Conscious Observer - indeterministic, non-local, observer (Von Neumann-Wigner)
Statistical Ensemble - indeterministic, non-local, no observer (Einstein-Born-Ballentine)
Objective Collapse - indeterministic, non-local, no observer (Ghirardi-Rimini-Weber, 1986; Penrose, 1989)
Transactional Interpretation - indeterministic, non-local, no observer, no particles (Cramer, 1986)
Relational Interpretation - local, observer (Rovelli, 1994)
Pondicherry Interpretation - indeterministic, non-local, no observer (Mohrhoff, 2005)
Information Interpretation - indeterministic, non-local, no observer (Doyle, 2015)

From the earliest days of quantum theory, when Max Planck in 1900 hypothesized an abstract "quantum of action" and Albert Einstein in 1905 hypothesized that energy comes in physical quanta, there have been disagreements about "interpretations," misunderstandings about the underlying "reality" of the external world that could account for the apparent agreement between quantum theory and the observed experimental facts. For example, the inventor of the quantum of action used his constant h as a heuristic device to calculate the probabilities of various virtual oscillators (distributing them among energy states using Boltzmann's statistical mechanics ideas, the partition function, etc.). He quantized these mechanical oscillators, but not the radiation field itself. In 1913, Bohr similarly quantized the oscillators (electrons) in the "old quantum theory" and his planetary model of the electrons orbiting the Rutherford nucleus. Bohr's electrons "jump" discontinuously from orbit to orbit, emitting or absorbing discrete amounts of energy En - Em where n and m are orbital "quantum numbers." But Bohr insisted that the energy radiated in a quantum jump was continuous, ignoring Einstein's hypothesis.
He also saw that the wave that filled space moments before the detection of the whole quantum of energy must disappear instantly as all the energy in the quantum is absorbed by a single atom in a particular location. This was a collapse of a light wave twenty years before there was a "wave function" and Erwin Schrödinger's wave equation! Later Einstein interpreted the wave at a point as the probability of light quanta at that point, many years before Max Born's statistical interpretation of the wave function!

The idea of something (later called the wave function) associated with the particle led to the problem of wave-particle duality, described first by Einstein in 1909. In 1927, he expressed concern that what came to be called nonlocality violates his special theory of relativity. To this day, it drives the idea that quantum physics cannot be reconciled with relativity. It can.

The nadir of interpretation was probably the most famous interpretation of all, the one developed in Copenhagen, the one Niels Bohr's assistant Leon Rosenfeld said was not an interpretation at all, but simply the "standard orthodox theory" of quantum mechanics. It was the nadir of interpretation because Copenhagen wanted to put a stop to "interpretation" in the sense of understanding or "visualizing" an underlying reality. The Copenhageners said we should not try to "visualize" what is going on behind the collection of observable experimental data. Just as Kant said we could never know anything about the "thing in itself," the Ding-an-sich, so the positivist philosophy of Comte, Mach, Russell, and Carnap and the British empiricists Locke and Hume claim that knowledge stops at the "secondary" sense data or perceptions of phenomena, preventing access to the primary "objects."

Einstein's views on quantum mechanics have been seriously distorted (and his early work largely forgotten), perhaps because of his famous criticisms of Born's "statistical interpretation" and Werner Heisenberg's claim that quantum mechanics was "complete" without describing what particles are doing from moment to moment. Though its foremost critic, Einstein frequently said that quantum mechanics was a most successful theory, the very best theory so far at explaining microscopic phenomena, but that he hoped his ideas for a continuous field theory would someday add to the discrete particle theory and its "non-local" phenomena. It would allow us to get a deeper understanding of underlying reality, though at the end he despaired for his continuous field theory compared to particle theories.

Many of the "interpretations" of quantum mechanics deny a central element of quantum theory, one that Einstein himself established in 1916, namely the role of indeterminism, or "chance," to use its traditional name, as Einstein did in physics (in German, Zufall) and as William James did in philosophy in the 1880's. These interpretations hope to restore the determinism of classical mechanics. Einstein hoped for a return to deterministic physics, but even more important for him was a physics based on continuous fields, rather than discrete discontinuous particles.

We can therefore classify various interpretations by whether they accept or deny chance, especially in the form of the so-called "collapse" of the wave function, also known as the "reduction" of the wave packet or what Paul Dirac called the "projection postulate." Most "no-collapse" theories are deterministic. "Collapses" in standard quantum mechanics are irreducibly indeterministic.
And a great surprise is that the wave function in fact does not collapse!

Many interpretations are attempts to wrestle with still another problem that Einstein saw as early as 1905: in "non-local" events something appears to be moving faster than light and thus violating his special theory of relativity (which he formulated in 1905). So we can classify interpretations by whether they accept the instantaneous nature of the collapse, especially the collapse of the two-particle wave function of "entangled" systems, where two particles appear instantly in widely separated places, with correlated properties that conserve energy, momentum, angular momentum, spin, etc. These interpretations are concerned about nonlocality - the idea that "reality" is "nonlocal," with simultaneous events in widely separated places correlated perfectly - a sort of "action-at-a-distance."

Many interpretations prefer wave mechanics to quantum mechanics, seeing wave theories as continuous field theories. They like to regard the wave function as a real entity rather than an abstract possibilities function. De Broglie's pilot-wave theory and its variations (e.g., Bohmian mechanics, Schrödinger's view) hoped to represent the particle as a "wave packet" composed of many waves of different frequencies, such that the packet has non-zero values in a small volume of space. Schrödinger and others found that such a wave packet rapidly disperses.

Finally, we may also classify interpretations by their definitions of what constitutes a "measurement," and particularly what they see as the famous "problem of measurement." Niels Bohr, Werner Heisenberg, and John von Neumann had a special role for the "conscious observer" in a measurement. Eugene Wigner claimed that the observer's conscious mind caused the wave function to collapse in a measurement.

So we have three major characterizations - indeterministic-discrete-discontinuous "collapse" vs. deterministic-continuous "no-collapse" theories, nonlocality-faster-than-light vs. local "elements of reality" in "realistic" theories, and the role of the observer.

Another way to look at an interpretation is to ask which basic element (or elements) of standard quantum mechanics does the interpretation question or just deny? For example, some interpretations deny the existence of particles. They admit only waves that evolve unitarily under the Schrödinger equation. We can begin by describing those elements, using the formulation of quantum mechanics that Einstein thought most perfect, that of P. A. M. Dirac.

A Brief Introduction to Basic Quantum Mechanics

Einstein said of Dirac in 1930, "Dirac, to whom, in my opinion, we owe the most perfect exposition, logically, of this [quantum] theory."

All of quantum mechanics rests on the Schrödinger equation of motion that deterministically describes the time evolution of the probabilistic wave function, plus three basic assumptions: the principle of superposition (of wave functions), the axiom of measurement (of expectation values for observables), and the projection postulate (which describes the collapse of the wave function that introduces indeterminism or chance during interactions). Dirac's "transformation theory" then allows us to "represent" the initial wave function (before an interaction) in terms of a "basis set" of "eigenfunctions" appropriate for the possible quantum states of our measuring instruments that will describe the interaction.
Elements in the "transformation matrix" immediately give us the probabilities of measuring the system and finding it in one of the possible quantum states or "eigenstates," each eigenstate corresponding to an "eigenvalue" for a dynamical operator like the energy, momentum, angular momentum, spin, polarization, etc. Diagonal (n, n) elements in the transformation matrix give us the eigenvalues for observables in quantum state n. Off-diagonal (n, m) matrix elements give us transition probabilities between quantum states n and m. Notice the sequence - possibilities > probabilities > actuality: the wave function gives us the possibilities, for which we can calculate probabilities. Each experiment gives us one actuality. A very large number of identical experiments confirms our probabilistic predictions to thirteen significant figures (decimal places), the most accurate physical theory ever discovered. 1. The Schrōdinger Equation. The fundamental equation of motion in quantum mechanics is Erwin Schrōdinger's famous wave equation that describes the evolution in time of his wave function ψ. iℏ δψ / δt = H ψ         (1) Max Born interpreted the square of the absolute value of Schrōdinger's wave function |ψn |2 (or < ψn | ψn > in Dirac notation) as providing the probability of finding a quantum system in a particular state n. As long as this absolute value (in Dirac bra-ket notation) is finite, < ψn | ψn > ≡ ψ* (q) ψ (q) dq < ,         (2) then ψ can be normalized, so that the probability of finding a particle somewhere < ψ | ψ > = 1, which is necessary for its interpretation as a probability. The normalized wave function can then be used to calculate "observables" like the energy, momentum, etc. For example, the probable or expectation value for the position r of the system, in con figuration space q, is < ψ | r | ψ > = ψ* (q) r ψ (q) dq.         (3) 2. The Principle of Superposition. The Schrōdinger equation (1) is a linear equation. It has no quadratic or higher power terms, and this introduces a profound - and for many scientists and philosophers a disturbing - feature of quantum mechanics, one that is impossible in classical physics, namely the principle of superposition of quantum states. If ψa and ψb are both solutions of equation (1), then an arbitrary linear combination of these, | ψ > = ca | ψa > + cb | ψb >,         (4) with complex coefficients ca and cb, is also a solution. Together with Born's probabilistic (statistical) interpretation of the wave function, the principle of superposition accounts for the major mysteries of quantum theory, some of which we hope to resolve, or at least reduce, with an objective (observer-independent) explanation of irreversible information creation during quantum processes. Observable information is critically necessary for measurements, though observers can come along anytime after the information comes into existence as a consequence of the interaction of a quantum system and a measuring apparatus. The quantum (discrete) nature of physical systems results from there generally being a large number of solutions ψn (called eigenfunctions) of equation (1) in its time independent form, with energy eigenvalues En. H ψn = En ψn,         (5) The discrete spectrum energy eigenvalues En limit interactions (for example, with photons) to specifi c energy diff erences En - Em. 
In the old quantum theory, Bohr postulated that electrons in atoms would be in "stationary states" of energy En, and that energy differences would be of the form En - Em = hν, where ν is the frequency of the observed spectral line. Einstein, in 1916, derived these two Bohr postulates from basic physical principles in his paper on the emission and absorption processes of atoms. What for Bohr were assumptions, Einstein grounded in quantum physics, though virtually no one appreciated his foundational work at the time, and few appreciate it today, his work eclipsed by the Copenhagen physicists.

The eigenfunctions ψn are orthogonal to each other,

< ψn | ψm > = δnm         (6)

where the "delta function"

δnm = 1, if n = m, and = 0, if n ≠ m.         (7)

Once they are normalized, the ψn form an orthonormal set of functions (or vectors) which can serve as a basis for the expansion of an arbitrary wave function φ,

| φ > = Σn cn | ψn >,         (8)

where the sum runs over n from 0 to ∞. The expansion coefficients are

cn = < ψn | φ >.         (9)

In the abstract Hilbert space, < ψn | φ > is the "projection" of the vector φ onto the orthogonal axes ψn of the ψn "basis" vector set.

2.1 An example of superposition.

Dirac tells us that a diagonally polarized photon can be represented as a superposition of vertical and horizontal states, with complex number coefficients that represent "probability amplitudes." Horizontal and vertical polarization eigenstates are the only "possibilities," if the measurement apparatus is designed to measure for horizontal or vertical polarization.

| d > = (1/√2) | v > + (1/√2) | h >         (10)

The vectors (wave functions) v and h are the appropriate choice of basis vectors, the vector lengths are normalized to unity, and the sum of the squares of the probability amplitudes is also unity. This is the orthonormality condition needed to interpret the (squares of the) wave functions as probabilities. When these (in general complex) number coefficients (1/√2) are squared (actually when they are multiplied by their complex conjugates to produce positive real numbers), the numbers (1/2) represent the probabilities of finding the photon in one or the other state, should a measurement be made on an initial state that is diagonally polarized.

Note that if the initial state of the photon had been vertical, its projection along the vertical basis vector would be unity, its projection along the horizontal vector would be zero. Our probability predictions then would be - vertical = 1 (certainty), and horizontal = 0 (also certainty). Quantum physics is not always uncertain, despite its reputation.

3. The Axiom of Measurement.

The axiom of measurement depends on the idea of "observables," physical quantities that can be measured in experiments. A physical observable is represented as an operator A that is "Hermitean" (one that is "self-adjoint" - equal to its own adjoint, A* = A). The diagonal n, n elements of the operator's matrix,

< ψn | A | ψn > = ∫ ψ* (q) A (q) ψ (q) dq,         (11)

give the expectation values of the observable A when the system is in the state ψn.

The molecule suffers a recoil in the amount of hν/c during this elementary process of emission of radiation; the direction of the recoil is, at the present state of theory, determined by "chance"... The weakness of the theory is, on the one hand, that it does not bring us closer to a link-up with the wave theory; on the other hand, it also leaves time of occurrence and direction of the elementary processes a matter of "chance."
Albert Einstein, 1916

It is the intrinsic quantum probabilities that provide the ultimate source of indeterminism, and consequently of irreducible irreversibility, as we shall see. Transitions between states are irreducibly random, like the decay of a radioactive nucleus (discovered by Rutherford in 1901) or the emission of a photon by an electron transitioning to a lower energy level in an atom (explained by Einstein in 1916).

The axiom of measurement is the formalization of Bohr's 1913 postulate that atomic electrons will be found in stationary states with energies En. In 1913, Bohr visualized them as orbiting the nucleus. Later, he said they could not be visualized, but chemists routinely visualize them as clouds of probability amplitude with easily calculated shapes that correctly predict chemical bonding. The off-diagonal transition probabilities are the formalism of Bohr's "quantum jumps" between his stationary states, emitting or absorbing energy En - Em. Einstein explained clearly in 1916 that the jumps are accompanied by his discrete light quanta (photons), but Bohr continued to insist that the radiation was classical for another ten years, deliberately ignoring Einstein's foundational efforts in what Bohr might have felt was his area of expertise (quantum mechanics).

The axiom of measurement asserts that a large number of measurements of the observable A, known to have eigenvalues An, will result in the number of measurements with value An being proportional to the probability of finding the system in eigenstate ψn. Quantum mechanics is a probabilistic and statistical theory. The probabilities are theories about what experiments will show. Experiments provide the statistics (the frequency of outcomes) that confirm the predictions of quantum theory - with the highest accuracy of any theory ever discovered!

4. The Projection Postulate.

The third novel idea of quantum theory is often considered the most radical. It has certainly produced some of the most radical ideas ever to appear in physics, in attempts by various "interpretations" to deny it. The projection postulate is actually very simple, and arguably intuitive as well. It says that when a measurement is made, the system of interest will be found in (will instantly "collapse" into) one of the possible eigenstates of the measured observable. We have several possibilities for eigenvalues. We can calculate the probabilities for each eigenvalue. Measurement simply makes one of these actual, and it does so, said Max Born, in proportion to the absolute square of the probability amplitude wave function ψn. Note that Einstein saw the role of chance in quantum theory at least ten years before Born.

In this way, ontological chance enters physics, and it is partly this fact of quantum randomness that bothered Einstein ("God does not play dice") and Schrödinger (whose equation of motion for the probability-amplitude wave function is deterministic). The projection postulate, or collapse of the wave function, is the element of quantum mechanics most often denied by various "interpretations." The sudden discrete and discontinuous "quantum jumps" are considered so non-intuitive that interpreters have replaced them with the most outlandish (literally) alternatives.
The famous "many-worlds interpretation" substitutes a "splitting" of the entire universe into two equally large universes, massively violating the most fundamental conservation principles of physics, rather than allow a diagonal photon arriving at a polarizer to suddenly "collapse" into a horizontal or vertical state. 4.1 An example of projection. Given a quantum system in an initial state | φ >, we can expand it in a linear combination of the eigenstates of our measurement apparatus, the | ψn >. In the case of Dirac's polarized photons, the diagonal state | d > is a linear combination of the horizontal and vertical states of the measurement apparatus, | v > and | h >. When we square the (1/√2) coefficients, we see there is a 50% chance of measuring the photon as either horizontal or vertically polarized. 4.2 Visualizing projection. When a photon is prepared in a vertically polarized state | v >, its interaction with a vertical polarizer is easy to visualize. We can picture the state vector of the whole photon simply passing through the polarizer unchanged. The same is true of a photon prepared in a horizontally polarized state | h > going through a horizontal polarizer. And the interaction of a horizontal photon with a vertical polarizer is easy to understand. The vertical polarizer will absorb the horizontal photon completely. The diagonally polarized photon | d >, however, fully reveals the non-intuitive nature of quantum physics. We can visualize quantum indeterminacy, its statistical nature, and we can dramatically visualize the process of collapse, as a state vector aligned in one direction must rotate instantaneously into another vector direction. As we saw above (Figure 2.1), the vector projection of | d > onto | v >, with length (1/√2), gives us the probability 1/2 for photons to emerge from the vertical polarizer. But this is only a statistical statement about the expected probability for large numbers of identically prepared photons. When we have only one photon at a time, we never get one-half of a photon coming through the polarizer. Critics of standard quantum theory sometimes say that it tells us nothing about individual particles, only ensembles of identical experiments. There is truth in this, but nothing stops us from imagining the strange process of a single diagonally polarized photon interacting with the vertical polarizer. There are two possibilities. We either get a whole photon coming through (which means that it "collapsed" or the diagonal vector was "reduced to" a vertical vector) or we get no photon at all. This is the entire meaning of "collapse." It is the same as an atom "jumping" discontinuously and suddenly from one energy level to another. It is the same as the photon in a two-slit experiment suddenly appearing at one spot on the photographic plate, where an instant earlier it might have appeared anywhere. We can even visualize what happens when no photon appears. We can imagine that the diagonal photon was reduced to a horizontally polarized photon and was completely absorbed. Why can we see the statistical nature and the indeterminacy? First, statistically, in the case of many identical photons, we can say that half will pass through and half will be absorbed. The indeterminacy is that in the case of one photon, we have no ability to know which it will be. This is just as we cannot predict the time when a radioactive nucleus will decay, or the time and direction of an atom emitting a photon. 
This indeterminacy is a consequence of our diagonal photon state vector being "represented" (transformed) into a linear superposition of vertical and horizontal photon state vectors. Thus the principle of superposition together with the projection postulate provides us with indeterminacy, statistics, and a way to "visualize" the collapse of a superposition of quantum states into one of the basis states.
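A short simulation can make the single-photon picture above concrete. The sketch below (my own illustration, not part of the original text) draws whole-photon yes/no outcomes for diagonally polarized photons hitting a vertical polarizer, with the Born probability |< v | d >|² = 1/2 for passing; any one trial is indeterminate, while the ensemble reproduces the 50/50 statistics.

```python
import numpy as np

# Each diagonally polarized photon either passes the vertical polarizer whole
# or is absorbed whole; only the ensemble shows the predicted 1/2 probability.
rng = np.random.default_rng(0)

d = np.array([1.0, 1.0]) / np.sqrt(2)     # |d> written in the (|v>, |h>) basis
p_pass = abs(d[0]) ** 2                   # Born probability of projecting onto |v>

outcomes = rng.random(10_000) < p_pass    # one whole-photon outcome per trial
print("first photon passed:", bool(outcomes[0]))   # unpredictable for a single photon
print("fraction passed:", outcomes.mean())         # close to 0.5 over many photons
```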
Physics 2

Course Description
Solid state materials elasticity. Mechanical oscillation and mechanical waves. Sound waves. Doppler's effect. Electromagnetic waves. Maxwell's equations. Wave equation, wave propagation. Geometrical optics, mirrors, lenses and prisms. Physical optics. Interference, diffraction and polarization. Photometry. Quantum nature of light. Blackbody radiation, quantization. Photoelectric effect and Compton's effect. Atom structure. Atomic spectra. X-rays. Atomic nucleus. Radioactivity. Fission and fusion. Basic forces of nature and elementary particles.

General Competencies
Students completing this course will: understand, appreciate and utilize the basics of optics, wave theory and atomic physics in modern technologies and their devices; understand the fundamental principles of physics to prepare students to continue education in modern science and technology, as well as forming a foundation for life-long learning.

Learning Outcomes
1. Analyze oscillatory systems in mechanics.
2. Apply the linearization technique to equations of motion of oscillatory systems.
3. Explain the wave equation in a nondispersive medium.
4. Derive the electromagnetic wave equation from the Maxwell equations.
5. Analyze optical systems using the methods of geometrical optics.
6. Explain the phenomena of interference, diffraction and polarization of light.
7. Explain Planck's law of black body radiation.
8. Relate the atomic spectrum to quantization of energy levels.

Forms of Teaching
Lectures are delivered to groups of approximately 120 students using electronic presentations, detailed derivations on the blackboard and demonstration experiments. The written mid-term exam and the final exam consist of four exercises and a number of multiple-choice questions.

Laboratory Work
Students perform six laboratory experiments, carry out the analysis of the measured data and write a final report for each experiment. Lectures are supported by demonstration experiments that illustrate the concepts of physics, approximately 30 minutes per week. At least once a week each professor is available to the students for consultations. During the semester homework assignments are delivered to the students through the e-learning system Merlin (Moodle).

Other Forms of Group and Self Study
In approximately 4 terms per semester additional exercise-solving skills are demonstrated by assistants.

Grading Method
Continuous assessment: Laboratory Exercises (threshold 5 %, 10 % of grade), Homeworks (threshold 0 %, 10 % of grade), Mid Term Exam: Written (threshold 0 %, 40 % of grade), Final Exam: Written (threshold 0 %, 40 % of grade).
Exam: Laboratory Exercises (threshold 5 %, 10 % of grade), Homeworks (threshold 0 %, 10 % of grade), Exam: Written (threshold 0 %, 80 % of grade).
In the mid-term exam and in the final exam one (of four) exercises must be correctly completed. In the written exam two (of six) exercises must be correctly completed.

Week by Week Schedule
1. The theory of elasticity - introduction. Oscillations. Tension and compression. Modulus of elasticity. Poisson ratio. Examples and sample problems. Simple harmonic motion: the force law, equation of motion, initial conditions. Experiments with the spring. Phasors. Harmonic motion: energy considerations.
2. Pendula. The simple pendulum. The physical pendulum. Torsion pendulum. Equations of motion. Torsion modulus. Experiments and examples. Damped simple harmonic motion. Equation of motion, solution for weak damping, analogy with the electrical oscillatory circuit. Examples and sample problems.
3. Damped harmonic motion II. Forced oscillations and resonance. Logarithmic damping decrement and Q-factor.
Damped harmonic motion: energy considerations. Forced oscillations and resonance. Experiments. Analogy with electrical oscillatory circuits. Oberbeck's pendulum. Experiments. Modulated oscillations. Lissajous curves. Experiments. Computer exercises.
4. Waves. Progressive waves. Reflection and superposition. Wave speed. Reflection and refraction. The principle of superposition. Waves in a stretched string. Standing waves. Frequency spectrum. Experiments. Fourier analysis of waves in a stretched string. Wave equation. Energy and power in a traveling wave.
5. Longitudinal waves. Sound waves. Intensity and sound level. Longitudinal wave equation. Experiments with the spring. Longitudinal standing waves. Experiments - Kundt's tube. Standing waves in solid state materials. Ultrasound - generation and applications. The Doppler effect for sound.
6. Basics of electromagnetism. Electromagnetic waves. Gauss's law and the First Maxwell Equation (both in differential and integral form). Ampere-Maxwell theorem and the Second Maxwell Equation. The Third Maxwell Equation. Faraday's law of induction. Faraday's experiments. The Fourth Maxwell Equation. Wave equation. Experiments with polarization of EM waves. Poynting's theorem and the Poynting vector. Energy transport. Fresnel equations. Computer exercises.
7. Photometry. Geometrical optics I. Basic photometric units and quantities. Experiments. Basic laws of geometrical optics. Experiments. Fermat's principle - reflection and refraction. Fresnel equations and the laws of geometrical optics. Experimental and demonstration set. Computer exercises. Mirrors. Spherical refracting surfaces. Thin-lens formulas. Aberration. Prisms. Dispersion. Experiments.
8. EXAM
9. Physical optics I. Interference. Light as a wave. Young's experiment. Coherent light. Intensity in double-slit interference. Minima and maxima. Light interference devices. Fresnel biprism experiment. Michelson's interferometer. Newton's rings. Polarization of light. Holography. Brewster's law. Selective absorption.
10. Physical optics II. Diffraction of light. Multiple-source interference. Gratings: dispersion and resolving power. Spectra. Experiments. Single-slit diffraction. Diffraction and the optical grating. Polarization of light. Holography. Brewster's law. Selective absorption. Polarization and absorption devices. Faraday and Tyndall effects.
11. Introduction to modern physics I. Blackbody radiation. Rayleigh-Jeans law of blackbody radiation, Stefan-Boltzmann law. Wien's laws. Planck's law. Quantum hypothesis.
12. Introduction to modern physics II. The photoelectric effect. Compton effect. The Bohr model of the hydrogen atom. Classical explanation of the photoelectric effect. Experiments. Compton scattering. Thomson and Rutherford models of the atom. Computer exercises.
13. The Bohr model of the atom. Quantization. Balmer's formula. Energy and orbit quantization conditions. Absorption and emission of light. Atomic spectra. Franck-Hertz experiment. X-rays. Experiments. Bohr-Sommerfeld model of the atom. The Schrödinger equation. The Heisenberg uncertainty principle.
14. Quantum numbers. The nucleus and nuclei - composition of nuclei. Radioactivity and nuclear reactions. Elementary particles. The Pauli exclusion principle and the structure of many-electron atoms. Experiments: detector types, beta-particle and gamma-radiation detection, radiation protection.
15. EXAM

Study Programmes
University undergraduate:
[FER2-HR] Computing - Elective course from a group of mandatory courses - Physics 2 (3rd semester)
[FER2-HR] Electrical Engineering and Information Technology (3rd semester)

V. Henč-Bartolić, P. Kulišić (2004), Valovi i optika, Školska knjiga, Zagreb
D. Horvat (2011), Fizika II - Titranje, valovi, elektromagnetizam, optika i uvod u modernu fiziku, Neodidakta, Zagreb
V. Henč-Bartolić, M. Baće, P. Kulišić, L. Bistričić, D. Horvat, Z. Narančić, T. Petković, D. Pevec (2002), Riješeni zadaci iz valova i optike, Školska knjiga, Zagreb
D. Halliday, R. Resnick, J. Walker (2003), Fundamentals of Physics, J. Wiley, New York

For students: ID 31487, Winter semester. English level: L1; e-Learning level: L2.
Hours: 75 Lectures, 0 Seminar, 0 Exercises, 15 Laboratory exercises, 0 Project laboratory.
Grading System: 85 Excellent, 75 Very Good, 60 Good, 50 Acceptable.
4.6: The Hydrogen Atom

Factoring Out the Center of Mass Motion

The hydrogen atom consists of two particles, the proton and the electron, interacting via the Coulomb potential \(V(\vec{r_1}-\vec{r_2})=e^2/r\), where as usual \(r=|\vec{r_1}-\vec{r_2}|\). Writing the masses of the two particles as \(m_1, m_2\) Schrödinger’s equation for the atom is: \[ \left( -\frac{\hbar^2}{2m_1}\vec{\nabla_1}^2-\frac{\hbar^2}{2m_2}\vec{\nabla_2}^2-\frac{e^2}{r}\right) \psi(\vec{r_1},\vec{r_2})=E\psi(\vec{r_1},\vec{r_2}). \label{4.6.1}\] But \(\vec{r_1},  \vec{r_2}\) are not the most natural position variables for describing this system: since the potential depends only on the relative position, a better choice is \(\vec{r}, \vec{R}\) defined by: \[ \vec{r}=\vec{r_1}-\vec{r_2},\;\; \vec{R}=\frac{m_1\vec{r_1}+m_2\vec{r_2}}{m_1+m_2} \label{4.6.2}\] so \(\vec{R}\) is the center of mass of the system. It is convenient at the same time to denote the total mass by \(M=m_1+m_2\), and the reduced mass by \(m=\frac{m_1m_2}{m_1+m_2}\). Transforming in straightforward fashion to the variables \(\vec{r},  \vec{R}\) Schrödinger’s equation becomes \[ \left(-\frac{\hbar^2}{2M}\vec{\nabla_R}^2-\frac{\hbar^2}{2m}\vec{\nabla_r}^2-\frac{e^2}{r}\right)\psi(\vec{R}, \vec{r})=E\psi(\vec{R}, \vec{r}). \label{4.6.3}\] Writing the wave function \[ \psi(\vec{R}, \vec{r})=\Psi(\vec{R})\psi(\vec{r}) \label{4.6.4}\] we can split the equation into two: \[ \begin{matrix} \left( -\frac{\hbar^2}{2M}\vec{\nabla_R}^2\right) \Psi(\vec{R})=E_R\Psi(\vec{R}) \\ \left( -\frac{\hbar^2}{2m}\vec{\nabla_r}^2+V(\vec{r})\right) \psi(\vec{r})=E_r\psi(\vec{r}) \end{matrix}\label{4.6.5}\] and the total system energy is \(E=E_R+E_r\). Note that the motion of the center of mass is (of course) just that of a free particle, having a trivial plane wave solution. From now on, we shall only be concerned with the relative motion of the particles. Since the proton is far heavier than the electron, we will almost always ignore the difference between the electron mass and the reduced mass, but it should be noted that the difference is easily detectable spectroscopically: for example, the lines shift if the proton is replaced by a deuteron (heavy hydrogen). We’re ready to write Schrödinger’s equation for the hydrogen atom, dropping the r suffixes in the second equation above, and writing out \(\vec{\nabla}^2\) explicitly in spherical coordinates: \[ \begin{matrix} -\frac{\hbar^2}{2m}\left( \frac{1}{r^2}\frac{\partial}{\partial r}\left( r^2\frac{\partial\psi}{\partial r}\right) +\frac{1}{r^2 \sin\theta}\frac{\partial}{\partial\theta}\left( \sin\theta\frac{\partial\psi}{\partial\theta}\right) +\frac{1}{r^2\sin^2\theta}\frac{\partial^2\psi}{\partial\varphi^2}\right) -\frac{e^2}{r}\psi \\ =E\psi. \end{matrix} \label{4.6.6}\]

Factoring Out the Angular Dependence: the Radial Equation - \(R(r)\)

Since the potential is spherically symmetric, the Hamiltonian \(H\) commutes with the angular momentum operators \(L^2\), \(L_z\) so we can construct a common set of eigenkets of the three operators \(H\), \(L^2\), \(L_z\). The angular dependence of these eigenkets is therefore that of the \(Y^m_l\)’s, so the solutions must be of the form \[ \psi_{Elm}(r,\theta,\phi)=R_{Elm}(r)Y^m_l(\theta,\phi).
\label{4.6.7}\] Now, notice that in the Schrödinger equation above, the angular part of \(\vec{\nabla}^2\) is exactly the differential operator \(L^2/2mr^2\), so operating on \(\psi_{Elm}(r,\theta,\phi)=R_{Elm}(r)Y^m_l(\theta,\phi)\) it will give \(\hbar^2l(l+1)/2mr^2\). The spherical harmonic \(Y^m_l\) can then be cancelled from the two sides of the equation leaving: \[ \begin{matrix} -\dfrac{\hbar^2}{2m}\left( \dfrac{1}{r^2}\dfrac{d}{dr}(r^2\dfrac{d}{dr})-\dfrac{l(l+1)}{r^2}\right) R_{El}(r)-\dfrac{e^2}{r}R_{El}(r) \\ =ER_{El}(r) \end{matrix} \label{4.6.8}\] it now being apparent that \(R(r)\) cannot depend on \(m\). The radial derivatives simplify if one factors out \(1/r\) from the function \(R\), writing \[ R_{El}(r)=\dfrac{u(r)}{r} \label{4.6.9}\] and temporarily suppressing the \(E\) and \(l\) to reduce clutter. The equation becomes: \[ -\dfrac{\hbar^2}{2m}\left( \dfrac{d^2}{dr^2}-\dfrac{l(l+1)}{r^2}\right) u(r)-\dfrac{e^2}{r}u(r)=Eu(r). \label{4.6.10}\] \[ -\dfrac{\hbar^2}{2m}\dfrac{d^2u(r)}{dr^2}+\left( \dfrac{\hbar^2}{2m}\dfrac{l(l+1)}{r^2}-\dfrac{e^2}{r}\right) u(r)=Eu(r). \label{4.6.11A}\] Note that this is the same as the Schrödinger equation for a particle in one dimension, restricted to \(r>0\), in a potential (for \(l\neq 0\) ) going to positive infinity at the origin, then negative and going to zero at large distances, so it always has a minimum for some positive \(r\). We are interested in bound states of the proton-electron system, so \(E\) will be a negative quantity. At large separations, the wave equation simplifies to \[ -\dfrac{\hbar^2}{2m}\dfrac{d^2u(r)}{dr^2}\cong E u(r) (for\; large\; r) \label{4.6.11B}\] having approximate solutions \(e^{\kappa r}\), \(e^{-\kappa r}\) where \(\kappa =\sqrt{-2mE/\hbar^2}\). The bound states we are looking for, of course, have exponentially decreasing wave functions at large distances. Going to a Dimensionless Variable To further simplify the equation, we introduce the dimensionless variable \[ \rho=\kappa r,\;\; \kappa =\sqrt{-2mE/\hbar^2} \label{4.6.12}\] giving \[ \frac{d^2u(\rho)}{d\rho^2}=\left( 1-\frac{2\nu}{\rho}+\frac{l(l+1)}{\rho^2}\right) u(\rho) \label{4.6.13}\] where (for reasons which will become apparent shortly) we have introduced \(\nu\) defined by \[ 2\nu=e^2\kappa /E. \label{4.6.14}\] Notice that in transforming from \(r\) to the dimensionless variable \(\rho\) the scaling factor \(\kappa\) depends on energy, so will be different for different energy bound states! Consider now the behavior of the wave function near the origin. The dominant term for sufficiently small \(\rho\) is the centrifugal one, so \[ \frac{d^2u(\rho)}{d\rho^2}\cong \frac{l(l+1)}{\rho^2}u(\rho) \label{4.6.15}\] for which the solutions are \(u(\rho)\sim \rho^{-l}\), \(u(\rho)\sim \rho^{l+1}\). Since the wave function cannot be singular, we choose the second. We have established that the wave function decays as \(e^{-\kappa r}=e^{-\rho}\) at large distances, and goes as \(\rho^{l+1}\) close to the origin. Factoring out these two asymptotic behaviors, define \(w(\rho)\) by \[ u(\rho)=e^{-\rho}\rho^{l+1}w(\rho). \label{4.6.16}\] It is straightforward (if tedious) to establish that \(w(\rho)\) satisfies the differential equation: \[ \rho\frac{d^2w(\rho)}{d\rho^2}+2(l+1-\rho)\frac{dw(\rho)}{d\rho}+2(\nu-(l+1))w(\rho)=0. \label{4.6.17}\] Putting in a trial series solution \( w(\rho)=\sum_{k=0}^{\infty}w_k\rho^k\) gives a recurrence relation between successive coefficients: \[ \frac{w_{k+1}}{w_k}=\frac{2(k+l+1-\nu)}{(k+1)(k+2(l+1))}. 
\label{4.6.18}\] For large values of \(k\), \(w_{k+1}/w_k\to 2/k\), so \(w_k\sim 2^k/k!\) and therefore \(w(\rho)\sim e^{2\rho}\). This means we have found the diverging radial wavefunction \(u(\rho)\sim e^{\rho}\), which is in fact the correct behavior for general values of the energy. To find the bound states, we must choose energies such that the series is not an infinite one. As long as the series stops somewhere, the exponential decrease will eventually take over, and yield a finite (bound state) wave function. Just as for the simple harmonic oscillator, this can only happen if for some \(k, w_{k+1}=0\). Inspecting the ratio \(w_{k+1}/w_k\), evidently the condition for a bound state is that \[ \nu=n,\;\; an\; integer \label{4.6.19}\] in which case the series for \(w(\rho)\) terminates at \(k=n-l-1\). From now on, since we know that for the functions we’re interested in \(\nu\) is an integer, we replace \(\nu\) by \(n\). To find the energies of these bound states, recall \(2n=2\nu=e^2\kappa /E\) and \(\kappa =\sqrt{-2mE/\hbar^2}\), so \[ 4n^2=\frac{e^4\kappa_n^2}{E_n^2}=-\frac{e^4}{E_n^2}\frac{2mE_n}{\hbar^2}, \label{4.6.20}\] so \[ E_n=-\frac{me^4}{2\hbar^2}\frac{1}{n^2}=-\frac{13.6}{n^2}\ \text{eV} = -\frac{1}{n^2}\ \text{Rydberg}. \label{4.6.21}\] (This defines the Rydberg, a popular unit of energy in atomic physics.) Remarkably, this is the very same series of bound state energies found by Bohr from his model! Of course, this had better be the case, since the series of energies Bohr found correctly accounted for the spectral lines emitted by hot hydrogen atoms. Notice, though, that there are some important differences with the Bohr model: the energy here is determined entirely by \(n\), called the principal quantum number, but, in contrast to Bohr’s model, \(n\) is not the angular momentum. The true ground state of the hydrogen atom, \(n=1\), has zero angular momentum: since \(n=k+l+1\), \(n=1\) means both \(l=0\) and \(k=0\). The ground state wave function is therefore spherically symmetric, and the function \(w(\rho)=w_0\) is just a constant. Hence \(u(\rho)=\rho e^{-\rho}w_0\) and the actual radial wave function is this divided by \(r\), and of course suitably normalized. To write the wave function in terms of \(r\), we need to find \(\kappa\). Putting together \(\rho=\kappa_n r\), \(\kappa_n=\sqrt{-2mE_n/\hbar^2}\) and \(E_n=-\frac{me^4}{2\hbar^2}\frac{1}{n^2}\), \[ \kappa_n=\sqrt{2m\frac{me^4}{2\hbar^2}\frac{1}{n^2}}/\hbar=\frac{me^2}{\hbar^2n}=\frac{1}{a_0n}, \label{4.6.22}\] where the length \[ a_0=\frac{\hbar^2}{me^2}=0.529\times 10^{-10}m. \label{4.6.23}\] is called the Bohr radius: it is in fact the radius of the lowest orbit in Bohr’s model. Exercise: check this last statement. It is worth noting at this point that the energy levels can be written in terms of the Bohr radius \(a_0\): \[ E_n=-\frac{e^2}{2a_0}\frac{1}{n^2}. \label{4.6.24}\] (This is actually obvious: remember that the energies \(E_n\) are identical to those in the Bohr model, in which the radius of the \(n^{th}\) orbit is \(n^2a_0\), so the electrostatic potential energy is \(-e^2/n^2a_0\), etc.) Moving on to the excited states: for \(n=2\), we have a choice: either the radial function \(w(\rho)\) can have one term, as before, but now the angular momentum \(l=1\) (since \(n=k+l+1\) ); or \(w(\rho)\) can have two terms (so \(k=1\) ), and \(l=0\). Both options give the same energy, -0.25 Ry, since n is the same, and the energy only depends on \(n\).
In fact, there are four states at this energy, since \(l=1\) has states with \(m=1, m=0\) and \(m=-1\), and \(l=0\) has the one state \(m=0\). (For the moment, we are not counting the extra factor of 2 from the two possible spin orientations of the electron.) For \(n=3\), there are 9 states altogether: \(l=0\) gives one, \(l=1\) gives 3 and \(l=2\) gives 5 different \(m\) values. In fact, for principal quantum number \(n\) there are \(n^2\) degenerate states. ( \(n^2\) being the sum of the first \(n\) odd integers.) The states can be mapped out, energy vertically, angular momentum horizontally: the energy is \(E=-1/n^2\), the levels are labeled \(nl\), \(n\) being the principal quantum number and the traditional notation for angular momentum \(l\) is given at the bottom of the diagram. The two red vertical arrows are the first two transitions in the spectroscopic Balmer series, four lines of which gave Bohr the clue that led to his model. The corresponding series of transitions to the \(1s\) ground state are in the ultraviolet; they are called the Lyman series.

Wave Functions for some Low-n States

From now on, we label the wave functions with the quantum numbers, \(\psi_{nlm}(r,\theta,\phi)\), so the ground state is the spherically symmetric \(\psi_{100}(r)\). For this state \(R(r)=u(r)/r\), where \(u(\rho)=e^{-\rho}\rho^{l+1}w(\rho)=e^{-\rho}\rho w_0\), with \(w_0\) a constant, and \(\rho=\kappa_1 r=r/a_0\). So, as a function of \(r\), \(\psi_{100}(r)=Ne^{-r/a_0}\) with \(N\) an easily evaluated normalization constant: \[ \psi_{100}(r)=\left( \frac{1}{\pi a^3_0}\right)^{1/2}e^{-r/a_0}. \label{4.6.25}\] For \(n=2, l=1\) the function \(w(\rho)\) is still a single term, a constant, but now \(u(\rho)=e^{-\rho}\rho^{l+1}w(\rho)=e^{-\rho}\rho^2w_0\), and, for \(n=2\),  \(\rho=\kappa r=r/2a_0\), remembering the energy-dependence of \(\kappa\). Therefore \(\psi_{210}(r,\theta,\phi)=N\left( \frac{r}{a_0}\right) e^{-r/2a_0}\cos\theta\). Again, evaluating the normalization constant is routine, yielding \[ \psi_{210}(r,\theta,\phi)=\left( \frac{1}{32\pi a^3_0}\right)^{1/2}\left( \frac{r}{a_0}\right) e^{-r/2a_0}\cos\theta. \label{4.6.26}\] The wave functions for the other \(m\)-values, \(\psi_{21\pm 1}(r,\theta,\phi)\), have the \(\cos\theta\) in \(\psi_{210}\) replaced by \(\mp (1/\sqrt{2})\sin\theta e^{\pm i\phi}\) respectively (from the earlier discussion of the \(Y^m_l\)’s). The other \(n=2\) state has \(l=0\), so from \(n=k+l+1\), we have \(k=1\) and the series for \(w\) has two terms, \(k=0\) and \(k=1\), the ratio being \[ \frac{w_{k+1}}{w_k}=\frac{2(k+l+1-n)}{(k+1)(k+2(l+1))}=-1 \label{4.6.27}\] for the relevant values: \(k=0, l=0, n=2\). So \(w_1=-w_0\), \(w(\rho)=w_0(1-\rho)\). For \(n=2\), \(\rho=r/2a_0\), the normalized wave function is \[ \psi_{200}(r)=\left( \frac{1}{32\pi a^3_0}\right)^{1/2}\left( 2-\frac{r}{a_0}\right) e^{-r/2a_0}. \label{4.6.28}\] Note that the zero angular momentum wave functions are nonzero and have nonzero slope at the origin. This means that the full three dimensional wave functions have a slope discontinuity there! But this is fine—the potential is infinite at the origin. (Actually, the proton is not a point charge, so really the kink will be smoothed out over a volume of the size of the proton—a very tiny effect.)
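As a quick numerical sanity check (my own sketch, not part of the original text), one can integrate the quoted densities over all space, in units where \(a_0=1\), and confirm that both wave functions are indeed normalized:

```python
import numpy as np
from scipy.integrate import quad

# Check that psi_100 and psi_200 above are normalized (units with a_0 = 1):
# integrate |psi|^2 over all space using the 4*pi*r^2 volume element.
psi_100 = lambda r: (1 / np.pi) ** 0.5 * np.exp(-r)
psi_200 = lambda r: (1 / (32 * np.pi)) ** 0.5 * (2 - r) * np.exp(-r / 2)

for name, psi in [("psi_100", psi_100), ("psi_200", psi_200)]:
    norm, _ = quad(lambda r: 4 * np.pi * r**2 * psi(r) ** 2, 0, np.inf)
    print(name, "integrates to", round(norm, 6))   # both give 1.0
```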
General Solution of the Radial Equation

In practice, the first few radial functions \(w(\rho)\) can be constructed fairly easily using the method presented above, but it should be noted that the differential equation for \(w(\rho)\) \[ \rho \frac{d^2w(\rho)}{d\rho^2}+2(l+1-\rho)\frac{dw(\rho)}{d\rho}+2(n-(l+1))w(\rho)=0 \label{4.6.29}\] is in fact Laplace’s equation, usually written \[ \left( z\frac{d^2}{dz^2}+(k-1-z)\frac{d}{dz}+p\right) L^k_p(z)=0 \label{4.6.30}\] where \(k,p\) are integers, and \(L^k_p(z)\) is a Laguerre polynomial (Messiah, page 482). The two equations are the same if \(z = 2\rho\), and the solution to the radial equation is therefore \[ w_{nl}(\rho)=L^{2l+1}_{n-l-1}(2\rho). \label{4.6.31}\] Quoting Messiah, the Laguerre polynomials \(L^0_p(z)\), and the associated Laguerre polynomials \(L^k_p(z)\) are given by: \[ \begin{matrix} L^0_p(z)=e^z\frac{d^p}{dz^p}e^{-z}z^p \\ L^k_p(z)=(-1)^k\frac{d^k}{dz^k}L^0_{p+k}(z). \end{matrix} \label{4.6.32}\] (These representations can be found neatly by solving Laplace’s equation using – surprise – a Laplace transform. See Merzbacher for details.) The polynomials satisfy the orthonormality relations (with the mathematicians’ normalization convention) \[ \int^{\infty}_{0}e^{-z}z^k L^k_p L^k_qdz=\frac{[(p+k)!]^3}{p!}\delta_{pq}. \label{4.6.33}\] But what do they look like? The function \(e^{-z}z^p\) is zero at the origin (apart from the trivial case \(p=0\) ) and zero at infinity, always positive and having nonzero slope except at its maximum value, \(z=p\). The \(p\) derivatives bring in \(p\) separated zeroes, easily checked by sketching the curves generated by successive differentiation. Therefore, \(L^0_p(z)\), a polynomial of degree \(p\), has \(p\) real positive zeroes, and value at the origin \(L^0_p(0)=p!\), since the only nonzero term at \(z=0\) is that generated by all \(p\) differential operators acting on \(z^p\). The associated Laguerre polynomial \(L^k_p(z)\) is generated by differentiating \(L^0_{p+k}(z)\)  \(k\) times. Now \(L^0_{p+k}(z)\) has \(p+k\) real positive zeroes; differentiating it gives a polynomial one degree lower, with zeroes lying one in each interval between the zeroes of \(L^0_{p+k}(z)\). This argument remains valid for successive differentiations, so \(L^k_p(z)\) must have \(p\) real separate zeroes. Putting all this together, and translating back from \(\rho\) to \(r\), the radial solutions are: \[ R_{nl}(r)=Ne^{-r/na_0}(\frac{r}{na_0})^l L^{2l+1}_{n-l-1}(\frac{2r}{na_0}) \label{4.6.34}\] with \(N\) the normalization constant. Griffiths (page 141) gives more details, including the normalization constants worked out. We used those to plot the \(n=3\) states—plotting here the functions \(u(r)=rR(r)\): since the normalization is \(4\pi \int^{\infty}_{0}|u(r)|^2dr=1\), \(u(r)\) gives a better idea of at what distance from the proton the electron is most likely to be found. Here are the three \(n=3\) radial wave functions: the number of nodes, the radial quantum number, is \(3-l-1\). (Note: The relative normalizations are correct here, but not the overall normalization.) For higher \(n\) values, the wave functions become reminiscent of classical mechanics. For example, for \(n=10\), the highest angular momentum state probability distribution peaks at \(r=100a_0\), the Bohr orbit radius; whereas for \(n=10, l=0\), we find that the distribution peaks just below \(r=200a_0\), twice that Bohr orbit radius.
This can be understood from classical mechanics: for an inverse square force law, elliptical orbits with the same semimajor axis have the same energy. The \(l=n-1\) orbit is a circle, the \(l=0\) orbit is a long thin ellipse (one end close to the proton), so it extends almost twice as far from the origin as the circle. Furthermore, the orbiting electron will spend longer at the far distance, since it will be moving very slowly. (Note: the normalizations in the above graphs are only approximate.)
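The recurrence relation for \(w_{k+1}/w_k\) makes it easy to generate these radial functions numerically. The sketch below (my own illustration, not part of the original text, and ignoring overall normalization as the plots do) builds \(u_{nl}(r)=e^{-\rho}\rho^{l+1}w(\rho)\) with \(\rho=r/na_0\) and locates the peaks of \(|u|^2\) for the two \(n=10\) cases just discussed.

```python
import numpy as np

# Build w(rho) from w_{k+1}/w_k = 2(k+l+1-n) / ((k+1)(k+2(l+1))), which terminates
# at k = n-l-1, then form u(rho) = exp(-rho) * rho^(l+1) * w(rho).  Here r is in
# units of a_0, so rho = kappa_n * r = r / n.  Normalization is ignored.
def u_nl(r, n, l):
    rho = r / n
    w = [1.0]
    for k in range(n - l - 1):
        w.append(w[k] * 2 * (k + l + 1 - n) / ((k + 1) * (k + 2 * (l + 1))))
    poly = sum(c * rho**k for k, c in enumerate(w))
    return np.exp(-rho) * rho**(l + 1) * poly

r = np.linspace(0.01, 300, 3000)
for l in (9, 0):                                   # the two n = 10 states discussed above
    density = np.abs(u_nl(r, 10, l)) ** 2
    print(f"n=10, l={l}: |u|^2 peaks near r = {r[np.argmax(density)]:.0f} a_0")
    # l = 9 peaks at the Bohr orbit radius 100 a_0; l = 0 peaks a little below 200 a_0
```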
Chapter 12: The nonlinear Schrödinger equation as both a PDE and a dynamical system
David Cai, David W. McLaughlin, Kenneth T.R. McLaughlin
Research output: Chapter in Book/Report/Conference proceeding (Chapter). 34 Scopus citations.

Nonlinear dispersive wave equations provide excellent examples of infinite dimensional dynamical systems which possess diverse and fascinating phenomena including solitary waves and wave trains, the generation and propagation of oscillations, the formation of singularities, the persistence of homoclinic orbits, the existence of temporally chaotic waves in deterministic systems, dispersive turbulence and the propagation of spatiotemporal chaos. Nonlinear dispersive waves occur throughout physical and natural systems wherever dissipation is weak. Important applications include nonlinear optics and long distance communication devices such as transoceanic optical fibers, waves in the atmosphere and the ocean, and turbulence in plasmas. Examples of nonlinear dispersive partial differential equations include the Korteweg-de Vries equation, nonlinear Klein-Gordon equations, nonlinear Schrödinger equations, and many others. In this survey article, we choose a class of nonlinear Schrödinger equations (NLS) as prototypal examples, and we use members of this class to illustrate the qualitative phenomena described above. Our viewpoint is one of partial differential equations on the one hand, and infinite dimensional dynamical systems on the other. In particular, we will emphasize global qualitative information about the solutions of these nonlinear partial differential equations which can be obtained with the methods and geometric perspectives of dynamical systems theory.

The article begins with a brief description of a spectacular success in PDE of this dynamical systems viewpoint - the complete understanding of the remarkable properties of the soliton through the realization that certain nonlinear wave equations are completely integrable Hamiltonian systems. This complete integrability follows from a deep connection between certain special nonlinear wave equations (such as the NLS equation with cubic non-linearity in one spatial dimension) and the linear spectral theory of certain differential operators (the "Zakharov-Shabat" or "Dirac" operator in the NLS case). From this connection the "inverse spectral transform" has been developed and used to represent integrable nonlinear waves. These representations have provided a full solution of the Cauchy initial value problem for several types of boundary conditions, a thorough understanding of the remarkable properties of the soliton, descriptions of quasi-periodic wave trains, and descriptions of the formation and propagation of oscillations as slowly varying nonlinear wave-trains. In addition, more recent developments are described, including: (i) the formation of singularities and their relationship to dispersive turbulence; (ii) weak turbulence theory; (iii) the persistence of periodic, quasi-periodic, and homoclinic solutions, by methods including normal forms for PDEs, Melnikov measurements, and geometric singular perturbation theory; (iv) temporal and spatiotemporal chaos; (v) long-time and small dispersion behavior of integrable waves through Riemann-Hilbert spectral methods. For each topic, the description is necessarily brief; however, references will be selected which should enable the interested reader to obtain more mathematical detail.
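To make the prototype concrete, here is a minimal numerical sketch (my own illustration, not from the chapter) of the focusing cubic NLS in one spatial dimension, written as i u_t + u_xx + 2|u|^2 u = 0 and integrated by first-order operator splitting (split-step Fourier). In this normalization u(x,t) = sech(x) e^{it} is a one-soliton solution, so the pulse should hold its shape; the grid, step size, and duration are arbitrary illustrative choices.

```python
import numpy as np

# Split-step Fourier integration of i u_t + u_xx + 2|u|^2 u = 0 with periodic boundaries.
L, N = 40.0, 512
x = np.linspace(-L, L, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=2 * L / N)     # spectral wavenumbers
dt, steps = 1e-3, 2000

u = 1.0 / np.cosh(x)                               # exact soliton profile at t = 0

for _ in range(steps):
    u = u * np.exp(2j * np.abs(u) ** 2 * dt)                     # nonlinear substep
    u = np.fft.ifft(np.exp(-1j * k**2 * dt) * np.fft.fft(u))     # linear (dispersive) substep

# |u| should still match sech(x) after t = steps * dt, since the soliton keeps its shape.
print("max deviation of |u| from sech(x):", np.max(np.abs(np.abs(u) - 1 / np.cosh(x))))
```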
Original language: English (US)
Title of host publication: Handbook of Dynamical Systems
Number of pages: 77
ISBN (Print): 9780444501684
State: Published - 2002
Externally published: Yes
Publication series: Handbook of Dynamical Systems, ISSN (Print) 1874-575X
ASJC Scopus subject areas: Analysis; Mathematical Physics; Geometry and Topology; Applied Mathematics
The Energy That Holds Things Together
Matt Strassler [April 27, 2012]

In my article on energy and mass and related issues, I focused attention on particles — which are ripples in fields — and the equation that Einstein used to relate their energy, momentum and mass. But energy arises in other places, not just through particles. To really understand the universe and how it works, you have to understand that energy can arise in the interaction among different fields, or even in the interaction of a field with itself. All the structure of our world — protons, atoms, molecules, bodies, mountains, planets, stars, galaxies — arises out of this type of energy. In fact, many types of energy that we talk about as though they are really different — chemical energy, nuclear energy, electromagnetic energy — either are a form of or involve in some way this more general concept of interaction energy. In beginning physics classes this type of energy includes what is called “potential energy”. But both because “potential” has a different meaning in English than it does in physics, and because the way the concept is explained in freshman physics classes is so different from the modern viewpoint, I prefer to use a different name here, to pull the notion away from any pre-conceptions or mis-conceptions that you might already hold. Also, in a previous version of my mass and energy article I called “interaction energy” by a different name, “relationship energy”. You’ll see why below; but I’ve decided this is a bad idea and have switched over to the new name.

Preamble: Review of Concepts

In the current viewpoint favored by physicists and validated (i.e. shown to be not false, but not necessarily unique) in many experiments, the world is made from fields. The most intuitive example of a field is the wind:
• you can measure it everywhere,
• it can be zero or non-zero, and
• it can have waves (which we call sound.)
Most fields can have waves in them, and those waves have the property, because of quantum mechanics, that they cannot be of arbitrarily small height.
• The wave of smallest possible height — of smallest amplitude, and of smallest intensity — is what we call a “quantum”, or more commonly, but in a way that invites confusion, a “particle.”
A photon is a quantum, or particle, of light (and here the term `light’ includes both visible light and other forms); it is the dimmest flash of light, the least intense wave in the electric and magnetic fields that you can create without having no flash at all. You can have two photons, or three, or sixty-two; you cannot have a third of a photon, or two and a half. Your eye is structured to account for this; it absorbs light one photon at a time. The same is true of electrons, muons, quarks, W particles, the Higgs particle and all the others. They are all quanta of their respective fields. A quantum, though a ripple in a field, is like a particle in that
• it retains its integrity as it moves through empty space
• it has a definite (though observer-dependent) energy and momentum
• it has a definite (and observer-independent) mass
• it can only be emitted or absorbed as a unit.

Fig. 1: A sketch of how the presence of a quantum of one field (blue ripple) creates a response in a second field (in green) which is largest near the ripple and fades off at larger distances.
[Recall how I define mass according to the convention used by particle physicists; E = mc² only for a particle at rest, while a particle that is moving has E > mc², with mass-energy mc² and motion-energy which is always positive. My particle physics colleagues and I do not subscribe to the point of view that it is useful to view mass as increasing with velocity; we view this definition of mass as archaic. We define mass as velocity-independent — what people used to call “rest mass”, we just call “mass”. I’ll explain why elsewhere, but it is very important to keep this convention in mind while reading the present article.]

The Energy of Interacting Fields

Now, with that preamble, I want to turn to the most subtle form of energy. A particle has energy through its mass and through its motion. And remember that a particle is a ripple in a field — a well-defined wave. Fields can do many other things, not just ripple. For example, a ripple in one field can cause a non-ripple disturbance in another field with which it interacts. I have sketched this in Figure 1, where in blue you see a particle (i.e. a quantum) in one field, and in green you see the response of a second field. Suppose now there are two particles — for clarity only, let’s make them ripples in two different fields, so I’ve shown one in blue and one in orange in Figure 2 — and both of those fields interact with the field shown in green. Then the disturbance in the green field can be somewhat more complicated. Again, this is a sketch, not a precise rendition of what is a bit too complicated to show clearly in a picture, but it gives the right idea. Ok, so what is the energy of this system of two particles — two ripples, one in each of two different fields — and a third field with which both interact? The ripples are quanta, or particles; they each have mass and motion energy, both of which are positive.

Fig. 2: Compare to Figure 1; with the addition of another quantum (orange ripple) in a third field that also interacts with the second field, the response of the second field becomes more complex.

The green field’s disturbance has some energy too; it’s also positive, though often quite small compared to the energy of the particles in a case like this. That’s often called field energy. But there is additional energy in the relationship between the various fields; where the blue and green fields are both large, there is energy, and where the green and orange fields are both large, there is also energy. And here’s the strange part. If you compare Figures 1 and 2, both of them have energy in the region where the blue and green fields are large. But the presence of the ripple in the orange field in the vicinity alters the green field, and therefore changes the energy in the region where the blue field’s ripple is sitting, as indicated in Figure 3.

Fig. 3: The presence of the second quantum alters the green field in the vicinity of the blue quantum; the energy stored in that general region (indicated by the blue sphere) changes between Figure 1 and Figure 2. This change in the energy — the interaction energy — may be either positive or negative.

Depending upon the details of how the orange and green fields interact with each other, and how the blue and green fields interact with each other, the change in the energy may be either positive or negative. This change is what I’m going to call interaction energy.
The possibility of negative shifts in the energy of the blue and green field’s interaction, due to the presence of the orange ripple (and vice versa) — the possibility that interaction energy can be negative — is the single most important fact that allows for all of the structure in the universe, from atomic nuclei to human bodies to galaxies. And that’s what comes next in this story. The Earth and the Moon The Earth is obviously not a particle; it is a huge collection of particles, ripples in many different fields. But what I’ve just said applies to multiple ripples, not just one, and they all interact with gravitational fields.  So the argument, in the end, is identical. Imagine the Earth on its own. Its presence creates a disturbance in the gravitational field (which in Einstein’s viewpoint is a distortion of the local space and time, but that detail isn’t crucial to what I’m telling you here.) Now put the Moon nearby. The gravitational field is also disturbed by the Moon. And the gravitational field near the Earth changes as a result of the presence of the Moon. The detailed way that gravity interacts with the particles and fields that make up the Earth  assures that the effect of the Moon is to produce a negative interaction energy between the gravitational field and the Earth.  The reverse is also true. And this is why the Moon and Earth cannot fly apart, and instead remain trapped, bound together as surely as if they were attached with a giant cord. Because if the Moon were very, very far from the Earth, the interaction energy of the system — of the Earth, the Moon, and the gravitational field — would be zero, instead of negative. But energy is conserved. So to move the Moon far from the Earth compared to where it is right now, positive energy — a whole lot of it — would have to come from somewhere, to allow for the negative interaction energy to become zero. The Moon and Earth have positive motion-energy as they orbit each other, but not enough for them to escape each other. Fig. 4: In precise analogy to Figure 3, the system of the Earth, Moon and gravitational field has a lower energy (because of a negative interaction energy that is more important than the positive motion-energy of the Moon and Earth) than would be the case if the Earth and Moon were very far apart; and for this reason to move the Moon far away from the Earth would require an input of a large amount of additional positive energy. Images from NASA. Short of flinging another planet into the moon, there’s no viable way to get that kind of energy, accidentally or on purpose, from anywhere in the vicinity; all of humanity’s weapons together wouldn’t even come remotely close. So the Moon cannot spontaneously move away from the Earth; it is stuck here, in orbit, unless and until some spectacular calamity jars it loose. You may know that the most popular theory of how the Earth and Moon formed is through the collision of two planet-sized objects, a larger pre-Earth and a Mars-sized object; this theory explains a lot of otherwise confusing puzzles about the Moon. Certainly there were very high-energy planet-scale collisions in the early solar system as the sun and planets formed over four billion years ago! But such collisions haven’t happened for a long, long, long time. The same logic explains why artificial satellites remain bound to the Earth, why the Earth remains bound to the Sun, and why the Sun remains bound to the Milky Way Galaxy, the city of a trillion stars which we inhabit. 
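To attach rough numbers to this (a back-of-the-envelope sketch, not part of the original article), one can use the Newtonian expression U = −G·M·m/r for the gravitational interaction energy — an excellent approximation for the Earth–Moon system — together with commonly quoted round values for the masses, the Earth–Moon distance, and the Moon's orbital speed.

```python
# Rough energy bookkeeping for the Earth-Moon system (Newtonian approximation).
# Numbers are round, commonly quoted values.
G       = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_earth = 5.97e24        # kg
M_moon  = 7.35e22        # kg
r       = 3.84e8         # average Earth-Moon distance, m
v_moon  = 1.02e3         # Moon's average orbital speed, m/s

U_interaction = -G * M_earth * M_moon / r     # negative interaction energy
K_motion      = 0.5 * M_moon * v_moon**2      # Moon's motion-energy (Earth's share is much smaller)

print(f"interaction energy : {U_interaction:.2e} J")    # about -7.6e28 J
print(f"motion energy      : {K_motion:+.2e} J")        # about +3.8e28 J
print(f"total              : {U_interaction + K_motion:.2e} J  (negative => bound)")
```

The interaction energy comes out near −7.6 × 10^28 joules and the Moon's motion-energy near +3.8 × 10^28 joules, so the total is roughly −4 × 10^28 joules. That enormous negative number is the quantitative content of "the Moon is stuck here": something would have to supply that much positive energy to pry the system apart.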
The Hydrogen Atom

And on a much smaller scale, and with more subtle consequences, the electron and proton that make up a hydrogen atom remain bound to each other, unless energy is put in from outside to change it. This time the field that does the main part of the job is the electric field. In the presence of the electron, the interaction energy between the electric field and the proton (and vice versa) is negative. The result is that once you form a hydrogen atom from an electron and a proton (and you wait for a tiny fraction of a second until they settle down to their preferred configuration, known as the "ground state") the amount of energy that you would need to put in to separate them is about 14 electron-volts. (What's an electron-volt? It's a quantity of energy, very, very small by human standards, but useful in atomic physics.) We call this the "binding energy" of hydrogen.

Fig. 5: Inside a hydrogen atom, the electron ripple spreads out in something like a cloud around the proton; the interaction energy involving the proton, the electron and the electric field is minus 28 electron-volts, which is partly canceled (mainly by the motion-energy of the electron) to give a binding energy of minus 14 electron-volts.

We can measure that the binding energy is -14 electron-volts by shining ultraviolet light (photons with energy a bit too large to be detected by your eyes) onto hydrogen atoms, and seeing how energetic the photons need to be in order to break hydrogen apart. We can also calculate it using the equations of quantum mechanics — and the success of this prediction is one of the easiest tests of the modern theory of quantum physics.

But now I want to bring you back to something I said in my mass and energy article, one of Einstein's key insights that he obtained from working out the consequences of his equations. If you have a system of objects, the mass of the system is not the sum of the masses of the objects that it contains. It is not even proportional to the sum of the energies of the particles that it contains. It is the total energy of the system divided by c², as viewed by an observer who is stationary relative to the system. (For an observer relative to whom the system is moving, the system will have additional motion-energy, which does not contribute to the system's mass.) And that total energy involves

• the mass energies of the particles (ripples in the fields), plus
• the motion-energies of the particles, plus
• other sources of field-energy from non-ripple disturbances, plus
• the interaction energies among the fields.

What do we learn from the fact that the energy required to break apart hydrogen is 14 electron volts? Well, once you've broken the hydrogen atom apart you're basically left with a proton and an electron that are far apart and not moving much. At that point, the energy of the system is

• the mass energies of the particles = electron mass-energy + proton mass-energy = 510,999 electron-volts + 938,272,013 electron-volts
• the motion-energies of the particles = 0
• other sources of field-energy from non-ripple disturbances = 0
• the interaction energies among the fields = 0

Meanwhile, we know that before we broke it up, the system of a hydrogen atom had energy that was 14 electron volts less than this.
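As a quick numerical cross-check of the two claims just made — that ultraviolet photons can just barely break hydrogen apart, and that the bound atom's energy sits about 14 electron-volts below that of a separated electron and proton — here is a small sketch. It uses the more precise binding energy of 13.6 electron-volts and standard constants; the mass bookkeeping at the end previews the next few paragraphs.

```python
# Check 1: what wavelength of light carries just enough energy (13.6 eV)
# to break a ground-state hydrogen atom apart?
h  = 6.626e-34          # Planck's constant, J*s
c  = 2.998e8            # speed of light, m/s
eV = 1.602e-19          # joules per electron-volt

E_binding_eV = 13.6                          # "about 14" electron-volts
wavelength = h * c / (E_binding_eV * eV)
print(f"ionizing photon wavelength ~ {wavelength*1e9:.0f} nm")   # ~91 nm: ultraviolet

# Check 2: the mass-energy of hydrogen is *less* than electron + proton.
m_e_c2 = 510_999            # eV
m_p_c2 = 938_272_013        # eV
m_H_c2 = m_e_c2 + m_p_c2 - E_binding_eV
print(f"m_H c^2 = {m_H_c2:,.1f} eV  <  {m_e_c2 + m_p_c2:,} eV = (m_e + m_p) c^2")
```

A 91-nanometer photon is indeed well into the ultraviolet, and the atom's mass-energy falls short of the sum of its constituents' mass-energies by exactly the binding energy.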
Now the mass-energy of an electron is always 510,999 electron-volts and the mass-energy of a proton is always 938,272,013 electron-volts, no matter what they are doing, so the mass-energy contribution to the total energy is the same for hydrogen as it is for a widely separated electron and proton. What must be the case is that

• the motion-energies of the particles inside hydrogen
• PLUS other sources of field-energy from non-ripple disturbances (really, really small here)
• PLUS the interaction energies among the fields
• MUST EQUAL the binding energy of -14 electron volts.

In fact, if you do the calculation, the way the numbers work out is (approximately)

• the motion-energies of the particles = +14 electron volts
• other sources of field-energy from non-ripple disturbances = really, really small
• the interaction energies among the fields = -28 electron volts.

and the sum of these things is -14 electron volts. It's not an accident that the interaction energy is -2 times the motion energy; roughly, that comes from having a 1/r² force law for electrical forces. Experts: it follows from the virial theorem.

What is the mass of a hydrogen atom, then? It is

• the electron mass + the proton mass + (binding energy/c²)

and since the binding energy is negative, thanks to the big negative interaction energy,

• m_hydrogen < m_proton + m_electron

This is among the most important facts in the universe!

Why the hydrogen atom does not decay

I'm now going to say these same words back to you in a slightly different language, the language of a particle physicist. Hydrogen is a stable composite object made from a proton and an electron, bound together by interacting with the electric field. Why is it stable? Any object that is not stable will decay; and a decay is only possible if the sum of the masses of the particles to which the initial object decays is less than the mass of the original object. This follows from the conservation of energy and momentum; for an explanation, click here.

The minimal things to which a hydrogen atom could decay are a proton and an electron. But the mass of the hydrogen atom is smaller (because of that minus 14 electron volts of binding energy) than the mass of the electron plus the mass of the proton. Let me restate that really important equation:

• m_hydrogen < m_proton + m_electron

There is nothing else in particle physics to which hydrogen can decay, so we're done: hydrogen cannot decay at all. [This is true until and unless the proton itself decays, which may in fact be possible but spectacularly rare — so rare that we've never actually detected it happening. We already know it is so rare that not a single proton in your body will decay during your lifetime. So let's set that possibility aside as irrelevant for the moment.]

The same argument applies to all atoms. Atoms are stable because the interaction energy between electrons and an atomic nucleus is negative; the mass of the atom is consequently less than the sum of the masses of its constituents, so the atom cannot simply fall apart into the electrons and nucleus that make it up. The one caveat: the atom can fall apart in another way if its nucleus can itself decay. And while a proton cannot decay (or perhaps does so, but extraordinarily rarely), this is not true of most nuclei. And this brings us to the big questions.

• Why is the neutron, which is unstable when on its own, stable inside some atomic nuclei?
• Why are some atomic nuclei stable while others are not?
• Why is the proton stable when it is heavier than the quarks that it contains?

To be continued…

104 responses to "The Energy That Holds Things Together"

1. With you so far and pleased to see the wind-field analogy here. I was the giant black rabbit in the front row during your talk in Terra Nova on Saturday but I had to give up at the point you started talking about wind as a field because the sound feed kept breaking up. I am now on the edge of my seat to discover why the neutron is not stable on its own…

2. … or rather why it IS stable in the nucleus (your linked article on conservation of energy and momentum explains the on-its-own neutron instability)….

3. This is a great article from great MATT, but please do not forget to tell us how non-material abstract statistics control / direct / confine the exact half-life time of all decaying nuclei, so that the lump never deviates from that strict value. How does the nucleus know that it cannot decay or must decay, or else the half-life time would change!! This dilemma was mentioned in Arthur Koestler's book (The Roots of Coincidence) but no answer was given. GOD bless you matt.

4. As the minus sign must be related to a fixed agreed-upon datum, what is the datum with which, comparing binding energy, we find that it is less / opposite / the other way around / ……etc. from that datum?

5. Dear Professor, I'm visualizing these things pretty clearly, thank you. I'm ready for the next lesson!

6. Just how heretical is the notion of omnipresent fields? Is space time usefully thought about as a field? If so, is it the only field with broken Lorentz symmetry?

• Not heretical in the slightest. It's standard fare; every university in the country with a particle physics program has a Quantum Field Theory course. Space-time isn't quite the field itself — this is a tricky point. That's why I glossed over it. The fields in gravity are a bit more complicated than that. But I don't think I want to try to answer this clearly now. In flat and unchanging space, gravity doesn't break Lorentz symmetry at all. Of course, the universe is expanding, and that defines a preferred sense of time (in the part of the universe that we can see, at least). And that does mean that the gravitational fields in our universe do, on the largest distance scales, break Lorentz symmetry, yes. And no other fields break Lorentz symmetries on the large scale, no. But the answer to your last question depends upon exactly what you meant. Globally across the universe, nothing but gravity breaks Lorentz symmetry (at least no one has ever detected any other source of Lorentz breaking.) In small regions there's all sorts of breaking by all sorts of things. For instance, there are stray magnetic fields in all sorts of places around the universe, and so locally those break Lorentz symmetry. And hey, even the earth breaks Lorentz symmetry (that's why up is different from down, for instance, when you're near the earth.)

7. Is attraction by definition a negative energy? What is its physical meaning? For example, in hydraulics we speak of a negative potential energy of dammed water if we choose the datum line above the water surface, so the water head is BELOW it.

• attraction results from the fact that the negative interaction energy becomes more and more negative as you bring the earth and moon closer and closer together.

• Q: Why is the moon moving away from the earth when it should be getting closer and closer to it if gravity holds the two bodies?
I know its said that in early Earth’s history, moon was much closer to Earth. Shouldn’t larger body win the tag of war? Sorry for being sightly off the topic but I’m still sorting out the data presented. Huh ? I just thought this results from looking at Einstein`s classical field equations without taking the detailed interactions between quantum fields into account … ? • didn’t you ever wonder where Einstein’s classical field equations come from, given that we live in a quantum world and the earth and moon are made from things that are described by quantum mechanics? But in any case, that’s not very relevant — because everything I said is also true of the classical field equations; I just used a quantum language to describe it, but the math is essentially identical. 9. Thank you so much for finally getting around to this topic 10. Vladimir Kalitvianski “interaction among different fields, or even in the interaction of a field with itself” I wrote a paper to show our errors in guessing the interaction energy (interaction Lagrangian), see here 11. Torbjörn Larsson, OM Glad to se the virial theorem at work on smaller scales than clusters. For us non-experts, the 1/r potential case is done in Wikipedia (for gravity). The meaning of validation is seldom stated clearly, this was a refreshing brief! “Your eye is structured to account for this; it absorbs light one photon at a time.” Allow me a more detailed model – when I was doing my PhD work we were preparing a book on sensors (which was never to be) and I made the biological photon sensor chapter. Mind that this is several decades old biology and from the top of my head: – It is only at low light intensities (dark adapted) that the eye is counting photons. Not with much quantum efficiency mind, but that is because our eye evolved to be lousy. (For example, cats doubles dark adaption efficiency by having layered index tissue mirrors beneath the neural layers – cat’s eyes.) – At moderate intensities rods and cones absorb several photons faster than they can transmit nerve impulses. Our eyes have evolved to regulate response by many mechanisms, from non-linear response in the photochemical receptors instantaneously and over time (bleaching, part of light vs dark adaption) over regulation of cellular cascades resulting in neural signal and of pigment refresh response to bleaching (both part of light vs dark adaption) to neural tissue feedbacks and feed-forwards (probably also part of light vs dark adaptation). The idea is correct, for some reason or other pigments absorbing one photon at a time have been utilized in biology many times over. But it is not always (in fact, seldom) the function of the eye. 12. Forgive me for being so dumb , but am i right in saying that the minus sign is ONLY in equations not reality and what you mean is that part of motion energy is transformed to binding energy so that part IS TAKEN from the motion energy so here we see the ( -) sign OF PART TAKEN FROM WHOLE with no negative energy in reality?????? • You are right that this is an accounting issue. What I really mean is that the energy of the system has decreased, and it is useful to account for that decrease in a particular way, through what I call interaction energy. In the end the total system has total energy which is positive. 13. Negative energy, eh? I guess this is where the notion of the Theory of Nothingness came from? We are not even close are we? 🙂 Is gravity the fundamental field of the vacuum? 
Is the interaction (potential) energy the energy required to start (“release”) the ripple from vacuum. I can visualize the mass-energy of the ripple as being the defining level for that specific particle and the potential energy as the energy requires to keep pushing it in the positive direction (spacetime). Are you confusing the vacuum reference potential with a zero datum? Would quantum tunneling disprove negative energy? • The notion that the universe is a zero-energy object with positive energy in particles and fields and negative energy in overall interaction energy between gravity and fields is not a new one. Gravity is not the fundamental field of the vacuum; indeed your question doesn’t mean anything, because the vacuum is empty space, but all fields of a universe are to be found in empty space. “Is the interaction (potential) energy the energy required to start (“release”) the ripple from vacuum.” No. Particles are created through interactions of particles with each other and with fields. But the reason there are particles to start with is that many of them were created in the Big Bang (we don’t know precisely how, because there are many possibilities) and once they start banging into each other and the universe becomes very hot, you can make many more of them. The ones that were left over could coalesce into galaxies, form dark matter halos, stream out as cosmic microwave background, etc. In collisions among these particles, new particles can be created; e.g., fusion inside a star. I don’t know what you mean by a “vacuum reference potential” and a “zero datum”, so I assume I am not confusing them. Quantum tunneling is a fact ( ) and so is negative energy (atoms, nuclei, stars, planets, satellites,…) so there’s nothing to your last question, at least, not as you stated it. 14. I will try an tie my line of questioning to make my point. I don’t believe there is “empty” space. I believe exothermic reaction of the Big Bang created space. Expansion of this energy over the same space it created began “clumping” into variable densities and hence the first field. I call it a field because there would have been a very symmetrical pattern over the “space” as the space expanded. As the expansion continued thermal gradients would begin to become more pronounced to a point where rotations would begin. This rotations, energy flows turning backwards at some radius to define the speed of light, i.e. it is this radius which is the first constraint of this universe. As mentioned in an other post, confinement would take the shape of a sphere and the smallest spheres of the “space” is what I call the vacuum potential. Hence, the interaction of adjacent sphere would then create the second field, gravity. The mechanism as I posted before would be a Newton’s cradle in reverse. The repeating collapsing and generation of the spheres would create an attraction force while the thermal cavities between the spheres are the ‘ripples” we perceive and formulate with the Lagrangian equations. I know that there are more and more physicists are giving up on a simple unified theory, (zero energy?, hologram?, strings? … amazing how fast one can lose himself/herself in the math, lol), but I am willing to bet on Einstein’s initial instincts and look for a nice simple solution. I know you jump on the use of the “exothermic reaction” and yes I am inferring that this bubble we are living in is one of maybe infinite bubbles that percolate up (down?) and coalesce into the magnificent universe we see. 
In this context I will ask again for your intuitive opinion, since your knowledge base is so advanced: what is E, the left side of the equation? In one of your responses to my post you described it as temperature, and that we don't know where and/or why there was such a high temperature at the Big Bang. Could you answer it by assuming it seeped in through a ruptured space-time of another manifold? Could this E be an entity of one temperature? I refuse to use string, particle, and I don't know if space at one temperature (temperature quanta?) makes any sense either.

• I'm afraid I have no idea what you're trying to suggest.

• How can Nature create a microscopic spherical tornado spinning at the speed of light (v = c), from nothing?

E = (h / 2π) × (c / r)
E = ħ × (L / T) / L
E ~ 1 / T (?!)

In 1932 Dirac offered a precise definition and derivation of the time-energy uncertainty relation in a relativistic quantum theory of "events". But a better-known, more widely used formulation of the time-energy uncertainty principle was given in 1945 by L. I. Mandelshtam and I. E. Tamm, as follows. For a quantum system in a non-stationary state ψ and an observable B represented by a self-adjoint operator, the following formula holds:

σ_E × ( σ_B / | d⟨B⟩/dt | ) >= ħ

Here σ_B / | d⟨B⟩/dt | is a lifetime of the state ψ with respect to the observable B; in other words, this is the time after which the expectation value ⟨B⟩ changes appreciably.

Observation: This simple but elegant relationship tells me that there is a unified theory as simple and elegant as Mandelshtam's and Tamm's interpretation of the time-energy uncertainty. One interpretation is that at the initial state the energy was almost infinite ( E ~ 1 / T ); however, another interpretation is that time (existence) started with a spark, an almost infinite infusion of energy ( T ~ 1 / E ). So my question to today's theoretical physicists: is the time variable in the uncertainty relationship and the time variable in the Schrödinger equation the same?

15. I am caught a bit off-guard by the small inconsistency with regard to mass. For a single particle, you view "mass" as not including motion-energy, but for a system of particles, you *do* consider motion-energy to be part of the mass. I know it's just a definition, but it still causes me to stumble a bit.

• Ah!!! You are right to be caught off guard — I left out an important statement in the text. The motion energy of the particles INSIDE the system is part of the system's mass, but the energy from the SYSTEM'S OVERALL MOTION is not to be included. Thank you for pointing out this error of omission — I will fix it immediately.

• I expected that was what was meant. But I really should have asked two separate questions. When viewing a single particle in motion, you say that it has "mass" plus "motion-energy" which is not considered to be part of the particle's mass. But when viewing a system of particles (to be clear, let's consider ourselves at rest with respect to the center of mass of the system), you say that the internal motion-energy of the particles now *does* count as mass.

• If you try to move the system as a whole, yes, you will discover that the system satisfies the equation E^2 – p^2 c^2 = (M c^2)^2, where E is the total energy of the moving system, p is its momentum, and M, the mass of the system, includes internal motions of the particles that it contains.
This is the same equation that a particle of mass M would satisfy. This is verified in great detail in a few cases… and it is essential in explaining why neutrons are stable inside of nuclei. It also follows from the modern (post-1920s? I’m not sure…) theoretical understanding of what special relativity means and how it works mathematically. • I don’t wish to appear to be flogging a deceased equine; I quite agree with what you are saying, and I hope you understand that I am using “you” as a group-inclusive of physicists in general and not in the singular sense. I’m not disagreeing; I just find the applied definitions a bit curious. It is interesting that, when viewing an individual moving particle, you say that it has mass-energy and motion-energy, the latter not counting as mass, by definition. But then if you zoom out the microscope and observe that the particle in question is part of a system of particles (with respect to which we are at rest), then the motion-energy of that same particle *does* count as mass of the system, because it contributes to the inertia of the system. But, alternatively, the motion-energy of the individual particle likewise contributes to the inertia of the particle itself, hence the original distinction of “rest mass” vs. “relativistic mass”. I prefer the new point of view, actually — I just need to get used to thinking that way. • There’s a whole story of mathematics that lies behind the preferences that particle physicists take here. E^2 – p^2 c^2 = (m c^2)^2 is a relationship between two things that are observer-dependent and one thing that isn’t. It’s like the equation for a circle: x^2 + y^2 = r^2 , where r is the radius of the circle and x and y are coordinates; x and y depend on how you draw your coordinate axes, but the radius of the circle doesn’t care. So we define mass for a single particle to be an observer-independent quantity. Next, the goal is to define that quantity which is observer-independent for a *system*. And as Einstein showed in one of his early papers on relativity, there was only one consistent answer — the one I gave you. If this weren’t true, then for a particle (such as a proton) that later, after further experiments, turns out to be a system of many particles (such as the system of quarks and antiquarks and gluons that make up a proton) there’d be an inconsistency in what you’d mean by its mass if you treated it as a particle versus what you’d mean if you viewed it as a composite system. That wouldn’t make any sense. 16. Nice! To be read & enjoyed more than once. I would be interested in the inversion of your last question: not “Why is the proton stable when it is heavier than the quarks that it contains?” but “Why is the proton heavier than the quarks it contains?” given the naive idea that interaction energy as described seems to be typically negative. (I think I know the rough shape of the answer but you would carve it elegantly.) 17. What is the causal mechanism responsible to convert part of protons+neutrons masses to binding energy ? Is it a rule /principle of Q.M. that must be ” obeyed” for which no more explanation exists ? Is it a given property of EMF and gluon fields interaction ? Does equations describe it or explain it ? • I wouldn’t say you’re converting the proton and neutron masses to binding energy; notice I did not say the atom’s binding energy comes from the electron and proton masses. I said it comes from interaction energy involving the electron, proton and the electric field. 
For a nucleus, it is actually a complicated process, but it does arise from the interaction energy involving quarks and gluons in a not entirely dissimilar way. Because the effect is complicated, our equations are less reliable than for atoms, and it is harder to predict the interaction energy for all nuclei. Nuclear physicists are pretty good at it — but it isn’t simple. 18. So is it correct to say that the interactions system among the quarks ripples and fields plus the EMF plus the gluon field result in pumping mass from the system converting it to binding energy ? • There’s no pumping going on. You’re looking for a deeper explanation of the deep thing itself; the deep thing is that the interaction itself changes the energy of the system. Period. It’s not taking energy from something else. 19. m all protons + m all neutrons – binding energy –in the nucleus — exactly = extracting the last factor from the first and second ones……am i correct ? binding energy must come from somewhere , interactions are energy users not energy generators……….or else i am totally confused. • Yes, you are confused on this point — interactions are not things that require energy to occur — they do not use energy, the way an engine does. Nor do they produce energy the way a power plant does. The interactions themselves simply occur. Energy (possibly positive, and possibly negative) is present as a result of interactions taking place — but the interaction is not mechanically producing it, or using it. • People often ask where the energy of the big bang came from. i get the feeling from what ur saying that energy isnt anything fundamental.. its not a thing by itself…just a conserved quantity and that this question isnt meaningful. I take it is meaningful to ask where the laws and fields came from however…a i making sense? • Hmm. I’m far from sure we know what the question is yet. Often in physics (and in other areas of scientific research) the key is to figure out what the right question is. Sometimes, by the time you do that, you already see the answer. So I would say: regarding the right questions to ask about the universe as a whole, I don’t know that anyone yet knows what they are. 20. P.S. : So you mean that the interactions REDISTRIBUTE the overall system of mass/energy ?……are the fields designed to do this ? ie. it is a rule , a principal , a fundamental one ? • This is indeed very fundamental to quantum field theory. I’m trying to think about how I can answer this — whether it has an answer that is meaningful, or whether I just have to say: “this is what fields do”. Remember that I explained that energy is (according to Emmy Noether, ) that quantity which is conserved (i.e. does not change with time) because the laws of nature are constant in time. Operationally, one first writes down equations that describe fields that interact with each other. Second, one asks, using Noether’s theorem, “what is the energy of the system of fields”? One finds there is energy that is associated with the interaction of the fields, though the amount depends in detail on what the fields are doing. So I think you’re imagining energy comes first, and then you put fields in it. But no, you start with fields and the laws which govern their interactions, and then you ask: what is the conserved quantity associated with this system of fields and interactions? Not the other way round. 21. P.S. 
2 : You cannot say that B is a result of A but B is not produced by A unless A is designed so that its mere existence is always accompanied with B. • I’m trying to address what I think your confusion is; I might be misinterpreting it. If A and B are sufficiently intertwined it can become impossible to state, in words, how they are related. In equations this would be very easy. 22. I do not agree that “the Moon and Earth cannot fly apart, and instead remain trapped”. There is one additional piece of energy you didn’t consider – the rotational energy of the Earth and Month. The Earth’s rotation pulls the Month apart by a few inches per century and in a couple of billions of years the Earth will loose the Month. However, it is realy difficult to imagine, how this tidal forces pulls the objects apart. • This is something that I left out, yes. I should probably supplement the article to explain it. As you say, it is tricky. • Actually, I think your statement isn’t correct, or at least there is serious debate. There are also statements in the literature that if the sun warms and the earth’s oceans boil away, the retreating due to tides will slow. But there seems to be more debate about the precise rates of tidal losses than I realized. Something to learn more about. I also hadn’t realized that there was actual data that gave information on tides and the moon’s location over geologic time scales. 23. THANKS MATT. : I very much accept that……….i mean this is the way our world is designed….this is the way every thing is connected to every thing , some times we have to take it as given. It is a great honor to have a dialogue with such a nice expert as your good self. 24. Hi Matt ! At the genesis of the universe, when all the fundamental fields were concentrated into a singularity, then presumably the negative energy would have been of infinite intensity. As the universe came into being and the fundamental fields expanded, then the negative energy within the fields would have dissipated to a point at which it was replaced by mass and motion energy , allowing the formation of stable structures. Would this be a fair summation ? • Well — first of all, we don’t know about the very earliest periods of the universe. We don’t know there was a singularity (and indeed, since a singularity by definition is a place where our equations break down, there’s no reason to think the equations we have right now actually work there.) So if I tried to answer your questions I’d be speculating wildly. Not that this stops theoretical cosmologists — it’s their job to speculate. But we do not know why the universe became hot and populated with ordinary matter (and presumably dark matter.) Also, DO NOT visualize the Big Bang as an explosion from a point. That is wrong. The Big Bang is an expansion of space, not an explosion into pre-existing space. But what it looked like when it began to expand was not a point — it may have been infinite to start with, or it may have been a region within a much more complicated pre-existing space-time, or its features may not have been interpretable as space at all. We certainly do not know the universe’s extreme past. 25. Now is it possible that one day we may discover that fields are ONLY our mathematical representation of what we observe with nice match , but the MOST fundamental ingredient of the world is something we never imagined ? related to this ; is our knowledge as per NOW can confirm that fields ARE the MOST primary ingredient with ultimate final confidence ? 
• Absolutely it is possible; it is even likely. Science does not provide final confidence; it provides tools for predictions and for consistent thinking. Those tools are always potentially subject to update with new information. The only thing we know is that those updates will preserve successful predictions of the past, as Einstein's revisions of the laws of motion cleverly maintained all of Newton's predictions.

26. Hi Matt, Are the disturbances in the field (well behaved or not) changes (fluctuations) in the value of the field? The following questions assume the answer to this one to be "Yes". But even if it's "No" you might still be able to see what's confusing in my mind. Is the quantum limitation a property of the fields (i.e. the change in the value of the field cannot be smaller than a quantum)? Does the energy of a general disturbance in the field (not a particle) have the same components as the particles (mass energy and motion energy)? Does a field have energy? Can two fields interact (or have a relationship) in a different way than the one described in the article (i.e. a disturbance in one field generates a disturbance in the second field)? Is there energy just because two fields that can interact have large values (in the same region I suspect) without the need of any disturbance?

• So — delayed answer: Disturbances in the field do involve changes in the value of the field, yes. They aren't changes in the average value of the field over all of space, but rather localized changes. (Caution: this is quantum field theory, so just as we can't simultaneously know the position and velocity of a particle, we cannot simultaneously know the value of a field and how it is changing… ) The statement that ripples in fields are quantized, however, is NOT a statement that a change in the value of the field cannot be smaller than a quantum. The value of the field can change continuously. It is the statement that a RIPPLE (an up-and-down change that resembles a ripple on a pond, in that the average change in the value of the field is *zero*) cannot have arbitrarily small height (i.e. 'amplitude').

For a given field, its ripples (i.e. its particles) all have the same mass. (Small caveat if the particles are very short-lived, but let's ignore that for the moment.) The electron is a ripple in the electron field; all electrons have the same mass. General disturbances can have any mass. You can, if you want, think of them as having mass-energy and motion-energy. It's not as useful as for particles, because (unlike a particle) these disturbances tend to fall apart right away (even faster than most unstable particles do.) So they don't tend to move very far, and if they bounce off of something they will typically emerge with a changed mass — very different from electrons, which can travel macroscopic distances and retain their mass even if they bounce off of something.

Fields do generally have energy, yes; if they are changing over time or over space, they always do.

Yes, it is possible for two fields that are non-zero but not disturbed to have energy due to their interaction. An example would be if there are two types of Higgs fields in nature rather than one; the average values of the two Higgs fields in nature will be determined by the requirement that their energy of interaction with each other and with themselves be minimized.
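That last example — two Higgs-like fields settling at whatever average values minimize their combined energy — can be illustrated with a toy potential. The functional form and every coefficient below are invented for illustration only; the point is just that once an interaction term couples the two fields, the energy-minimizing value of each field shifts away from what it would be in isolation.

```python
import numpy as np
from scipy.optimize import minimize

# Toy energy density for two non-zero, undisturbed fields phi1, phi2.
# All coefficients are invented for illustration; the kappa term plays the
# role of the interaction energy between the two fields.
def V(fields, kappa):
    phi1, phi2 = fields
    return (-1.0 * phi1**2 + 0.25 * phi1**4
            - 1.0 * phi2**2 + 0.25 * phi2**4
            + kappa * phi1**2 * phi2**2)

for kappa in (0.0, 0.3):
    best = minimize(V, x0=np.array([1.0, 1.0]), args=(kappa,))
    phi1, phi2 = best.x
    print(f"kappa = {kappa}: energy-minimizing field values "
          f"phi1 ~ {phi1:.2f}, phi2 ~ {phi2:.2f}, V_min ~ {best.fun:.2f}")
```

With the interaction switched off (kappa = 0), each field settles at about 1.41 in these made-up units; with kappa = 0.3, the interaction pushes both down to about 1.12 and changes the minimum energy as well — the preferred value of each field depends on the other.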
• This was the question I’d been asking myself for the last few hours, even since you introduced a difference between real and ‘virtual particles’, only one of which seems to be nicely quantized and well-behaved. These things seem described so differently, but it’s hard to picture what keeps them separated when they share the same field and both seem to have some sort of wave-shape (?) Could one not engineer a strong disturbance that happens to shape itself into an actual particle? Can I make a disturbance large enough that its peak reaches a quantum, can I then make it oscillate by some mechanism? I can’t see where nature draws the line. 27. Elizabeth Maas About quantum units being dependent upon the “grid lines” of the field in which the are derived, is it conceivable that at an earlier stage of the universe’s evolution, these grid lines and quantum units were of a grander scale or at least of a different scale than known to us today experimentally? Consider time dilation for a mass, defined by spacetime’s absence or field knotting, traveling at near light speed relative to our frame of reference. Although there may or may not be a quantal unit of time, time’s arrow is a relativistic constant within it’s frame of reference. Time’s arrow accomodates the frame of reference. Furthermore, there is a fractal nature to the expansion of the universe – as well as a fluid nature with boundary partitions – some fields extend WITHOUT a time component – accounting for quantal entanglement. Some mysteries remain eternal! My point: Is it conceivable that the scale of a quantal unit is dependent upon the scale of the field’s “grid lines” from which it is derived, so that the fields from which particles are formed have evolved and rescaled simultaneously with that of cosmic evolution. My answer to myself: “Yes.” • Sorry, I have no idea what you are talking about. Please define “quantum units” and “grid lines”; most fields could not have “grid lines”, by any definition I can think of, so I don’t know what you’re talking about. And “quantum units” is a non-standard and ambiguous term. 28. Elizabeth Maas Of all isotopes, iron-56 has the lowest mass per nucleon. With 8.8 MeV binding energy per nucleon, iron-56 is one of the most tightly bound nuclei. How would you explain this stability? What is it about the nuclear geometry of fields allowable by this number of protons and neutrons to account for this energy of mass defect? I have been intrigued by knot theory within-between fields so I have been exploring for representative phenomena. 29. Hi Matt, Have I asked my questions in a wrong way? I was wondering what is a ripple in a field. Is that an oscillation of the value (the property of the field you said can be measured everywhere) of the field? Thank you, 30. you correct pavel by being out by a factor of 100 from this site , but the site itself is also out by a factor of 100 I believe with this statement ‘Laser pulses are aimed at the reflector from Earth and bounce quickly back at 3 million meters per second – that’s about 186,000 miles per second, so the round trip takes less than three seconds’ also it says there “As an interesting sidelight, there will be no full moon in February of 2010. There will be fewer in the short month as time goes by because the Moon will take longer to orbit the Earth as it spirals away” really ? the slow rate of the moon drifting away is more than compensated by the slowing rotation of the earth, making lunar months longer in scientific seconds, but shorter in days. 
it would seem that if we have anough patience, there will never be a february without a full moon. • That’s why I gave two sites; any given site can either be wrong or have a typo. Of course if a typo gets copied things are correlated. But still — you’re not implying that the 2 cm/year number was wrong, are you? Instead you just mean that that website should say 300 million meters per second. 186,000 miles per second (and three seconds for the round trip) is correct. Looks like a simple typo there. Your point about February is more substantive and seems correct, because isn’t the end result of tidal friction (except for one subtlety) that the earth day and lunar month become equal? So the day becomes very long, the moon maintains a fixed position in the sky, and the moon is new and full every day; some parts of the earth see the moon all the time and some never see it. And I’m not even sure we can talk about February; how many days will there be in a year, by then? The subtlety is that the earth’s oceans may boil away, due to the sun becoming hotter, long before that happens, vastly reducing tidal friction and pushing this end-point off for I have no idea how long, but certainly longer than the earth is likely to survive the sun’s expansion as it reaches old age. 31. Prof. Strassler, Just to be clear: we are restricting ourselves to a flat Minkowski spacetime, are we not? The whole concept of “energy” and its conservation becomes rather problematic in the curved spacetime of general relativity unless some univeral Killing feld is imposed (which violates the general covariance requirement of general relativity). When both time and space can “bend” depending on spacetime’s contents and on the motion of mass-energy-stress through it, the symmetries required for a meaningful definition of energy as a conserved quantity aren’t present. • Edit: That should be “… unless some universal Killing field is imposed …” • What you say is true and not true. We are approximating the curved space picture by assuming only the time-time component of the metric is curved, and representing that as the gravitational field. Any effects that go beyond this approximation are minuscule. Physicists always make useful approximations in order to capture the physical phenomena to the extent possible. You are rejecting the approximation in which energy can be used because it isn’t exactly accurate. You say we should use Einstein’s general relativity to do this correctly. And you talk about the curvature of spacetime instead. But in that case, shouldn’t you worry about the fact that general relativity is also wrong, because it doesn’t account for the fact that the earth and moon’s particles should be described using quantum mechanics? And in fact, that’s not enough, because spacetime itself is quantum mechanical at very short scales. In other words, your picture is incomplete too. One of the hardest things to learn in physics is when subtle effects matter and when they don’t. You’re so focused on getting things exactly right that (a) you have forgotten that you don’t have them exactly right either, and (b) there is a physics point which you are making much more confusing than it needs to be in the process. First let’s make sure we understand energy and why things are bound together; then we can try to understand how, in some circumstances (but not this one), general relativity forces us to account for the fact that this is not a sufficiently good approximation. 32. 
According to general relativity, the Earth and the Moon are not feeling any “force” of gravity. They are both traveling in geodesic orbits around their common center of mass — i.e., they are in free-fall along geodesic paths that are curved due to the curvature of spacetime: both space and time are curved as a result of the presence of their mass-energy-stress within the spacetime. In GR, a body moves along a geodesic (not along a straight line) unless affected by an outside force. There is no Newtonian “gravitational field”, just the dynamic metric of spacetime. So your earth-moon diagram is roughly correct, according to GR, if you substitute “4-dimensional manifold with a semi-Riemannian metric that varies according to Einsteins equation” for “gravitational field”. But that is a quite significant detail, and very different from how you view the problem, and approach it mathematically, in Minkowski spacetime. • Most of what you say here is wrong… you have Einstein correct, but you have not understood that what I said is also consistent with Einstein. First, I did not say the words “gravitational force” in my article. Nor did I say “Newtonian field”. You put words in my mouth — so why are you criticizing me for using them? You are right there is no Newtonian gravitational field — however, you are wrong beyond that point. The metric IS associated with the Einsteinian gravitational fields — and in particular, in situations where you have two slowly moving, weakly gravitating objects, the only component of the metric which is significantly different from flat space is the time-time component, and the only components of the Einsteinian gravitational fields which are significantly different from zero are those that are derived from the time-time component of the metric. See Weinberg’s book on the weak-gravity limit. (You are perhaps not familiar with the field language, but it works just fine.) The approximation I am making is that the other components of the gravitational field are very small — an approximation whose limitations can be measured with precise techniques, but which is accurate enough that everything I said about binding, and binding energy, gives the correct result. Just as we should not waste our time worrying about the quantum mechanical corrections to the earth-moon system, we should not worry about the components of the Einsteinian gravitational fields that are so small that they do not affect the dynamics of the earth-moon system. 33. A slight correction: the force that the Earth and Moon do feel is a tidal force: Because the curvature of spacetime in which they travel is not uniform, the paths that some parts of these bodies travel is slightly different from the paths that their neighboring regions are trying to travel. This tends to try to pull them apart. But because they are semi-rigid bodies, these sheer forces are of course resisted by the electromagnetic forces holding them together, so the motion of the body as a whole is affected. And that is why the Earth’s rotation is slowing down and the moon’s orbital velocity speeding up (causing its orbit to expand) – due to the tidal forces induced by differential curvature of the spacetime which they inhabit. 34. Because spacetime in GR is curved, there is no general definition of parallel vectors, nor parallel transport. In most spacetimes in general relativity, there can be no global family of inertial observers. That is, spacetime in GR is Lorentz _covariant_ only locally, not globally. 
Although energy at a point (or in a sufficiently local region where spacetime curvature is negligible) can be defined, in general, an observer cannot know the energy at an arbitrary distant point. And if that local energy is unbounded from below, or sufficiently negative, spacetime itself becomes unstable. So I was surprised by your use of the Earth-Moon/gravitational system to illustrate a rather semi-classical mechanics view of energy. You seem to have crossed The Line That Should Not Be Crossed – conflating quantum mechanics based on Hamiltonians and Minkowski spacetime with gravitation based on Einstein’s equations and curved spacetime. As with your goal of correcting the common misrepresentation of “particles”, shouldn’t we be careful to use the most accurate, up-to-date description that we have – currently, still Einstein’s General Relativity — while being very explicit as to its limitations? • Oh, come on. This is ridiculous. Please stop talking to me as though I’m an idiot. The most up-to-date description of gravity would treat the earth and the moon as quantum mechanical systems. What’s your argument for not doing that? Are you seriously suggesting that comparing hydrogen to the earth-moon system is so completely wrong that absolutely nothing useful can be learned by doing so? And that it would be better to leave people so confused about hydrogen (AND the earth-moon system) that they cannot understand why structure forms in the universe? If so, I advise you to run your own website, and explain things your own way. 35. Hi Matt, I think I know why you’re not answering my questions. I sincerely apologize for my (childish) behavior. • Calin — the reason I haven’t answered is that your first question was tough to answer without a long reply, and I set it aside. Then I forgot about the restatement. Let me try to get back to it. I’ve had a lot of comments (and a lot of work too) in the last few days. p.s. Now I’ve answered it. 36. 1)so the moon moves 3.8cm per year away or 3.8 metres per century which is about 12 meters extra travelling distance and 12 milli-secs per century for its orbit as the moon travel about 1000 meters/sec But the solar day becomes 1.7 ms longer every century and so a month will last about 51 ms longer /per century. if you can put the 2 figures together then a lunar month should be about about 39 ms shorter per century(51-12). for the moon always to be visible in february, it would have to lose 37 hours so that it would only be 28 days long. as there are about 3,400,000 sets of 39ms in 37 hours, it should take 340,000,000 years until you can be assured of a visible moon in february with the day being 25.6 hours and 342 days in a year. your link says 15 billion times 3.8cm =570,000 km but it only has to travel another 176,000 km to get into synchronous orbit which is about 4.5 billion years. (3)in 15 billion years the earth should take another 71 hours to rotate daily (@1.7 ms longer every century )for a total of 95 hours in a day when the moon reaches synchronous orbit with Earth . currently the moon takes 708 hours to orbit the earth. is that what happens as the moon gets into synchronous orbit, it gets a rapid increase in speed ? • If it’s not too late to follow up on this discussion of tidal locking, let me mention that it was the topic of a summer project I did as an undergraduate, more years ago than I care to admit. 
My job was just to make educational animations, but I had to learn something about the related physics as well (at the Newtonian level, of course). My employer seems to have abandoned his Web site before adding this project, so I went ahead and resurrected it here. There may well be errors, but I hope the animations and accompanying discussion and references will prove educational. In discussions like (1) and (3), we need to carefully distinguish sidereal time and synodic time. A sidereal month is the amount of time it takes for the moon to orbit once around the Earth, with respect to the background stars, currently 656 hours. A synodic (lunar) month is the amount of time between successive new moons from the perspective of an observer on the earth, currently 708 hours. Similarly, a synodic (solar) day is the time from one sunrise to the next, while a sidereal day is the amount of time it takes for the earth to rotate about its axis, with respect to the background stars. (Of course there are further caveats, refinements, and so on, which are unimportant here.) In the synchronous orbit expected in the far future, a sidereal day will last as long as a sidereal month (the number I have in that old project is about 1130 hours), the same point on the Moon will always face the Earth (this is already true today), and the same point on the Earth will always face the Moon. Whether or not an observer sees the moon in February or any other month would depend on where that observer were located on the earth: part of the planet would always see the moon, the rest would never see it. Regarding (2), as the moon moves farther away from the earth, the rate of this motion away from the earth decreases. And (re: 3) as the radius (semi-major axis) of the Moon’s orbit increases, its angular velocity also decreases, as described by Kepler’s third law. 37. Hi Matt Let’s say I superglue two strong magnets together with the south poles touching. This object has some positive interaction energy in addition to just the masses of the original magnets and the glue; so, if I take a very accurate scale, and weigh this object against a similar object but with a south pole glued to the north pole, the first object would actually be heavier? 38. David Schaich | liked your website. does it have the actual mathematical equations though ? 39. Hey Matt, A physics newbie here, trying to wrap my mind around interaction (“potential”) energy and total system energy. I think I understand the gist of what you say in this (great) article about the relationship between mass, interaction energy, and inter-system relationships. I do get that interaction energy is essentially what defines the energetic boundary of a system that keeps it from fragmenting into separate, basically isolated systems (is that the right way of saying it?). But I am still somewhat confused, as I try to explain below. This post gets a bit longer than I meant it to, but I’m not sure how to get my thoughts across any more succinctly, so my appologies for that. =P In my current reference texts, I am being introduced to the idea of attractive forces as having negative interaction energy, such that two isolated systems starting from rest and given even a small attractive force between them, over an infinite distance, will show a net kinetic-interactive energy sum of 0, even as the interaction energy decreases infinitely, and the kinetic energy increases infinitely. 
As the separation distance approaches infinity, the interaction energy approaches zero (as in the case of gravity). And by further reasoning, as the separation distance approaches zero, then the potential energy approaches negative infinity. But my mind trips over the accounting of it, I guess you could say? I am just not quite clear on why the interaction energy is negative, even as I understand the reasoning that leads to the conclusion that the interaction and kinetic energy, summed, must = 0; since both isolated systems started with only their rest energies. I find it mentally far more clear to restate the situation. Because while we are going from two isloated systems to a compound, two-object system, we are “injecting” this new factor, the separation distance, into the behaviour of the system. And wherever there is separation distance, and a force that can act over that distance, there is interaction energy, at least as far as I understand it. So the interaction energy of a system with an infinite separation distance, but some acting attractive force (however small), is in fact *infinite* by this reasoning (even as it is somehow -> 0, which is in fact an increase from infinitely more negative states…). And as separation distance approaches 0, so does interaction energy (just as when one approaches infinity, so does the other). No separation distance, no distance for any attractive force to perform internal work over. So if you have two massive particles that start at rest, *with a separation distance between them*, and assume these particles have no electric/contact forces (only gravity is present), then you basically get a gravitational oscillator, where – relative to one particle – the second particle oscillates through the first along one linear path, up to a maximum distance equalling the initial separation distance. Also, the kinetic energy = 0 at either extreme of the oscillation, since the attractive force has been working across that distance as it moves away from the other particle the entire time, while the interaction energy = 0 at the single instant where both systems occupy the same space, since particle has been accelerating that entire time (kinetic energy = total energy – rest energy), and there is now no distance between the two particles whatsoever. In no instant in this system is the interaction energy ever 0, and the sum of the kinetic and interaction energies of the system is always a constant. Indeed, as far as I can tell, the only reason the interaction energy becomes negative at all is because we define the ‘zero point’ of potential energy to be some point where the separation distance IS NOT zero. And I’m not clear as to why this is a useful assumption. Anyway, having said all that, I am unclear as to how to reconcile this “gravitational oscillator” perspective – where separation distance and interaction energy are both always non-negative, and increase with each other – and the case in which interaction energy and kinetic energy start at 0 from two isolated systems, and then the former decreases without bound as the latter increases without bound. (It is worth noting that the ‘escape energy’ of this sort of system, from the perspective I describe, would be a point at which the potential energy suddenly starts dropping to zero as the attractive forces move ‘out of range’, and the systems become effectively isolated, even as the particle we deem to be ‘moving’ relative to the other retains a nonzero kinetic energy. At least, that’s as far as I can reason it.) 
Hopefully this reasoning isn't just a giant jumble, and any insight you could perhaps provide would be greatly appreciated. These relationships are just not quite coming together in my head in the way they have been presented to me so far. Thanks again, – Chris

• Chris — your confusions are very natural and common, and your reasoning (I admit I didn't go through every detail) seems sound. As you say, it is an option, when dealing with energy in classical freshman-level physics, to set the zero of energy wherever we like, and either perspective you outline is allowed. Experience, however, will teach you that setting the zero of energy at infinite separation is a more consistent thing to do. For instance, suppose that you set the zero of energy so that comet #1 has zero interaction (i.e. "potential") energy at its closest approach to the sun and positive energy further away. Well, now if comet #2 has an orbit that brings it closer to the sun, it will have negative potential energy. So what you'd have to do to describe a whole solar system using only positive interaction energy would be to find the comet with the closest approach to the sun, and set the zero of potential energy there. Or even more appropriately, put the zero of potential energy at the dead center of the sun (where it is finite because the sun is a spread-out sphere.) But you see: to describe this system's energy you need to know many of its details. This is not convenient, and it is very system-dependent; add one more comet, or make the sun a little more or less dense because of its evolution over time, and you may wish you'd set the point of zero potential energy differently. In contrast, if you always set the zero of potential energy at infinite separation, this is system-independent, and always works (as long as you're not dealing with a significant fraction of the universe, or something else that renders ordinary classical physics insufficient.) You don't need to know anything about what's in the system to do this. And that's why, with experience, you'll see this is by far the best choice. The alternatives work in specific situations, but they don't lead to a useful and general theoretical picture.

• Thanks Matt; I almost hope you didn't go through it all, rambling as it was! Anyway, I was increasingly sure that negative interaction energy had to be a conscious, yet arbitrary choice on the part of the physics community for some reason of conceptual simplicity, but for whatever reason my text ("Matter and Interactions" for the curious, which has worked very well for me so far short of this exception) just didn't go into the reasoning that led to that choice in any detail, and I was having difficulty finding other good articles explaining the justification. Your explanation helps clarify that, so thank you very much. I'm not sure I completely grasp your points as to your example of the solar system, though: I do get what you are saying about the interaction energy becoming negative if, in this situation, we set the zero point at any arbitrary distance from the center of the sun, with respect to comet orbits or anything else. The same sort of approach may be applied to objects on Earth's surface with respect to the surface of the Earth, below which they cannot move, even as the force of gravity still pulls upon them from Earth's center of mass. So I think I get that.
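To make the zero-point discussion above concrete, here is a minimal numerical sketch (Python; the gravitational constant is real, but the masses and the reference radius r0 are illustrative numbers I have assumed). It compares the conventional choice U(r) = -GMm/r, which is zero at infinite separation, with a shifted convention that reads zero at r0. The two differ only by a constant, so energy differences and forces, the only quantities that enter the dynamics, come out identical either way.

```python
# Gravitational potential energy under two zero-point conventions.
# Illustrative setup: a "sun" of mass M and a small body of mass m (SI units).
G = 6.674e-11               # gravitational constant, m^3 kg^-1 s^-2
M, m = 1.989e30, 1.0e13     # assumed masses
r0 = 1.0e11                 # assumed reference radius where the shifted convention reads zero

def U_infinity(r):
    """Conventional choice: U -> 0 as r -> infinity (negative at any finite r)."""
    return -G * M * m / r

def U_shifted(r):
    """Alternative choice: the same function plus a constant, so that U(r0) = 0."""
    return U_infinity(r) - U_infinity(r0)

r1, r2 = 5.0e10, 2.0e11
# The energy *difference* between two separations is identical in both conventions...
print(U_infinity(r2) - U_infinity(r1))
print(U_shifted(r2) - U_shifted(r1))
# ...and so is the force, estimated here as -dU/dr by a crude finite difference.
dr = 1.0
print(-(U_infinity(r1 + dr) - U_infinity(r1)) / dr)
print(-(U_shifted(r1 + dr) - U_shifted(r1)) / dr)
```

Since only differences and derivatives of U appear in the physics, the additive constant is a free choice; the zero-at-infinity convention is simply the one that can be applied to any system without knowing its details in advance.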
If I understand correctly, you're saying that – assuming a positive interaction energy perspective – if we place the zero of the solar system's interaction energy at the dead center of the Sun, this basically makes the most physical sense (and this is basically the approach I outlined above re: gravitational oscillator), since that is basically the point to which the gravitational force is always trying to pull all solar objects. You say that we need to know many details of this system; can you give me one or two examples? I can see that we need to know the Sun's radius, so that we know the closest any object can ever get to the zero-point at the center of the sun. But I'm not clear what other factors would be critically important to bear in mind. One thought that occurs is that while this approach is simple when we assume the Sun is stationary, it becomes far more complicated if our frame of reference sees the sun moving around with respect to us, which basically means the zero-point of the solar interaction energy is wandering around too… You also say that adding one more comet, or changing the density of the Sun, can affect the way we interpret the interaction energy for objects within this system, based on this approach. I am presuming this is because such changes would influence the center of mass of the solar system, and thus the point to which objects are trying to gravitate, and so 'moves' our zero-point. Is this reasoning correct? Thanks again for humouring my questions; your assistance in understanding these ideas is very much appreciated. =)

• No, it doesn't change the center of mass much — that wasn't my point. The changes in the solar system will assure that certain objects will have negative interaction energy despite your best efforts to avoid it. For example: set the energy for an object located at the center of the sun as the zero of energy. Now imagine a comet falls into the sun, making the sun's mass larger. Well, the interaction energy for an object at the center of the sun just decreased. So in this process, the energy at the center of the sun has now gone negative. This is not very convenient. Or suppose you put the interaction energy of Mercury and the Sun to be zero. Now in comes a comet; it passes Mercury and goes closer to the sun. Now its interaction energy is negative; do you want to redefine where you put the zero just because a new comet came closer than Mercury? Best to put the zero of energy at infinity, and not be affected by these details at all.

• Slobodan Nedic: Thank you very much for articulating the apparent inconsistencies in the definition of a system's energy, and the relationship of interaction and potential energies. Not only that you mentioned the "Gravitation Oscillator", on there is an article and supporting material on the plausibility of founding a system's energy on the minimal work needed to be done for moving of its parts over *closed* paths, whereby it turns out to be non-zero – meaning the essential non-conservativeness of all natural orbital systems, contrary to the common assumption (i.e. energy AND angular momentum conservation) …

40. Hey Matt, thanks again for the clarifications. I think I've about got the idea now, between my readings and your responses to my questions.
So, I wanted to ask, for purposes of clarification: Does it make sense to define the "interaction energy" of a multiparticle system, in terms of the individual rest energies, as essentially – in attractive systems – the amount of rest energy that the two particles, when interacting, are able to convert into kinetic energy and, in some form, eject from the interacting-state system (as in the case of a proton-electron pair that ejects a photon/quantum of energy)? Or, in the case of repulsive forces, the amount of kinetic energy that a particle can, by interaction with another particle, convert into additional rest energy within the interacting-state system? If one assumes that all physical particles are always trying to enter states with a lower total rest energy (for whatever reasons I don't yet grasp), as I understand is essentially the case from my limited experience with the basics of chemistry, then that seems to make sense. That said, I'm wondering if I am connecting dots that aren't there. Is this an accurate conceptual interpretation of interaction energy, or am I on the wrong track?

• I don't think I'm understanding the way you're thinking. What do you mean by "individual rest energies", or by "convert into kinetic energy"? In an atom, the rest energies of the particles are just E_rest = mc^2 for each elementary particle. The interaction energy of a system of elementary particles would be the Mc^2 for the whole multi-particle system, minus the sum of the mc^2 for each elementary particle, minus the kinetic energies of each particle. That is the simplest way to say it. I don't know what it could mean to say "all physical particles are always trying to enter states with a lower total rest energy". All electrons have the same rest energy: E_rest = m_electron c^2. And physical particles in a multi-particle system don't do things independently; the system does things. Within a system, energy is conserved; energy can only be lowered if some energy leaves the system. For example, an atom can fall from an excited state to a less excited state, lowering its total energy and making its interaction energy more negative (though also the electron's kinetic energy more positive, but not by as much) — but this can only happen if the atom emits a photon, which leaves the atom. So I don't know where you're going with this line of thinking, or what you mean. What are the basics of chemistry that you are trying to rely upon?

• Slobodan Nedic: This post was primarily intended for Chris Rowlands …

41. Upon binding, say a proton and electron, the loss of potential energy has to go *somewhere*, right? I'm pretty sure a 13.6 eV photon is emitted. The equation m_atom < m_proton + m_electron would be more clearly written, with an addition, as m_atom + m_photon = m_proton + m_electron. (You know what I mean when I say m_photon… of course the equation would be more correctly written in terms of energies to avoid any confusion about the photon having a rest mass.)

42. Pingback: Courses, Forces, and (w)Einstein | Of Particular Significance

43. I just wanted to say thank you for the clear and informative articles on your site and for taking the time to produce them and answer people's questions. I'm sure I speak for many others when I say that your work is really appreciated. Long may it continue!

44. Pingback: Page not found | Of Particular Significance

45. Pingback: A Short Break | Of Particular Significance

46. I get pleasure from, cause I found exactly what I used to bee taking a look for. Have a great day. Bye
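As a numerical footnote to the proton-electron binding discussed above, here is a minimal sketch (Python) of the mass-energy bookkeeping: the measured mass of a hydrogen atom is slightly less than the sum of the proton and electron masses, and the deficit, expressed as energy, is roughly the 13.6 eV carried off by the photon. The mass values are rounded from standard tables, and the result is sensitive to their last digits, so the printed number should be read as approximate.

```python
# Mass-energy bookkeeping for hydrogen formation: p + e -> H + photon.
# Masses in atomic mass units (u), rounded from standard tables; 1 u ~ 931.494e6 eV/c^2.
u_to_eV = 931.494e6

m_proton   = 1.00727646688   # u
m_electron = 0.00054857991   # u
m_hydrogen = 1.00782503224   # u (ground-state hydrogen atom)

mass_deficit = (m_proton + m_electron) - m_hydrogen      # in u
binding_energy_eV = mass_deficit * u_to_eV

print(f"mass deficit                 : {mass_deficit:.3e} u")
print(f"binding (interaction) energy : {binding_energy_eV:.1f} eV")   # ~13.6 eV
print(f"fraction of total rest energy: {mass_deficit / m_hydrogen:.1e}")
```

The last line makes the scale of the effect clear: the interaction energy of the hydrogen atom is about one part in 10^8 of its rest energy, which is why the sign convention matters for bookkeeping but the rest energies utterly dominate the total.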
47. Pingback: Quora

48. dr md zakir Hussain: Loved the article… a very beautiful post.

49. Pingback: What Interaction Holds Two Atoms Together | Jemiya1

50. In 2014, Helen Quinn (from SLAC) recommended using the expression "interaction energy" instead of "potential energy" ( ). This expression does not seem to be used commonly — and on this webpage, it seems to be a "self-invented" term. Is it possible to say a little about the "origin" or "tradition of use" for this expression "interaction energy"?

51. Pingback: Moving Through Time or Space, Where Does Your Energy Go? « Jim Ritchie-Dunham

52. Hi, Professor Strassler, could you tell me what a non-ripple disturbance in a field is (as mentioned at the beginning of your post)? Kind regards

53. Could you say what a non-ripple disturbance in a field is (mentioned at the beginning of your post)?

54. Pingback: Moving Through Time or Space, Where Does Your Energy Go? – ISC

55. While reading this article on the search for Dark Matter particles, I am reminded of this article on binding energy. Why aren't we accounting for gravitational binding energy as the mass that makes up Dark Matter?
Slaying a greenhouse dragon

by Judith Curry

On the Pierrehumbert thread, I stated: So, if you have followed the Climate Etc. threads, the numerous threads on this topic at Scienceofdoom, and read Pierrehumbert's article, is anyone still unconvinced about the Tyndall gas effect and its role in maintaining planetary temperatures? I've read Slaying the Sky Dragon and originally intended a rebuttal, but it would be too overwhelming to attempt this and probably pointless. I was hoping to put to rest any skeptical debate about the basic physics of gaseous infrared radiative transfer. There are plenty of things to be skeptical about, but IMO this isn't one of them.

Well, my statement has riled the authors of Slaying the Sky Dragon. I have been involved in extensive email discussion with the authors plus an additional 10 or so other individuals (skeptics). Several of these individuals on John O'Sullivan's email list actually agree with my assessment, even though they regard themselves as staunch AGW skeptics.

One of the authors, Claes Johnson, along with John O'Sullivan, expects a serious critique from the climate community. Johnson says he intends to submit his papers to a peer reviewed journal. I agreed to host a discussion on Johnson's chapters at Climate Etc., provided that the publishers of Slaying the Sky Dragon would make Johnson's chapters publicly available on their website (which they have).

Johnson's first chapter is entitled "Climate Thermodynamics," which presents an energy budget for the earth and its atmosphere that does not include infrared radiation. The second chapter is entitled "Computational Black Body Radiation," which seeks to overturn the last 100 years of modern physics and concludes that "back radiation is unphysical."

For background info:
• Claes Johnson's website is here
• Johnson's blog is here, see specifically these posts (here and here)
• John O'Sullivan's advert for the debate at Climate Etc. (note Monckton and Costella are in my "corner" in criticizing the book and Johnson's chapters).

I suspect that many undergrad physics or atmospheric science majors at Georgia Tech could effectively refute these chapters. I'm opening up this discussion at Climate Etc. since
• the Denizens seem to like threads on greenhouse physics
• I'm hoping we can slay the greenhouse dragon that is trying to refute the Tyndall gas effect once and for all.

It will be interesting to see how this goes. Claes Johnson has said that he will participate in the discussion.

Note: this is a technical thread, please keep your comments focused on Johnson's arguments, or other aspects of Slaying the Sky Dragon. General comments about the greenhouse effect should continue on the Pierrehumbert thread.

2,518 responses to "Slaying a greenhouse dragon"

1. It's an interesting concept, that an atom cannot absorb (but only reflect) incoming EM at a cooler temp than its own blackbody emission temp at that instant. No idea if it's true. My layman's understanding of the thermodynamics constraint was just that it described net transfer, which must always be from hot to cold.

2. I have to agree with omnologos on this. Inferring that all those skeptical of the man-made global warming meme (some, like us, skeptical of the greenhouse gas theory itself) are supposed to be seeking a unified front as if we are a political or military force is, frankly, absurd.
We prefer to leave ambitions to claim a consensus to the post-normal science green brigade; they appear to have abandoned the traditional tenets of the scientific method. Consensus is utterly meaningless – being proven right is the goal even when the so-called 'consensus' is adamant we are wrong. The statement, "I suspect that many undergrad physics or atmospheric science majors at Georgia Tech could effectively refute these chapters" is so funny coming from someone who is "too busy" to do what she infers is such a basic task, herself.

• Well, I mainly found it interesting that a number of people on your self-selected email list were highly critical of the book and Johnson's chapters. Your email list does not begin to reflect the broader range of skeptical opinions.

• Judy, I intentionally invited to participate those whom I knew to have contrary views. This is the whole point of debate, isn't it? Let's see some actual analysis please, rather than the insults and hand waving so far displayed by those made uncomfortable by what the book presents.

• Thank you for mentioning me, John, as my comment has been snipped out. Perhaps I should be glad it deserved that much attention.

3. Hi Judith, I was positively surprised by the first chapter, which corresponds to the mental model I have formed about the GH effect, but I do not really see where it is in conflict with the mainstream view nor why it is independent of infrared radiation. On the contrary, it explicitly agrees with the mainstream view, that is, that the TOA is variable in height and the higher it is, the more GH gas is present. The only thing it adds is that the lapse rate below the TOA is related to thermodynamics and not radiation, and that the lapse rate can vary with humidity and thus is a potential feedback (negative feedback). Up to here, I perfectly agree, apart from the fact that one should mention that it is an approximate model, because all radiation does not happen at a precise TOA height; the TOA is an average concept. The atmosphere is not perfectly IR opaque and then IR transparent, it is semi-transparent, so radiation is a diffuse process and all radiation occurring at the TOA is only a (useful?) approximation. At this point, the model does not allow one to predict the change of T_ground when CO2 is doubled; what would be needed is the change in TOA height from CO2 doubling, and the various H2O feedbacks (on the TOA itself, and on the lapse rate). Still, this model seems to me much more useful and closer to reality than a pure radiative model with an IR-opaque shell-like atmosphere concentrated at the TOA, and the (negative) feedback of H2O on the lapse rate seems perfectly valid (and not mentioned explicitly in previous GH accounts I have read). This first chapter does not ring any physical alarm bells though, so I guess reading the rest makes sense, and I am for now positively surprised by "Slaying…"…

• Oops, forgot to say: read the Pierrehumbert thread where I attempt to expose the mental model I built about the GH effect. I did that only from the various GH threads here, at wuwt and rc, not from the "Slaying…" chapters… So you see why this first chapter was appealing to me :-)

• Ouch, started reading the second chapter about blackbody radiation… yikes, this one is definitely in crackpot territory, so "Slaying…" is a kind of mixed bag imho. If most chapters are like the first one, it is worthy; else (or if the conclusion holds only if all chapters are true), then it will be easily debunked…

• Kai, the first chapter rests on the result of the 2nd chapter (they are both written by Claes Johnson), i.e.
there is no back radiation and atmospheric infrared radiative transfer is not important in the earth's energy balance. So if Ch 2 is crackpot, then Ch 1 is also.

• Dr Curry, to help the readers understand, please: 1. elaborate your definition of back radiation and your concept of it, 2. explain what's wrong with "atmospheric infrared radiative transfer is not important in the earth's energy balance". The Earth's mass is so huge compared with the atmospheric mass. The Earth's emitted IR energy is so huge compared with atmospheric absorption of IR energy. Will you care to do a comparison? Why is NASA's radiation energy balance for K-12 incorrect?

• Sam, I essentially agree with Pierrehumbert's essay on this topic, see the previous Pierrehumbert thread

• Dr. Curry, I admire your tactics of diverting your GT students' attention to avoid direct answers to direct questions. Soon I found that the Figure 1 model there is not a true representation of atmospheric radiation transfer, namely, it lacks the cloud radiation transfer and the layers' direct radiation transfer to the Earth's surface.

• kai, "… this one is definitely in crackpot territory". This is very disrespectful to an author who tries to sort out radiation misconceptions; care to elaborate?

• Is he really trying to sort out radiation misconceptions? Whether he tries it or not, the result leads one to think the opposite. The book in which the article appears is definitely trying to increase misconceptions.

• It's easy to make a generalized comment. I find generalised comments do increase misconceptions. Will you be more specific, such as listing them out item by item, concept by concept, misconception by misconception, page by page? Doing it this way helps the readers understand your points of view.

• Sam, I have done that kind of commenting in tens of messages. Repeating similar statements a hundred or a thousand times more is not going to stop requests like yours. When you stop commenting, there will always be a new participant who starts from the beginning again. That will go on as long as this site is active.

4. Dr. Curry, I am sorry you felt obliged to give so much space to the 'Dragon' book. But if anything, it will unify skeptics by giving many something to agree on that fails as a skeptical case. I see this book as sort of a left-hand parenthesis to Hansen's Venus-ization of Earth as a right-hand parenthesis, expressing clear markers where wishful thinking has taken over.

5. Judy: I do not say that radiative transfer plays no role in climate. It would be helpful for the debate if you would read what I write and not freely invent crackpot themes.

• Well yes, you admit to solar radiation and black body radiation. But your treatment in the first chapter completely omits atmospheric gases (and cloud) infrared radiative transfer (and includes that ludicrously incorrect diagram from more than 10 years ago that somehow continues to exist on a NASA web site).

• Having said that Johnson is wrong, I'd like to point out that his first chapter on climate thermodynamics – emphasizing heat transport by convection and evaporation, but not including radiation from the atmosphere – is no more wrong than Pierrehumbert's article, which does the opposite, making the incorrect claim that the surface temperature can be determined by a calculation that only includes radiation, ignoring convection and evaporation. And yet of these two incorrect articles, Judith refers to one as "excellent" but says that the other could be refuted by undergraduates.
I wonder if these same undergraduates could refute the Pierrehumbert article? I expect most could not, because the new generation of students are being brainwashed in the same way, for example by the GaTech course “EAS8803 – Atmospheric Radiative Transfer”. I note that the blurb for this course says that “Topics to be covered include the radiative balance at the surface”. I do hope that you have some students bright enough to realise that there is no radiative balance at the surface, and that one day this fact will dawn on those who design and teach the course. • Sorry, posted this in the wrong place in the thread. • Why NASA did not correct it and misled the general public for over 10 years with that incorrect diagram? Or NASA is incapable of understanding the subject of radiation? Or under the authority of James Hansen, no one in NASA dare to correct it? • This diagram apparently first appeared in a doc designed for K-12 education. The names Eric Barron (currently president of Florida State University) and John Theon were on the doc (back when theon was still employed at NASA and Barron was at Penn State, which places it in the mid 90’s). But I assume this diagram was drawn by a staff person, and Barron didn’t pay close attention. That is the only way I can explain this. Somehow John O’Sullivan spotted this (or at least publicized this). And it sits on a web site to the present day. In spite of my contacting several people about this. The bottom line is that there is too much form and not enough substance oversight on public communication documents (as opposed to satellite data quality issues, where there is a lot of oversight and checks and balance in place at NASA). • Over the 10 years, this diagram has misled the K-12 students, the teachers, the politicians and the world who visited the NASA site. This is a serious American educational flaw that NASA, Eric Barron and John Theon should be informed to correct the diagram or delete from the NASA website and owe the American Education and the world an apology. If you have not asked them to correct it, please do as an educator at the Georgia Tech. 6. Ok fine then Judy: You don’t like the Kiehl-Trenberth diagram. So what is then wrong with it, as you see it? Maybe we share some insights? • I’m not clear which diagram you are discussing here. If it is Fig 5 of Chapter 2 of Johnson then it does closely resemble Fig 7 of Kiehl and Trenberth 1997. If the latter was ‘ludicrously incorrect’ then it was still given pride of place 10 years later (with added colour but no other changes apart from the caption) in IPCC AR4 WG1 Chapter 1, p 96 (2007). But I thought Dr Curry was referring to Fig 4 of Ch 1 of Johnson, which is also attributed to NASA, but which differs from the Ch 2 version in not showing any downward long-wave radiation. Is that also derived from K & T? 7. Judy: You say that “I suspect that many undergrad physics or atmospheric science majors at Georgia Tech could effectively refute these chapters”. I suggest that you actually try this as a take home exam for your students. From your teaching they will understand that Kiehl-Trenberth is wrong but maybe they will find something they think is right. Go ahead! 8. Apart from an over-indulgence in post-modern civility, the chapter on Climate Thermodynamics pursues the misconceptions underlying current AGW theory. A helpful touchstone for pdf files is a scan for the word equilibrium where used to describe what physical science calls steady states. 
I find three such instances in this chapter, all wrt the adiabatic lapse rate. Equilibrium states have no net fluxes of matter or energy entering or leaving. (Canonical ensembles allow fluctuations.) Equilibrium profiles are isothermal and the adiabat is not. Steady states require external fluxes to prevent them from relaxing to equilibria. The alert student should now be asking, how do I determine this flux needed to maintain an adiabatic profile? With CO2 doubling, one typically calculates a 2% flux reduction and then presumes a 2% increase in the thermodynamic potential difference (1/T) is needed to restore the flux level. Thus, given a 65K tropospheric differential, 1.3K. An alternative interpretation is that adding CO2 increases the resistivity of the troposphere, just as traces of phosphorus disproportionately increase the resistivity of a copper wire. Thermodynamics asks, what change in potential is required to restore the original rate of dissipation of free energy? In high school we learned the expression E^2/R, albeit in a different guise. Ergo, only a 1% potential change now compensates a 2% resistivity change to restore energy balance. When our student resolves the difference in these solutions, he should be able to answer his earlier question. Perhaps herein lies Sommerfeld’s dilemma – thermodynamics is not the intuitively obvious subject it may superficially appear. To paraphrase yet another quotation, ” …, and you’re no Arnold Sommerfeld.” • Is the presumption of a 2% change in (1/T) tied to a 2% change in flux found in textbooks, and generally accepted in the climate change literature? If so, then the generally accepted value for climate sensitivity is a factor of 4 too large. The reason is that the Stefan-Boltzmann law says j is proportional to T^4. Taking the derivative of both sides with respect to T, and then dividing both sides by the Stefan-Boltzmann law, and rearranging shows that the % change in T will be 1/4 times the % change in flux. Because 1/T contains T^1, the % change in (1/T) will also be 1/4 times the % change in flux. You and all other knowledgeable bloggers are asked to comment on and make any corrections to my calculations found at posted on Feb. 7 at 7:44 pm. 9. Well, of course Johnson is wrong. It is perhaps instructive and useful to try to explain why. In the ‘blackbody’ chapter he seems to think that a warm body can warm a cooler one but not a warmer one. He says at one point (sadly no page numbers) that there is two-way propagation of waves, but only one-way propagation of energy. How does that work? Are there two types of EM wave, one transporting energy and one not?! We can also ask him this : an isolated backbody is radiating into a vacuum. Then a warmer body is brought in. How does the first body ‘know’ to stop radiating energy in that particular direction? Later on he tries to use equations – but his equation (4) is just wrong. Where does this equation come from? What is u supposed to represent? Why is radiation given by the third time derivative of u? 10. The email debate of last week was the first geniune airing of the flawed Physics of AGW in all history. The fundamental flaws are explained in “OMG….Maximum CO2 Will Warm Will Warm Earth for 20 Milliseconds” posted at and at the website. Surprising that the truth was hidden in plain sight for so long. Since the show is now over, I felt it necessary to add one final comment “Climate Follies Encore” which explains the post 20 Millisecond exchanges. 
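For what it's worth, the quarter-power scaling raised a couple of comments above is easy to check numerically: differentiating j = sigma*T^4 gives dj/j = 4*dT/T, so a small fractional change in emitted flux corresponds to one quarter of that fractional change in temperature. A minimal sketch (Python; the 255 K emission temperature and the 2% flux change are just the illustrative numbers used in the comments, not an endorsement of either as the right figure for CO2 doubling):

```python
# Linearized Stefan-Boltzmann scaling: j = sigma * T^4  =>  dj/j = 4 * dT/T.
sigma = 5.670e-8      # W m^-2 K^-4
T = 255.0             # K, a commonly quoted effective emission temperature (illustrative)

j = sigma * T**4
dj_frac = 0.02        # a 2% change in flux, as in the comments above

# Linearized estimate: fractional T change is one quarter of the fractional flux change.
dT_linear = T * dj_frac / 4.0

# Exact: solve sigma * (T + dT)^4 = j * (1 + dj_frac) for dT.
dT_exact = (j * (1 + dj_frac) / sigma) ** 0.25 - T

print(f"j               = {j:.1f} W/m^2")
print(f"dT (linearized) = {dT_linear:.2f} K")
print(f"dT (exact)      = {dT_exact:.2f} K")
```

The sketch only checks the factor of 1/4; whether 2% is the right flux change, and which temperature it should be applied to, are the separate questions being argued in the thread.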
This has been the greatest education process, for the wisest among us, and we will now share. My chapter includes over 100 pages of footnotes and is supported by 60 articles in archive and Canada Free Press. We share a glorious future of truth. My thanks to Judy for enduring my repeated, well meaning barbs for over a year now. (co-author of SSD) 11. To PaulM: You have not read and understood my argument: I present a differential equation modeling two-way wave propagation combined with radiation and with a dissipative effect making the energy transfer one-way, from higher to lower frequency. If you don’t like this equation, give me one you think is a better model. Just words is to diffuse to discuss. • Perhaps you should start by reading a standard text book on Radiation Heat Transfer and then move on to some papers (H C Hottel would be a good start) and learn about the subject rather than propose some wild theory? The fact of ‘backradiation’ has been well tested in many situations, furnaces, radiation shields for thermocouples etc., let’s see you apply your theory to such situations and see how it works? • Please define back radiation which confuses me even though I had written something about it. To me, back radiation is reflection from the back with wall or relective radiation. A thermocouple when placed at the center of a pipe gain heat from the flowing media as well as radiation directly from the wall concentrated at the thermocouple measured an erronous fluid temperature. With such a wall you get radiation concentration. Without a wall, the radiation is minmal. Similar analogy for the greenhouse situation, greenhouse has glass or sheets of clear plastics to trap most IR, without this layer of wall, no trapping of IR and hence no greenhouse. It is obvious. 2 black bodies at different temperatures, they all emit IR with the resultant energy flow from the hotter to the colder in a free radiation condition. The colder can have an extremely small effect of slowing down cooling of the hotter unless a back wall from the colder reflect the radiation to other directions are reflected by the back wall. I have not read Mr. Claes Johnson’s article about the radiation. I will assume he is mostly correct in a free field radiation as in most climate situations. There is no back radiation. Furnace, thermocouples etc cases are not free field radiation cases which involved walls of reflecting radiations. Radiation involves walls of reflection has back radiation. 12. curryja | January 31, 2011 at 8:31 am As an “update” how about this diagram and text on Wikipedia, it appears a little more recent. The text does not appear “improved” either. So, no real changes to it appear warranted according to AGW. Maybe you could explain what makes the old one and the “new” one “ludicrously incorrect”, that might help in discussing what the “slayers” are showing, saying, suggesting, and raising for discusion. Heck, we might even get to a better understanding of where the science actually is at present. 13. PS to PaulM: I start from the same equation as Planck did 100 years ago, but combine with finite precision computation instead of freely invented ad hoc statistics. Statistics is not physics, just imagination, and physical particles have little imagination. • “Statistics is not physics, just imagination, and physical particles have little imagination”. How dismiss 200 years of thermodynamics and physical statistics, with the only clue of a single metaphor: the “not-thinking” particle. 
Funny (what about: Einstein debunked, there is no light speed maximum: photons don’t care about cops and driving speed limits ?). Anything else more substantial, perhaps? 14. Read the second chapter – it’s goofy, not physics. The initial clain to get rid of wave particle duality pretty much floored me, since this aspect has been very well experimentally shown. To accept this assertion means ignoring what you can see with your own eyes (and instrumentation) in a laboratory. A fatal flaw is confusing net energy flow with absolute energy flow – this is in the black body discussion. To say that a colder black body can’t radiate to a warmer black body (he calls this “back radiation”) is beyond ridiculous. Basically, he presents a circular argument without proving his ridiculous premise, throws a bunch of jibberish (maybe not jibberish, but I don’t call it physics) in the middle to make it all seem scientific, and then returns to his unproven assertion that a colder body doesn’t have black body radiation in the direction of a warmer body. Thus, besides claiming no one knows the nature of a photon (as part of an argument against the traditional treatment of blackbody radiation – yet single photon experiments have been run for decades) , he negates the Superposition Principle and relies on some mysterious instantaneous knowledge existing in one body about the temperature and direction of all other bodies in the universe. I think the spook guys would love to have this type of instantaneous directional communication device in their hands. Just to make it more clear, suppose you have two black bodies at different temperatures facing each other, with a shutter over each blocking all radiation. Remove the shutter in front of the colder body an very short time before removing the shutter in front of the hotter one. Then initially radiation would flow from the colder one toward the hotter one, and then reverse direction when the second shutter is opened. • The question is, can a cold body make a warm body hotter? Everything with mass and a temperature radiates. Who disputes that? Imagine a hot body (with an internal or external heat source) and a passive body floating in the vacuum of space. Can the passive body make the hot body hotter? Imagine the passive body gets closer to the hotter…it will absorb more radiation, right? It will get warmer. If there was such a thing as back-radiation heating, then the hot body gets more of it back. Then the bodies get so close together…that they touch. Now the radiation effect is greatly magnified (whatever radiation can do, conduction does much better). Does the hot body, at any time during this process, ever get hotter? Radiation from a passive source cannot make a hot body hotter. • Radiation from a passive source cannot make a hot body hotter. It certainly can, put a thermocouple in a flame and you’ll measure a certain temperature which is lower than the surrounding flame because of conductive losses down the wire and radiative losses to the surroundings. Surround the ThC with a silica tube and the temperature measured will increase due to radiation from the cooler tube. Check out ‘Suction Pyrometers’: • I can’t tell if you’re kidding, Phil. Transport your experiment into space so we can focus only on radiation effects. Then replace the flame heat source with a resistive one so it will work in a vacuum. Now, tell me how the passive thermocouple can increase the temperature of the heated body. 
The only thing it can do is cool the heated body…at various rates and with varying degrees of coupling, sure. But, under no condition can it make the heated body hotter. The passive body is never a source of heating for the source. Never. Now, what does that tell you about Trenberth and Keihl’s energy balance schematic? The earth’s surface is heated by back radiation from passive CO2 and water vapor? • I never kid, your complete failure at understanding the applicable physics, lack of reading comprehension and refusal to read the cited material makes responding to you a complete waste of time! • The topic is radiation, Phil, the supposed mechanism for global warming caused by increasing CO2 in our atmosphere. You love to talk about conduction and convection as if I don’t understand these concepts, but that is a hand-waving distraction. Focus, Phil. We’re talking about radiation…and how a passive body can heat a body with a heat source. I know how a passive body can cool a hot body…let us count the ways. Your GHG theory depends on passive materials heating hot materials. What are you going to do with radiation, Phil. Store it? Delay its transit time to space? You can reflect it, diffuse it, deflect it or focus it. You can’t store it or “back radiate” it to make a warm surface warmer. • You can’t store it or “back radiate” it to make a warm surface warmer It reradiates in all directions – the use of “back” is arbitrary and capricious, and assumes the location of another black body is somehow important. To be correct in what you say, it would have to stop radiating in a particular direction just because there is a black body in that direction – that’s ridiculous. You’re confusing net heat flow with absolute heat flow. A hot black body is in fact warmer if there is a cooler black body radiating toward it, simple because net heat flow is less. Dr Curry may confirm that I’m a definite skeptic, but I’m also a physicist and the linked chapter 2 and the posts here based on it are not even close to reasoned. • Reading comprehension still lacking I see! Missed this did you? And this, the first sentences in the cited reference: “When a bare thermocouple is introduced into a flame for the measurement of gas temperature, errors arise due to the radiative exchange between the thermocouple and its surroundings. In the standard suction pyrometers a platinum-rhodium thermocouple, protected from chemical attack by a sintered alumina sheath, is surrounded by two concentric radiation shields.” Yes Ken we are talking about radiation but unfortunately you don’t understand it. • If you are checking things out: try the 2nd la of Thermodynamics. • Phil, apparently you are confused by the slowing of a flux as you are not actually measuring the temp of the hot body, only the heated body, the thermocouple. 15. The atmosphere is in thermodynamic equilibrium. There are slight variations which are caused by certain cyclical processes which the proponents of AGW mostly refuse to accept. CO2 concentration is not one of them. John Tyndall did not prove a damn thing about CO2 absorption. His equipment was far too primitive to distinguish between absorption, reflection, refraction, diffusion, scattering or anything else. He incorrectly concluded that all energy missing between the source and the pile in his half baked experiments had been absorbed by CO2. Above all he ignored Kirchhoff’s law. The conservation of energy falsifies the “greenhouse effect” because as per Kirchhoff’s law that which absorbs, equally emits. 
This fact is absent from Tyndall's ramblings and exposes him for what he was. Nothing traps in heat, quote: "In either case, the characteristic spectrum of the radiation depends on the object and its surroundings' absolute temperatures. The topic of radiation thermometry for example, or more generally, non-contact temperature measurement, involves taking advantage of this radiation dependence on temperature to measure the temperature of objects and masses without the need for direct contact." According to Kirchhoff's Law any substance which absorbs energy will equally emit that energy. CO2 has a lower specific heat capacity than O2 and N2. The atmosphere, which is 99% N2 and O2, is in relative equilibrium. Therefore adding more CO2 in trace amounts to the atmosphere will simply force the CO2, with its lower specific heat capacity, into equilibrium with the rest of the atmosphere. The higher the concentration of CO2, the lower the overall atmospheric temperature will become. "A simple reproducible experiment" "Specific Heat Capacity of Gases" AGW theory requires that we suspend our knowledge of this obvious fact and accept that it is the 0.0385% CO2 which forces the other 99% of the atmosphere into equilibrium with itself. It is the same logic as claiming that by taking a pee in the ocean, you have warmed the ocean. When in fact your pee has been chilled by the ocean. It's called semantics. It is interesting that Judith has played the appeal-to-authority card. It is also interesting that those who appeal to the authority of Tyndall and the RS (7Gt/1Gt human v's natural CO2!) fail to acknowledge that they are relying on primitive, out-of-date, 150-year-old "science" which has not even been critically re-examined. Anyone who quotes John Tyndall as the man who proved the "physics" of the "greenhouse effect" displays nothing short of sheer ignorance. It is the ultimate in the bogus appeal to authority. John Tyndall was a fool and a fraud. Above all he was an insider at the Royal Society. Tyndall's experiments have as much value as Sir Paul Nurse's implication in his recent Horizon "program" that natural processes account for 1 Gt CO2 while humans account for 7 Gt CO2, i.e. NONE. So can we quickly dispense with the pseudo-science of John Tyndall and get back to reality? That appeal to authority was for yesterday's people, those who had faith in the integrity of science, scientists and trust in the Royal Society. Those people are long gone. (Last seen heading south on highway 51 with Trust in the passenger seat and Faith at the wheel!)

• You can see why I am personally not taking this on in any detail; it is just endless. You incorrectly state Kirchhoff's Law: α_λ = ε_λ. It says that at a particular wavelength, the fractional absorptivity equals the fractional emissivity, where the fractional part is relative to the intensity of black body radiation at that wavelength. So if an oxygen molecule at temperature 200 K receives a bunch of solar radiation in the ultraviolet bands, it will also emit in the ultraviolet bands, but because the oxygen molecule is relatively cold, there is almost no actual energy emitted by an oxygen molecule with a temperature of 200 K. Your next sentence is a mistaken interpretation of very basic elements of the kinetic theory of gases. And on and on . . . My point in not rebutting all this personally is that I would need to spend an hour on each incorrect sentence to try to educate people that don't already understand this.
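The quantitative point in the reply above, that a 200 K body emits essentially nothing in the ultraviolet even though its spectral emissivity there may equal its absorptivity, can be illustrated with the Planck function. A minimal sketch (Python; the 0.3 micron wavelength and the 5800 K "solar" temperature are representative values I have chosen for illustration):

```python
import math

# Planck spectral radiance B(lambda, T) = 2*h*c^2 / lambda^5 / (exp(h*c/(lambda*k*T)) - 1)
h = 6.626e-34   # J s
c = 2.998e8     # m/s
k = 1.381e-23   # J/K

def planck(lam, T):
    x = h * c / (lam * k * T)          # dimensionless exponent
    return 2.0 * h * c**2 / lam**5 / math.expm1(x)

lam_uv = 0.3e-6   # 0.3 micron, in the ultraviolet

B_sun  = planck(lam_uv, 5800.0)   # roughly the solar photosphere temperature
B_cold = planck(lam_uv, 200.0)    # a cold atmospheric molecule's temperature

print(f"B(0.3 um, 5800 K) = {B_sun:.3e} W m^-3 sr^-1")
print(f"B(0.3 um,  200 K) = {B_cold:.3e} W m^-3 sr^-1")
# The exponent h*c/(lambda*k*T) is about 8 at 5800 K but about 240 at 200 K, so the
# cold body's UV emission is suppressed by a factor of order e^-230: utterly negligible,
# even though its emissivity at that wavelength equals its absorptivity.
```

Equal spectral emissivity and absorptivity therefore do not mean equal emitted and absorbed power; the Planck weighting at the body's own temperature does the rest.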
Roy Spencer and scienceofdoom have already tried. And there are hundreds of such sentences to rebuke. • I fully understand agree on your point on this issue. I can not understand your thoughts/position on “post normal science” and why “climate scientists” opinions should be given an preference in regards to governmental policy. • when did I EVER say that scientists should be given a preference in regards to governmental policy? I have been very actively fighting against that! • Then I have misunderstood your thoughts and the meaning applied to post normal science. • yes, that is my great frustration. • Post Normal Science, or Special Pleading? • I have to admit to misunderstanding it too, in that case… • Judy, The tail does not wag the dog. E in = E out. DITO darling. • no, energy in does not equal energy out. • Yes it does! • No, energy in does not necessarily HAVE TO EQUAL, on virtually any time scale equal energy out. Does energy never get used? Does energy never get taken out of the system “permanently”? Of the energy taken out of the system, what determines when it is put back into the system, and how, as what? • Derek, Please consider the principal of the “conservation of energy”. “Does energy never get used?” I believe “converted” is the word you are looking for. “Does energy never get taken out of the system “permanently”? Sorry, NO COMPRENDE ? ? ? Taken out by what Derek ? • Will, Sorry, NO COMPRENDE ? ? ? K&T “timescales”, do NOT compute Will. Hence “permanently”…, sedimentary rocks. Re “converted” – does that mean “some” will never be returned to escape from the “system” as heat (energy lost to space)? “Life”, and “work done”, being the obvious examples. • Derek I have made the point about the energy that does not leave the system in my paper here: I think you know what I mean when I say E in = E out. Apart from hair splitting, do you actually have a point? • E in = E out. That’s an equilibrium condition, Will. Given that we’ve been doubling the amount of CO2 we add to the atmosphere every three or four decades for the past century or more, we are nowhere near equilibrium. Nor will we be until (a) we hold constant the amount of CO2 we add each year and (b) nature catches up. Expect (b) to happen roughly three decades after (a). But don’t expect (a) to happen until we can no longer afford fossil fuel. And at that point (a) won’t happen anyway since our CO2 production will decline thereafter rather than holding steady. David Archer believes things will remain hot thereafter. I disagree: I believe that after we stop emitting CO2 the temperature will plummet even faster than it has been rising due to the way equilibrium works. Conceivably by 2150 we’ll be in a Younger Dryas type ice age, though at that point we’ll surely have figured out some way of preventing that. • Will, earths atmosphere is not a closed system. The sun adds energy to it, which it then rediates to open space. Hence E in = E stuck in system + E out. • That is probably the most enlightening statement in this thread!!!! • Ein – Eout = delta global energy storage (mostly as heat in the oceans and atmosphere)- only at top of atmosphere • Energy is also stored with chemical changes, plant growths, animal grows … and stored energy dissipated thru plant deaths, fossil fuel uses … 16. To Harold: Yes physics is very goofy, in particular particle statistics physics. I start from the same wave equation as Planck and use a finite precision dissipative effect instead of jibberishy statistics. 
So what I do is less spooky than what you hint at. It is remarkable that in a discussion about the “greenhouse effect” physicists have nothing to say. To me it is a physical phenomenon that physicists should be able to grasp, but it seems they don’t. • I guess you didn’t get it. I’m a trained Physicist, now retired. I was pretty sure I just said something about the greenhouse effect, and particularly pointed how a simple thought experiment shows how wrong your theory is. I don’t intend to try to convince you, but my thought experiment should convince almost any reasonable idiot your theory is wrong. I don’t use different standards for either side of AGW. I have fairly rigorous standards,, which you have failed, and most ot the AGW papers also fail my standards. Sloppy work on the AGW’s crowd’s part doesn’t excuse sloppy work on the anti-AGW’s side. As for dragging Tyndall into the discussion and how the physics hasn’t been looked at, try reading some of Dr. Earl W. McDaniel’s and others’ books from decades ago on details of atmospheric excitation and radiation. Dr. Curry – FYI, Dr. McDaniel taught Physics at Georgia Tech, and had a great sense of humor. 17. “Statistics is not physics, just imagination, and physical particles have little imagination.” If there’s a more rigorous, more unchallengable, more awe-inspiring development in physics than the development of statistical thermodynamics I am not aware of it. 18. What are the main results of statistical thermodynamics with some form of informative content? • Claes This is a very interesting thread. If it wouldn’t be too much trouble, would you mind using the ‘reply’ button to respond to comments. It can be found next to the name and date/time of the person that you are replying to. It positions your response at the correct point in the blog and makes it easier for us lurkers to follow the argument. Many thanks. 19. Having read just the first couple pages of the second chapter, I know this argument is going to ‘creative’. The first argument is rather interesting. Blackbody radiators absorb all frequencies of light (definition of ‘black’), but only emit radiation with a specific spectrum determined by the body’s temperature. I think that’s fairly standard physics canon. But the train gets off the tracks pretty quickly after that. The author makes the statement, ‘The net result is that warmer blackbody can heat a colder blackbody, but not the other way around.’ which is obviously wrong. But why? Before this gem of a statement, the author goes through several analogies (which aren’t as informative as equations) proving to himself that only high frequency light that is not being emitted by the blackbody can increase the temperature of the body. This energy is absorbed, then Stokes shifted to lower energy by coupling to internal modes of whatever form of matter we’re discussing. In molecules, mostly vibrational and rotational degrees of freedom, along with intermolecular collision, play this role. The implied converse of this statement is that energy absorbed by the warmer blackbody from the colder blackbody is not outside of the frequency range of the warmer blackbody’s spectrum, therefore it doesn’t add heat and doesn’t increase the temperature! The question then becomes, if we are talking about blackbodies that absorb ALL frequencies of light, what happens to the energy in the lower frequencies absorbed by the warmer blackbody? Surely there is energy in those photons/waves. 
Because the warmer blackbody is, in fact, a blackbody, it MUST be absorbing those lower frequencies. What happens to that energy? I think most of us know that according to the conservation of energy, those lower frequencies absorbed by the warmer blackbody increase the temperature of that blackbody, even though the radiation was emitted by a colder blackbody. Kirchoff’s law is the mathematical manifestation of this fact. The emissivity of the blackbody in thermal equilibrium equals its absorptivity. Therefore, the thermal equilibrium of a blackbody can be shifted by changing its absorptivity in ANY SPECTRAL REGION, not just the high frequency region. This can be accomplished by increasing the inward flux of low frequency radiation due a nearby colder blackbody, as is the case with atmospheric greenhouse gases in the case of climate. So, I don’t doubt that the author’s math and equations are correct. Unfortunately for him, it’s the interpretation of those equations, along the lines of high pass filters and classrooms, that is flawed and ultimately leads to the incorrect conclusion that the greenhouse gas can’t exist. Not even that it doesn’t exist, but that it can’t. It’s brilliant in its simplicity, really. I would like to see this author handle the fact that we clearly observe a completely isotropic cosmic microwave background surrounding everything corresponding to a 4K collection of intra-solar and inter-stellar gases. I think that fact is fairly irrefutable proof that this guy is totally wrong. Moreover, why are we continuing to clamor to convince people like this that they are incorrect. Anyone who put enough time into convincing themselves of this type of theory after many, many attempts of others to ‘disprove’ them is not going to be swayed by observational evidence, proof of principle experiments or even reason. It’s better to not give their theories credence by taking the time to ‘debunk’ them. Comments welcome. • Comments welcome. You must be aware that the authors are partaking in these discussions on this thread. They’ve already posted a number of comments. So why are you referring to the authors in the third person? Take a glass of water at 99 C and surround it with a dozen glasses of water, all also at 99 C. Does the water in the glass in the center get warmer? Does it cool off more slowly than it would if the other glasses were not present? • The net transfer of energy between the 99C glasses of water will be zero but all will be radiating energy at a rate appropriate for a single glass of 99C water. Again, all radiating but net transfer zero. As for cool down, yes the central glass will cool slower. I suppose you could say that the other glasses are insulating it. What is happening mechanically is that the outer glasses are exposed to a cooler room so the net energy flow between them results in heat loss to the room. The inner glass then is exchanging energy with an incrementally cooler glass of water so it experiences an incremental net loss of energy. Until final equilibrium is achieved, the inner glass will remain warmer than the outer glasses and the outer glasses will remain warmer than the room. • So at what point does the glass in the centre become warmer? Answer: At no point. Take note Roy Spencer and Science of doom, its called ENTROPY. Without a continuous energy input you have no net increase from so called “back radiation”. • But there is a continuous energy input into the Earth’s climate system; it is provided by the Sun. 
• “But there is a continuous energy input into the Earth’s climate system; it is provided by the Sun.” There is also continuous darkness. In reality, any given spot on the surface directly under the solar point receives at most 25% of the continuous energy input from the sun in any 24 hour period, but generally much less. You know that night/day warming/cooling thing? • Wow, this is really simple physics. I’m not a all sure how this can be so hard to follow. In the water glass case, there is no steady energy input to the center glass of water so it will loose energy to the environment. The surrounding glasses increase the time that it takes to cool by radiating heat ‘back’ to it as they also cool by radiating their energy. To increase the temperature of an warmer object by a cooler object, the warmer object must have an continuing energy input that must be dissipated. In that case, the presence of the cooler object near it radiating part of its energy toward the warmer object results in that energy being added to the total that the warmer object must dissipate. The temperature of the warmer object must increase again to radiate that somewhat larger total energy input. Of course, if you insist there is no such thing as a photon or that two streams of photons cannot pass each other traveling in opposite directions, we probably will never be on the same ‘wavelength’. I’ve spent my life around electronics, radio, and nuclear physics. Maxwell’s equations rock for many aspects of electronics and radio but they are just very handy tools. Because Maxewell’s math works a good share of the time does not mean it defines reality. It is not very useful for use when counting gammas to determine the activity level of a radioactive source. Calculations based upon photons and nuclear interactions are the tools that work there. You should use the tool that suits the job at hand. I believe the photon view is the correct one for radiated energy discussions. This is basic physics and basic engineering stuff. However, do not take this to mean that I believe doubling of the atmospheric CO2 is going to be a big problem. I don’t. I just prefer simple physics not be twisted to make a point. • jae Could you have picked a more complicated example, if you’re being rhetorical? Dr. Curry’s more creative students will no doubt be asking about partial gas pressure, room temperature, convection, conductivity, where the lights are in the room, and will you refill the glasses as evaporation causes volume change, to start with. ;) A glass ingot at 99 C, I could understand. Especially if you included conditions like, “in a closed system initially at STP,” and “surrounded in all directions with no significant gaps,” and “all ingots behave as uniform spherical black bodies,” etc. Does the water in the glass at the center get warmer? Unlikely, though Dr. Curry’s students could contrive extrinsic conditions to make it so, I am sure. Does it cool more slowly? Likely, for most sets of extrinsic conditions, I think Dr. Curry’s able students will find. More to the point, could you expand on your point, please, as it elludes me. (Though I’m sure Dr. Curry’s students would be able to explain it to me.) • I think we have to be very careful about how we set this problem up. We are using language of ‘external energy sources’ versus the system of interest, the surface of the earth in most of this discussion. In this case, the outer glasses ARE an energy source for the central glass. 
They just happen to be at the same temperature as the central glass, in stark contrast with the earth-like situation in which the sun has a dramatically different temperature from the earth. So let’s assume the ‘other’ glasses are in a circle around the central glass and that the glasses can only emit energy out into the plane that contains all the glasses, for simplicity. Being all at 99 C, each glass will have the same emissivity and emit the same spectrum. It is very likely that each glass can absorb most, if not all, of the energy emitted by the other glasses. Now, to me, there seem to be two parameters that matter the most: 1) the temperature of the surrounding air and 2) the distance of the 12 ‘other’ glasses from the central glass. The temperature difference between each glass of water and the surrounding air will determine the difference between the energy absorbed by the central glass and the energy that it gives off to the air by conduction. The distance between the 12 ‘other’ glasses will determine what percentage of the emitted energy from those glasses can be absorbed by the central glass. The closer the ‘other’ glasses are to the central glass, the more of the emitted energy the central glass can absorb. The further away, the smaller the percentage of energy that can be absorbed. There *should* be an air temp and distance from the ‘other’ glasses at which the central glass is taking in more energy from the surrounding air than it is giving off. In such a case, the central glass would increase in temperature as per the conservation of energy. It would be an interesting experiment to set up at least. • maxwell …..”It would be an interesting experiment to set up at least.”…. It’s been done. Sounds like the proof of the zeroth law of thermodynamics. • Bryan, ‘Sounds like the proof of the zeroth law of thermodynamics.’ I don’t think this experiment would prove that there is no thermal motion at 0 K. I mean, that is the zeroth law of thermodynamics. Can you please explain a bit more what you are saying? • maxwell | There are a number of glasses at the same temperature “Being all at 99 C, each glass ” The zeroth law of thermodynamics may be stated as follows: If two thermodynamic systems are each in thermal equilibrium with a third system, then they are in thermal equilibrium with each other. In the early days this was assumed but later questioned. It had to be experimentally determined and since we already had Laws 1, 2 and 3 it was called the zeroth. Perhaps you are thinking about law 3 which is about absolute zoro • I have to agree with Zorro.. er, with Bryan, I mean. Zeroth law well-established, a black body among bodies of equal temperature will not increase in temperature, though this says nothing of how quickly each will lose temperature or in what pattern. If the air were above the temperature of the glasses, or if there were certain complex salts that underwent a physical change in solution in the central glass, if it contained fissile materials in high enough concentrations, if exothermic chemical changes happened in the ‘water’, if the air pressure were suddenly increased compressing tiny soda bubbles (or at 99C, nearly boiling so water vapor bubbles), if the glass were in an atmosphere of pure reactive metal particulate suspension (potassium, say), if there were a series of lasers deflected toward the central glass, or electric currents, or sound waves.. there are all sorts of extrinsic conditions that might raise the temperature of the central glass.
Which, as I said, a complex example.. and I still don’t follow the point of it originally being posted. • It looks like a jumped the gun under some confusion. Sorry for that. • How about debunking quickly this way – two black bodies at different temperatures separated by a perfect reflector. With the reflector in place, heat travels from each black body, bounces off the reflector, and returns to the originating black body. Under the theory that was proposed in chapter2, when the mirror is removed, the heat from the cooler black body must still return to the cooler black body – it has to act as if there is still a reflector in place, but not so for the hotter black body. A ridiculous result. Bad physics… • No reasoned rebuttal yet? • Harold, before quantum physics and the idea that a photon was an actual particle there was wave physics that was, and still is, experimentally proven. Those physics experiments described scattering, reflection, interference, and cancellation. Why would this section of physics suddenly become null just because you apparently have forgotten it? There are several possible explanations of why a cooler body would not heat a warmer body contained in this PROVEN area of physics and which are contained in the correct energy equations that give a NET energy flow. • When creating the waves and waves of orcs in the big battle of Gondor CGI, one of the directors apparently asked one of the programmers, “That one orc, why’s he running the wrong way?” The programmer answered, “Looks like he panicked.” Panic didn’t make the waves of orcs not waves of orcs, or invalidate the wave equations, or nullify the physics of Middle Earth. And while waves of panic can be observed in mobs, a one-orc wave isn’t really well-modeled by wave equations. Which are, after all, only proven mathematical models, not themselves mathematical axioms. • Bart R, we don’t need no stinkin’ wave EQUATIONS!! We have stinkin’ empirical data. That’s all I am asking of the cold object heating, or slowing the cooling of, the warm object by radiation. Empirical data. One of the thought experiments I really like is the one where the heated object is surrounded by a cooler sphere which is thermostatically controlled. My imagination tells me that the radiation from the sphere is cancelled by the radiation from the heater leaving the net to be drawn off by the cooling system of the sphere with nary an effect on the heater itself. I am told that this cool sphere will actually heat the heater. If the heater is made hotter by the cooler sphere, its radiation should be elevated measurably. If it isn’t measurable I really don’t care with respect to the climate disagreements. • Right, right. Data and thought experiments, very nice. But then why were you bringing up data and thought experiments in a discussion of wave equations, again? I’m not sure I follow the analogy built into your thought experiment. Which is the thermostat? Which the air conditioner? Help me with my own thought experiment: There is a mall full of inventory that is replenished through another channel, with customers entering and leaving all the time through multiple sets of doors. The doors are designed to allow customers in without hindrance, but to slow some customers on the way out by redirecting them randomly through the mall (the mall hopes to increase sales this way). 
There’s a ‘sellostat’ set by the mall manager designed to set how much the doors slow outbound customers, but the manager’s salary is set by sales figures, and every day he gets greedier. Can you see where my thought experiment is going, and maybe suggest improvements? • Uhh, Bart R, where did I suggest a thought experiment? Haven’t the vaguest on yours. I’m a simpleton remember? By the way, doors are NOT a way to allow people in without hindrance. A strip mall does that. • kk To your first question, that’d be: “One of the thought experiments I really like is the one where..” And one simpleton to another, remember what? I like your idea about using a strip mall rather than doors. Much clearer, and suggests better parallels. So, strip mall manager welcomes all buyers to his locale, and takes some steps (advertising posters facing people as they walk away from the strip mall, principally, but also shills who block the way out and chat up secret sales, and the smell of food from the strip mall’s food vendors) to hinder buyers as they leave. At first, the manager sets out only small hindrances, but he believes that they work because the theory of advertising tells him so, and his mall sells more and more as he puts more hindrances up, and he sees no reason to stop putting up hindrances since he’s rewarded by profit. He’s so rewarded by profit, he pays off the local officials to allow him to put up more posters, and muscles out any competing shills trying to get buyers to leave for their strip malls up the street. So, do you think the mall will be more crowded, the more the manager hinders buyers from leaving? • And I continued saying I would like it done as an experiment. I still would. I do not want to discuss it as a though experiment. It has been done to death. • kk Right, but all experiments start with design, with an analogy or meaning to the model built, with some hypothesis they test… And I’m still not sure what yours tests. Could you expand on that? • I wouldn’t say test so much as measure. It would seem that climate science, and maybe other fields, agree that there is backradiation and that either it slows cooling of the hotter body or heats it. I want at least one experiment, preferably several differing ones, that quantifies that relationship. If it slows cooling, by how much. If it heats the body, by how much. Does there appear to be conditions that increase or decrease the effect we need to research more… Why am I so adamant about real experiments? Because thought experiments are limited to the variables that we put into the experiment, you know, just like models. If we do not know about it or do not know it well enough to get a reasonable ball park figure, or cannot convert it to mathematics at a resolution that is useable, we have no way of knowing whether the results of our thought experiment is valid. The world has a reality that we need to include and we do not know all that reality. Look at the empirical experiments that detail an effect very well, yet, we do not see the result of the effect generally in reality due to offsetting effects. I will grant that there needs to be a lot of thought put into designing the experiments for these same reasons and there is where we find a good use for thought experiments. If we allow contamination the physical experiment will not be giving us results useful for the original purpose. 
Thought experiments can help us design the experiment to try and exclude contamination so we have a higher certainty of measuring what we think we are measuring. • kk I’m aware of multiple real measurements cited in this topic and elsewhere on this blog using advanced and proven equipment for decades. I’m aware of multiple real experiments cited in this topic and elsewhere on this blog and in countless other sources. On its face, the phenomenon of reflected radiation is so everyday commonplace that it would take extremely strong evidence, and with the addition of so much experimental proof, much better rebuttal than has yet been offered on these pages, to credit your words, “I want at least one experiment..” as anything but flat-out, and excuse me for being blunt, lie. • Bart R, when did Backradiation become REFLECTED radiation?? This is something I definitely missed along with 99.99% of papers and work I have never read that are obviously inexistence in spite of my ignorance. Please note, I am NOT trying to claim there isn’t literature, only that I am extremely limited in my exposure to the literature and reality. • kk You get the distinction between reflected radiation and back radiation? Means you have an advanced and subtle grasp of the topic, and should be able to handle the things talked about in the blog by people who do serious measurement, experimentation, analyses and interpretation of these things for a living. (Which would be not me.) You want quantified relationships, and that’s all well and good. Check out the slightly unpleasant quote referenced in or the very nicely put lower down this page for context and background about quantified measurements, and some of the problems with experimental interpretation present in the subject. I frankly don’t believe we are going to get to widely accepted experimental results along the lines of your suggestion any time soon, unless someone builds a pair of hermetically-sealed IR-transparent domes the size of Nebraska and experiments with changing their CO2 concentrations repeatedly under differing conditions of sunlight. Too much room for waffle, and too much brute force logic. And even then, questions of applicability will bedevil us. What we need is a guy with a teacup and some milk, and the ability to clearly explain so anyone can understand why the milk particulates suspended in it move.. erm, sorry. Wrong experiment. But you get my drift? • Bart R, I have just a thin veneer of knowledge and none of the math skills making it a very thin veneer. • Photon? Why did you switch frames? I said heat, nothing about photons – I’m using a classical EM frame. Maybe you thought I was talking about photons, since the waves would have to suddenly turn around and return to their source under the proposed theory, which doesn’t make sense to you. That’s my point, the proposed theory doesn’t make physical sense. • Harold, how does the HEAT travel without waves or particles?? • Sorry Harold, I got lost. OK, back to waves. The classical wave experiments show that waves can interfere, cancel, and augment (sorry for the layman’s terminology). Interference partially cancels or deflects, cancellation negates and augmentation adds. What happens to the energy Harold?? This has been shown to happen in experiments. Doesn’t it happen out there in the atmosphere? The thought experiment is that 2 bodies are radiating against each other. The colder body will be radiating at a lower energy peak, but the warmer body will be radiating at that wavelength also. 
Why won’t the waves at the same frequencies cancel or interfere? At the quantum level I am even less adept, but, I understand that the particles need to have a correct energy state to absorb energy. What happens if the bodies do not have the correct open energy state to absorb the wave/photon carrying the energy? Won’t it be deflected/reflected instead? Isn’t this a more reasonable explanation of what we see in the atmosphere between GHG’s and the surface and each other for that matter? Finally you suggested the wave would HAVE TO RETURN TO ITS SOURCE. The wave would be deflected in another direction, although it would seem that it could be deflected back to the originating body. What amazes me is that there is all this partially understood and misunderstood knowledge being tossed around. Yeah, kinda like me. • kuhnkat, with reference to your comment about e/m waves from several sources interacting with each other, maybe you should take intensity and phase (and polarisation?) into consideration too. It may help to move your thought experiment on if you gave some consideration to a practical experiment carried out around 1800 by English scientist Thomas Young, described in “Instruction Manual and Experiment Guide .. ADVANCED OPTICS SYSTEM .. Experiment 4: The Wave Nature Of Light”. Alternatively you could set up your own experiment using a candle and some cardboard sheets with slits cut in them. If you’re interested, this interaction of e/m waves from a single source taking different routes to a common destination can present a surprising problem for Line-of-Sight (LoS) radio communications links. Although the optical path from transmitting to receiving aerial may be unobstructed, the radio signal can be reduced (even to zero) as a result of cancellation of the signal travelling over several different paths. I’m sure that Joel, the thread’s resident expert in all things to do with theoretical physics, can explain it all in simple terms far better than I. He must do it all of the time when lecturing to his RIT students. Best regards, Pete Ridley • Pete, to be honest, I find kuhnkat’s posts quite painful to read. He understands just enough about the existence of interference to be led astray into a variety of completely wacky conclusions. First of all, interference occurs in only a very carefully prescribed set of circumstances. The light has to be of the same wavelength and to be “coherent”, which means that the waves are in lock-step with each other. Also, the geometry matters. Waves traveling in opposite directions (even assuming the coherence and all) don’t cancel each other out except at very specific locations separated by distances of half a wavelength, which means on the order of microns for what we are talking about. In between, they add together constructively. The result is a standing wave, such as is seen on a guitar string. So, I really don’t see anything useful coming out of kuhnkat’s ramblings. They are just an attempt to turn the nonsense that we know Claes and the other Slayers are spewing into something intelligible. But, you can’t produce sense from nonsense by adding more nonsense. • Hi Joel, I felt confident that a top lecturer like you would be able to present a complex subject like e/m wave interactions in a simple manner. Well done, but can you please try to avoid unusual words like “coherent” and concepts like “standing waves” which might confuse us simple lay people. Best regards, Pete Ridley. • Pete, (1) The adjective “top” as in “top lecturer” is yours, not mine.
(2) I gave brief descriptions of what “coherent” and “standing waves” mean. However, I also wanted to use the correct terminology so that people can easily look on the web to find more detailed description. For example, here is the Wikipedia page on coherence: and here is their discussion of standing waves: • The problem was, Joel, that Ridley turned to his library to look it up. But he couldn’t find either “coherent” and “standing waves” in his dog-eared copies of The Elders of Zion and Mein Kampf. So it’s good that you’ve given him links to look the terms up. The other problem is, he won’t. Like Kuhnkat, Ridley uses ignorance as a war club to bludgeon his enemies. Pete also prefers using his time and bandwidth to search for Holocaust deniers, Neo-Nazis, and Jihadists like Daniel E. Michael to quote. (Did you READ the Michael letter Ridley quoted yesterday that ends with “Death to America!!!”) • Joel, As the earth emits a relatively continuous band it emits the same wavelengths that are emitted by the GHG’s. While I readily agree that the amount of interaction is probably quite small, if we toss out enough minimal influences we make other amounts larger. (one of many issues with models) How about an actual experiment to measure the backradiation effect. Something like a tube with earth at one end and a short wave source at the other. Use at least two runs, one with atmospheric gasses with no GHGs and one with GHGs computed to give the actual backradiation of a column in the open atmosphere. Measuring how fast the earth is warmed with and without GHGs should give a rough idea of how much the backradiation effect is. Or, has this been done and can you point me to the paper?? Simply shining IR through a tube of co2 tells us little about the effects of the radiation emitted by that co2 or h2o or ch4… on the ground. For the truly anal we could use differing types of material such as granite, dirt, wet dirt, loam, sand… to see how the effect is modified if there are measureable differences. Actually a third run with close to a vacuum would be good to show that there is no difference between non-GHGs and a vacuum insofar as the rate of warming of the surface. That is, there is negligible backradiation from non-GHGs matching their negligible absorption. This is the type of straightforward experiment that MIGHT convince some sceptics and deniers that there really is a measurable, significant in relation to the earth system, increase in warming speed. It should be able to clarify which of the ideas of no effect, slows radiation from the earth, or warms the earth is correct. I would note that some significant warmists apparently believe there is a real warming. An actual series of experiments should be able to sort this mess out!! It is really silly to have all these conflicting discussions over the number of angels that can dance on the head of a pin when we should be counting them with electron microscopes or other detectors. (well I guess there is the issue of finding the pin they are dancing on or luring them to our dance) • Kuhnkat: I don’t even understand your experiment…and I don’t really see why scientists should waste their time running it. For one thing, the basic physics of the radiative transfer in the atmosphere and specifically radiative forcing of CO2 is well-accepted and well-tested science by everyone who has even a small modicum of respect within the scientific community (e.g., Roy Spencer and Richard Lindzen accept it). 
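For readers who want the number behind that statement, here is a quick sketch of the widely used simplified expression for CO2 radiative forcing from Myhre et al. (1998); the concentrations below are round, illustrative values:

```python
import math

def co2_forcing_wm2(c_ppm, c0_ppm=280.0):
    """Simplified CO2 radiative forcing (Myhre et al. 1998), in W/m^2 relative to c0_ppm."""
    return 5.35 * math.log(c_ppm / c0_ppm)

print(co2_forcing_wm2(560.0))  # one doubling from pre-industrial: about 3.7 W/m^2
print(co2_forcing_wm2(390.0))  # roughly the concentration around the time of this thread: about 1.8 W/m^2
```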
So, the issue comes down to feedbacks and that is not something that can be settled by such a simplistic experiment. For another, I am under no illusions that we can ever convince “skeptics” who doubt such basic tenets of science to become AGW believers. Such people are like Young Earth Creationists: they don’t disbelieve AGW because they doubt the science; rather they believe any bogus nonsense attacks on the science because they are ideologically opposed to the actions that follow from addressing AGW. If you guys can’t even comprehend and accept basic science about which there is no serious controversy whatsoever and instead believe nonsense, how am I ever to convince you on the issue of feedbacks and climate sensitivity, which actually require weighing the balance of the evidence? It is like telling me that if I can only get a Young Earth creationist to abandon the belief that the earth is only 6000 years old, he will actually fully accept evolutionary theory…Ain’t gonna happen! • Hi Joel, I agree with your comment (yesterday at 9:09 pm) about the heat retaining effect of water vapour and some trace atmospheric gases preventing some of the IR energy that is emitted by the earth from radiating back out unobstructed (AKA the Greenhouse Effect) and that humans adding a tiny amount of CO2 could result in a small (beneficial?) rise in temperature. Ias you say there are not many respected or knowledgeable scientists who consider otherwise. On the other hand I’ll be very very surprised if you can provide a sound analysis of your own that convinces true sceptics that the balance of evidence indicates that a global climate catastrophe looms as a result of our continuing use of fossil fuels. Rest assured that the use of fossil fuels will continue for many many decades yet and all of the scare-mongering by the power hungry, the UN, the politicians and the environmental activists will not change that. I still haven’t seen your refutation of the analysis carried out by Roger Taguchi showing that the feedback effect is negligible. Was it too hard for you? OK, here a simple question. If positive feedback due to increased water vapour arising from a slight increase in global temperature due to our use of fossil fuels is able to cause a global climate catastrophe in the next 90 years why didn’t such a disaster happen during the Roman warming or during the MWP? I’m sure that you can explain that in simple enough terms for lay people like me to understand, but please don’t try to argue that the rate of warming now is far greater than ever experienced during the past 300M years or that Mann was correct and there was no such thing as the MWP. BTW, have you started reading “The Hockey Stick Illusion” yet – no, I thought not. Best regards, Pete Ridley • Joel, the experiment is to see how fast the material warms with and without ghg’s in the atmosphere giving an empirical figure for the effect of backradiation in a carefully controlled experiment. Why is this important? Because deniers like me say there is none. Luke-Warmers and warmers believe in varying amounts of slowing of the surface cooling, and some alarmists say the backradiation actually raises the temperature of the material above the level that the short wave can make it. Even if everyone suddenly went sane and decided there was only a reduction in the rate of cooling (faster warming also) it would be good to actually quantify by empirical experiment exactly what the magnitude of the effect is. 
You say: Yet, that statement says NOTHING about the magnitude of the effect on the earth itself. I am sure you agree that different materials would react differently even if your theory is correct. Being able to put constraints on the effect in the models would be a real contribution outside of just making some people happy that their position was proven. The Climate Science community appears to me to be adverse to the drudge work of detailed science. It is time they stopped talking about saving the earth and started doing the real work necessary to prove the hypotheses and giving us more information on what may need to be done. This one paragraph shows how hopelessly confused you are about something that is just basic physics! You make this distinction between “rais[ing] the temperature of the material above the level that the short wave can make it” and “a reduction in the rate of cooling”. There is no such contradiction between those two pictures: CO2 slows the rate of cooling and, in doing so, it causes the temperature of the earth to be warmer than it would be in its absence because the earth is heated by the sun and its steady-state temperature is determined by the balance between the rate at which it receives energy from the sun and the rate at which “cools itself” by sending energy back into space. The fact that you have been unable to comprehend this shows how you are unwilling to allow yourself to comprehend the most basic of scientific principles. The magnitude of the radiative effect of CO2 is not under debate in any serious quarters. Roy Spencer and Richard Lindzen and the rest of the scientific community all agree it is 3.8 W/m^2 (+/- 5%, or at most 10%). The magnitude of the resulting temperature change is still under debate, but this involves the question of feedbacks, which alas can’t be settled by any experiment smaller than the entire scale of the earth. (Which is not to say we can’t learn a lot about feedbacks from empirical data. In fact, we can and have. See, for example, here: ) • Joel, You have very bad radiation, the Earth cooling and the energy content concept here. 0.04% CO2 in the atmosphere has absolutely minimal energy content in it when comparing the energy content of the atmosphere (orders of magnitude more than CO2) not to mention the LW radiation energy from the Earth surface (orders of magnitude larger than atmosphere). I guess you know the mathematical differention of infinitely small -> 0, thats CO2 capable of warming the air -> 0, warming the Earth -> 0 and CO2 capable of slow cooling -> 0. CO2 cooling warms the Earth is absolutely absurd if you have any energy concept at all. Warming and cooling are mainly due to huge amount of water presents on the Earth. The movement of water causes most weather changes. I would advise you to appreciate the energy contents in them and study the physical properties of water, CO2 and the energy they are involved or you will never learn and keep on misinforming the general public wasting your life unless you have an agenda in order to stay on the gravy train. The Earth receives the Sun energy, stores (chemically and physically) some of it, reflects some of it, refracts some of it, conducts some of it, convects some of it, radiates (naturally including decays, volcano eruptions, human consumptions of food and fossil fuels) some of it. The Sun itself also in an ever changing state of emitting energy. There is no steady state temeperature, only instantaneous temperature. 
The fact that you have been unable to comprehend this shows how you are unwilling to allow yourself to comprehend the most basic of scientific principles of energy, cooling, heating and radiation. • “Warming and cooling are mainly due to huge amount of water presents on the Earth” should be amended as “Warming rate and cooling rate are mainly due to huge amount of water presents on the Earth” • Sam NC: It would have been more precise of me to talk about the rate at which energy is emitted or absorbed by the earth. Yes, the conversion of this into a rate at which temperature changes involves the heat capacity which, as you note, is largely due to thelarge amount of water. However, this doesn’t change the end result, i.e., the final steady-state temperature, but just how long it takes to get there. [Of course, this ignores water vapor or cloud feedbacks, which can affect the end result.] Well, if I fail to comprehend this, I am in good company with basically all of the scientific community. Why do you think you understand these things better than the National Academy of Sciences, the authors of the major physics textbooks which discuss global warming, etc., etc.? You are just fooling yourself…It is the Dunning Kruger effect ( ). Look, if you want to believe nonsense, I can’t stop you. Go play with your fellow travellers who believe the Earth is only 6000 years old and all the rest of the folks who would rather believe pseudoscience than science that conflicts with their ideology. Ignorance can only be cured if someone wants to learn. You want to remain ignorant and so you will. 20. To Judy: It is clear that you miss the points I want to make. Of course there are endless little things you can focus on and question, but in the spirit of Leibniz I ask you to try get the main message. I am not saying that my model is perfect. I try to make a point about radiative heat transfer based on a mathematical analysis of the same equation Planck tried to use but gave up with. If you focus on this equation, do see something of interest in my analysis? What is your model for radiation? Does it contain “backradiation”? Is it a stable phenomenon in your model? Next, you said you did not like Kiehl-Trenberth, and I asked you why? I do it again. And have you given your students my chapters for homework? It could be an educational experience, and students need assignments, right? • “I am not saying my model is perfect” Your model and main message are fundamentally flawed, as was easily shown. 21. To Maxwell: A warm body also absorbs low frequency waves but re-emit them and thus avoid getting heated by low-frequency stuff. Like an educated person simply does not get heated up by silly remarks from uneducated, only by remarks from more educated. Right? • Claes, to me, that seemed to be a weak response to a very clear post by Maxwell. You wanted Judith to give you the opportunity to debate the science contained in your book, so debate it properly rather than handwaving away difficult objections. • Mr. Johnson, Is the irony lost on you? ‘A warm body also absorbs low frequency waves but re-emit them and thus avoid getting heated by low-frequency stuff.’ Without warming the warm body with low frequency light from the colder body, there is lack of energy conservation. In order to emit more low frequency light (ie the low frequencies already being emitted and the absorbed low frequencies from the colder body) the thermal equilibrium must change, coming to a higher temperature according to the Stefan-Boltzmann law. 
Raising the temperature costs energy. So there are two options 1) your theory violates the conservation of energy because the emission of low frequency light by warmer blackbody doesn’t change in response to increase flux of low frequency light from a colder blackbody or 2) conservation of energy is preserved and your thesis (cold blackbody can’t heat warm blackbody) is wrong. I’ll let you pick which options you want. With respect to your poorly thought out classroom analogy, I am constantly learning from people who have less education than me. On an almost daily basis in fact. So not only is your analogy not informative in the context of energy transfer via radiation, it’s as fundamentally incorrect as your physical theory. Any other thoughts? 22. One thing that may be overlooked, in these discussions on whether or not a cold object can heat a warm body though exchange of radiation is that, a photon doesn’t know where it came from, the only thing it “knows” is its frequency. All the properties, momentum, wavelength and energy are directly related to its frequenncy and vice-versa. Measure one of the four and you know all of them. 23. The trouble with photon particles carrying energy back and forth is that it is an unstable phenomenon, or do really think there is a highway with left and right lanes connecting two bodies? Why would photon respect such traffic laws? Which equation is describing the physics you are hinting at? • Why does there have to be left and right lanes, or what happens when two photons traveling in opposite directions reach the same point in space? Do they collide, or interact in anyway? Or do you have anything other than handwaving to support this statement from your book? “We argue that such two-way propagation is unstable because it requires cancellation, and cancellation in massive two-way flow of heat energy is unstable to small perturbations and thus is unphysical.” Why does it require cancellation and why is it unstable? • Why would it be unstable? In second chapter you obtain an equation witch is the same as Boltzmann law for 2 bodies and infinitely small T difference. So conclusion about stability should be the same. By the way, your equation is not symmetrical, meaning that cold and hot temperature have not the same influence. So how do you generalise to a N>2 body problem? Looks trickier than classic Boltzmann to me. But more important, you throw out quanta interpretation. Sure it is not intuitive, but since Boltzmann it has been used to derive a huge amount of physical equations, and explain a lot of experimental results. Throwing out quanta to obtain radiative transfer equations you like better is only the begining of the story, because now you will have to reinterpret THE major part (more important imho than relativity) of modern physics ( post WWI physics). This is not out of question, but it is a huge task, and a task far far far too big to start from just radiative heat transfer…even if historically it was the start up of quantum mechanics. To make such a body of inference collapse, a single new fact may be sufficient, but the new fact will usually not be the same as the one at the origin of the old theory, and the new theory should be as powerful as the one it replace. 
Not bearing well for your new interpretation, so yes, even if I like the first chapter a lot, the second one is definitely in crackpot territory… • I notice you have avoided the challenge of applying your theory to a real world problem such as heat loss from a pipe; the concept of a cooler body transferring heat to a warmer one has great success in these situations and has been tested many times. Cut to the chase, try some of the problems on page 582 of Mills, ‘Heat and Mass Transfer’. Here’s a link in case you don’t have a copy. • BTW, your interpretation of radiative heat transfer is very easily testable experimentally: consider heat exchange between a hot body at T_h and a cold body at T_c = T_h/2. The classic equation (eq. 20 in chapter 2) gives R = σ(T_h^4 − T_c^4) = (15/16)σT_h^4, while your new equation (eq. 21) gives R = 4σT_h^3(T_h − T_c) = 2σT_h^4, i.e. more than twice the heat transfer predicted by S-B. Quite easy to test using a simple calorimetric experiment, no? 24. Believers in the greenhouse effect will not honestly take in information counter to their belief, no matter how it is couched. As long as everyone pretends that there is a legitimate scientific debate being engaged here, it is obvious that this situation will continue unchanged. Meanwhile, the truth lies elsewhere than the mass of climate scientists, and the hapless public, supposes. What follows is a comment I started to post on Claes Johnson’s site a few days ago, but didn’t because I realized no one was listening. I’ll put it here just because I exist, and the facts exist, and it has to be said, and eventually admitted by everyone: You need to establish first how the atmosphere is basically warmed: by atmospheric absorption of direct solar infrared irradiation, or by surface absorption of visible radiation followed by surface emission of infrared. Climate scientists, and their defenders, who tout the greenhouse effect, believe the latter [which leads to the infamous backradiation], and ignore the former. But as I have tried to communicate, to other scientists and to the public (see my blog article, “Venus: No Greenhouse Effect”), comparison of the atmospheric temperatures of Venus and Earth at corresponding pressures, over the range of Earth atmospheric pressures (from 1 atm. down to 0.2 atm.), shows the ONLY DIFFERENCE between the two is an essentially constant 1.176 multiplicative factor (T_venus/T_earth) which is just due to the relative distances of the two planets from the Sun. Nothing more. It has nothing to do with planetary albedo, or with the concentration of carbon dioxide or other “greenhouse gases”. The only (small) deviation from this general condition is in the strictly limited altitude range of the clouds on Venus (pressures between about 0.6 and 0.3 atm. only), where the Venus temperature is LOWER (not higher, despite the carbon dioxide atmosphere) by just a few degrees than the strict 1.176 x T_earth relationship, due no doubt to the cooling effect of water (dilute sulfuric acid) in those clouds. The only way this overwhelming and definitive experimental finding (T_venus/T_earth = essentially constant = 1.17 very closely, encompassing the data of two detailed planetary atmospheres) can be explained is that the atmospheres of both planets are heated by the SAME PORTION of the solar radiation, attenuated only by the distance from the Sun to each planet.
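As a purely arithmetic check on where that 1.176 figure comes from (assuming mean orbital distances of about 1.00 AU for Earth and 0.723 AU for Venus, absorbed flux falling off as the inverse square of distance, and temperature scaling as the fourth root of flux; this is a sketch of the distance scaling only):

```python
d_earth = 1.000   # mean orbital distance, AU (approximate)
d_venus = 0.723   # AU (approximate)

# Absorbed flux ~ 1/d^2 and, for a grey body, T ~ flux^(1/4), so T ~ 1/sqrt(d)
temperature_ratio = (d_earth / d_venus) ** 0.5
print(temperature_ratio)   # about 1.176
```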
This means absorption of visible radiation at Earth’s surface, followed by surface emission of infrared (heat) radiation into the Earth atmosphere, cannot have anything to do with the basic warming of the atmosphere, because Venus is largely opaque to the visible solar radiation, and it cannot reach Venus’s surface (and is thus not part of that common portion warming both atmospheres). So the first unarguable fact is: Earth and Venus are both warmed by direct atmospheric absorption of the same infrared portion of the solar radiation. There is no speculation, no theory in this statement: It is an amazing (because so many scientists believe otherwise) statement of experimental fact, based on the actual detailed temperature and pressure profiles measured for the two planets (which have been available to climate scientists promoting the greenhouse effect for nearly 20 years now, which means they are incompetent). And it completely invalidates ANY “greenhouse effect” of additional warming by adding carbon dioxide to the atmosphere: Venus has 96.5% carbon dioxide (compared to Earth’s 0.04%), yet its atmospheric temperatures relative to Earth’s atmosphere have nothing to do with that huge concentration, but only and precisely to the fact that Venus is closer to the Sun than is the Earth. Venus’s surface temperature is far higher than Earth’s, because Venus’s atmosphere is far deeper than Earth’s. To tell the public — and to teach students — otherwise is to recklessly spread an obvious falsehood and steal hard-earned knowledge from the world, thereby misusing and ultimately defaming the authority of science in the world. Stop playing around with theoretical put-downs, and talking past each other, and admit that the Venus/Earth data completely and unambiguously invalidates the greenhouse effect. 25. Claes starting point is not concerned with the climate change issue as such. His contribution is to question if Plank and Einstein were correct to abandon classical wave theory in favour of the quantisation of electromagnetic radiation. To be sure Plank and Einstein were deeply unhappy with the situation and regarded the concept of the photon as a “fix” or even a “trick” which would give way to some fuller explanation of phenomena like the photoelectric effect and so on. IMHO the photon explanation is the best we have at the moment but I’m glad that imaginative people like Claes are ready to reexamine the fundamentals from time to time. I’m sure if a real problem about heat transfer required a solution Claes would produce a solution that competent Physicists would agree with. He would probably use the Poynting vector to give the direction and magnitude of heat flow. Which of course as Clausius pointed out is always from higher to lower temperature bodies. On the climate change issue he would say I’m sure that the colder atmosphere cannot increase the temperature of the warmer Earth Surface. And he, in turn, throws out the superposition principle ( the two black bodies’ radiation patterns can be solved for indepandantly, and then added together), which holds for classical wave theory. I don’t see switching to a classical EM frame, and then having to destroy a central tenet of the classical EM theory an advance. You can’t have it both ways – classical EM holds and classical EM doesn’t hold. The very frame it’s put itno says his theory is flat 100% wrong. 26. Maxwell posts: which is obviously wrong. 
But why?” One warm body in dark space radiates energy in all directions except back at itself (ignoring internal self-balancing). Two warm bodies in dark space do that but also each warms the other which reduces the rate at which they cool. This is true regardless of relative temperatures. The cooler body radiates the warmer body (can’t be helped – it doesn’t know it is the cooler body and science doesn’t care) and that unavoidably slows the rate of cooling of that warmer body. Unlike electricity passing through a straight taut wire (and a tapered wire will demonstrate distributed radiated energy), no part of the wire radiates any other part of the wire. It is like that solitary radiating body in space. The thermal distribution is a consequence of the local resistance and thermal conductance. Not the case with radiated energy. Each object paints any other visible object and that object is compelled to react to that energy. • dp, it was a rhetorical question, but I appreciate your answering it in the context you used. It’s more practical than my own and hopefully will get through to more readers. • OK, I’m late to this and have what may be a very dumb question. But, dp, in the scenario you posit isn’t it possible, depending on the temperature, size, and proximity of the two bodies, that the cooler may actually increase in temperature, at least for a period of time, while it is never possible that the warmer object would increase in temperature? And isn’t that the point some are making, i.e. the colder body can not warm the hotter body? • The warmer body does indeed warm the colder body, but at the same time the warmer body gets also warmer than it would be without the colder body. It would still radiate as much as without the colder body and this radiation would disappear to the empty space. What the colder body does is that it is also radiating (although less) and some of this radiation is going to hit the warmer body and bring some heat to it. Some additional heat is heating the body whatever its source is. • Pekka, I have seen this explained before. If the colder body causes the warmer body to heat then the radiation of the warmer body will increase and it should be measurable. If we cannot measure it the effect is so small as to be ignored in the context of the climate debate (much larger effects are ignored by the Models). Can you point us to papers showing the experimental data on this? No one else has bothered to beat us over the head with the actual empirical data, that I have seen, and my head is really hard so takes a lot to penetrate it. 27. Claes, I could recommend some books on statistical thermodynamics if you’re interested. It seems to me that if one is going to dismiss it as “jibberishy” one ought to know something about it, if one is not to be be considered a crank. 28. To David: I have tried to learn from books on statistical thermodynamics but I belong to the large group of mathematicians who cannot understand what this theory tells you about reality. As Harry DH says: A constructive debate requires constructive minds. To argue with a three year old who has decided to not do something, requires something else than good old logic. Yes; it is a good idea to go back and understand that Planck and Einstein and Schrodinger were not happy at all with particle statistics. Maybe they had some good reasons not to be which are still valid. 29. Right, they are challenging Planck and Einstein so we should prove it. 
From the chapter on Blackbody radiation: “7.13 Stefan-Boltzmann’s Law for Two Blackbodies. The classical Stefan-Boltzmann’s Law R = T^4 gives the energy radiated from a blackbody of temperature T into an exterior at absolute zero temperature (0 K). For the case of an exterior temperature T_ext above zero, standard literature presents the following modification: R = T^4 − T_ext^4, (20) where the term T_ext^4 conventionally represents “backradiation” from the exterior to the blackbody. It is important to understand that this is a convention which by itself does not prove that there is a two-way flow of energy with T^4 going out and T_ext^4 coming in. In our analysis, there is no such two-way flow of heat energy, only a flow of net energy as expressed by writing (20) in the following differentiated form R ≈ 4T^3(T − T_ext), (21) with just one term and not the difference of two terms. The mere naming of something does not bring it into physical existence.” If you have two bodies, or one body radiating to an exterior (which can be considered as two bodies), they are both radiating, and how do they know of the existence of the other, which would be required to determine the magnitude of the net flow of energy? Your analysis pretty much requires inanimate objects to have knowledge of other inanimate objects. We can detect the cosmic background radiation, and those photons, when they enter a detector, must add that energy to the detector in order to satisfy conservation of energy, which warms the detector, slightly. That cosmic background radiation is just blackbody radiation extremely red-shifted. I guess we are getting somewhere, as those who are trying to disprove the greenhouse gas effect realize that in order to do that, they must attack Einstein, Planck and the Photon, and you wonder why they are labeled crack-pots. 30. I often find challenges to my existing perspectives to be enlightening, because in responding, I’m forced to review my own understanding, and on occasion, revise it. In this case, however, the claim that a cooler body can’t cause a warmer body to become warmer still (if that is indeed claimed) is so nonsensical that it would be hard to learn anything from refuting it. Instead, I will simply suggest a simple experiment. I assume most of us are located in what is now a relatively cold time of year. Here is what I suggest. When the temperature outside is 2 deg C and your body skin temperature is, say, 35 C (measured by a thermometer taped to your skin and insulated to shield it from the outside), go outside dressed only in a short-sleeve shirt and shorts, wait for about an hour, and then take your temperature. It will be lower – record the value. It might be around 32-33 C. Now go back in the house, and put on heavy clothes and an overcoat, taken from the closet at 20 C (obviously colder than your body skin temperature). Again, take your temperature after an hour. Did the 20 C clothes cause your 32 C temperature to go up or down? The mechanism of warming by the clothes is primarily convective, while the warming of the surface from the atmosphere is primarily radiative, but the principle is the same – a cooler body can cause the temperature of a warmer body to rise. For this to happen, of course, the cooler body must itself be exposed to heat that originated in an even warmer source than the current temperature of the warmer body.
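A minimal steady-state sketch of the point just made (Python; an idealized body with a fixed internal power source, unit emissivities and equal areas, surrounded by a thin shell that radiates from both faces; every number here is illustrative):

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

P = 100.0   # internal power dissipated by the warm body, W (think metabolism or absorbed sunlight)
A = 1.0     # radiating area, m^2, emissivity taken as 1

# Bare body radiating straight to cold space (~0 K):
T_bare = (P / (SIGMA * A)) ** 0.25

# With a thin surrounding shell: the shell absorbs the body's radiation on its
# inner face and re-emits from both faces, so in steady state
#   T_shell^4 = T_body^4 / 2   and   P = SIGMA * A * (T_body^4 - T_shell^4)
T_body = (2.0 * P / (SIGMA * A)) ** 0.25
T_shell = T_body / 2.0 ** 0.25

print(T_bare, T_shell, T_body)
# T_shell equals T_bare, and T_body is about 19% higher than T_bare: the cooler
# shell raises the heated body's steady-state temperature, yet heat still flows
# only from the hotter body to the cooler shell and on out to space.
```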
In the absence of such a source, a cool object can’t raise the temperature of a warmer object (although it can cause it to cool less than if the warm object were simply radiating to space). For your skin, that heat is generated by metabolism sufficient to maintain your internal temperature above the 32 C skin temperature, and the clothes retard its escape. For the atmosphere, the heat comes from the sun, and is transmitted to the atmosphere by absorption of solar radiation and IR radiation from the surface. Ultimately, of course, the net heat flow is from warm to cold – from the sun, via various routes, to the Earth, and then to space. In the meantime, the greenhouse effect operating on the atmosphere makes the Earth’s temperature habitable. • Fred, with all due respect, everyone here understands conduction and its little brother convection (and convection’s little brother advection) just fine. The nonsensical greenhouse gas theory is based on radiation and radiation balance causing heating. Bringing conduction, convection and insulation into the conversation is off-topic and a distraction…and certainly seems intentional to me…like a magician trying to distract the audience from the things going on in his left hand. • Fred Moolten | During a school lesson the Physics teacher might say the force of gravity causes “bodies” to accelerate towards the Earth at 9.81m/s2. A pupil might ask “is the body alive”. Fred we are not talking here about heat sources that have a means of regulating their power output such as an animal. Will putting clothes on a bronze statue at a temperature of say 350K cause its temperature to rise above 350K if the ambient temperature is say 275K? Of course not! All the clothes can do is to insulate the body i.e. to reduce the rate of heat loss from the object. • I believe most readers will understand the point I made. • Fred Moolten Most readers will conclude you don’t know much about heat transfer! • I’ll take my chances, Bryan. • Fred, I do hope you are joking here as the reason I will be warmer after putting on the 20c clothes is the energy I’m burning and turning into heat (you know, calories) will not be lost as quickly allowing my body to warm. • Ken – Your explanation is correct, but there was no joke intended. The point is simple – as long as a heat source is available for the cooler object to operate on, that object can raise the temperature of a warmer object. In this example, the heat source is body metabolism. For the greenhouse effect, the heat source is the sun. The inability of a cooler object to raise the temperature of a warmer object applies when there is no source of heat for the cooler object to divert back toward the warmer object, but that is not the case with our atmosphere. • Bryan, if a reader came to the conclusion that Fred was discussing convection or conduction it would point more to reader’s inability to decipher the most important aspect of the his example rather than a real lacking on the part of Fred. Yet, here you are. • Fred seemed to have interlinked lines of confusion . Power sources that regulate their output. Insulation does not imply that the insulator transmits heat by any method to the source of heat. • Actually I don’t understand Fred. Let me see, if I put a brick out in the sun and it warms to X degrees, and if I then split the brick in 2 and seperate them a couple of centimetres, they will “become warmer still”? 
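Fred’s replies just below lean on the idea that an equilibrium temperature is set by a per-square-metre balance between absorbed and emitted flux; here is a minimal sketch of that balance (grey body in steady sunlight, no atmosphere, illustrative values only):

```python
SIGMA = 5.67e-8    # Stefan-Boltzmann constant, W m^-2 K^-4

S = 1000.0         # incident solar flux on the sunlit surface, W/m^2 (illustrative)
ALPHA = 0.9        # solar absorptivity
EPS = 0.9          # infrared emissivity

# Steady state per square metre: absorbed = emitted
#   ALPHA * S = EPS * SIGMA * T^4
T_eq = (ALPHA * S / (EPS * SIGMA)) ** 0.25
print(T_eq)   # about 364 K for these numbers

# Doubling the sunlit area doubles both the absorbed and the emitted power,
# so the balance point (the temperature) is set by fluxes, not by total size.
```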
• Possibly yes, because of increased surface exposure to sunlight, but it depends on air temperature, the absorptivity of the bricks for solar and IR wavelengths, their IR emissivity, conductivity and temperature at the surface they are resting on, and other variables. • I don’t know why you would introduce all those variables. It’s the one brick under the one sun sitting on the one surface. All I do is tap it with my trovel and split it in half (like a good brickie would) the properties of the 2 halves are identical. If it’s T rises due to the greater surface area, what has that got to do with the discussion about a cool body increasing the T of a warm body via radiation? OK we’ll void the extra surface area by placing a brick under the sun until it reaches X degrees. We now get a 2nd brick from the shed and place it next to the first one. Will the T of the first one now rise above X degrees because a 2nd brick was placed next to it? • Yes, under many circumstances (see Frank Davis’s link below to Spencer’s blog). • that means we could….for instance…..increase the surface temperature of the Moon from 107DegC to a somewhat higher T by placing an atmosphere around it? • The mean (day/night average) lunar temperature could be increased by an atmosphere containing greenhouse gases. • Why are you mentioning the mean T? The moon has a T of 107DegC during the day. If we introduce a cool body next to it (an atmosphere) will it increase the moons T? It’s a simple question expanding on our discussion so far. You didn’t introduce ‘mean’ or day night into the brick example. So what will the new daytime T be? • BH – It will increase it more at night, but it would also increase the daytime temperature as long as the cool body did not shield the moon from the sun. • Splitting a brick is a very good example. If we split the moon in two halves, will they warm each other, so each alves become warmer? I dont thnk so. • At equilibrium, the temperatures won’t change as long as the new surfaces have the same physical properties (emissivity/absorptivity) as the original surface. That is because the moon’s temperature is determine by the level at which radiative loss to space equals radiative gain from sunlight. Since splitting the moon won’t change the incoming solar energy in W/m^2, the outgoing flux and therefore the surface temperature won’t change. • actually if you split the moon in half, the two halfs would both be cooler as the combined surface area would be greater • Rob – the extra surface area would both absorb and radiate more heat. The temperatures would remain unchanged, because they are dictated by solar absorption on a W/m^2 basis. Surface area is therefore irrelevant. • Fred- If you slice a moon or planet in half wouldn’t it expose the warmer core of each half, which would result in greater heat loss • just kidding • Baa Humbug Indeed, what a number of the IPCC adherents miss out is that a colder object can make a warmer object colder than it would be in the absence of the colder object. Why do they have this blindspot? • I don’t understand your (Moolten’s) point either. Clothing does not heat up a human body via “back-radiation” or “back-conduction.” There is no heat transfer from cold to hot without work input (Clausius), and clothing (and likewise the atmosphere) cannot add work input. “The total surface area of an adult is about 2 m^2, and the mid- and far-infrared emissivity of skin and most clothing is near unity, as it is for most nonmetallic surfaces. 
Skin temperature is about 33 deg C, but clothing reduces the surface temperature to about 28 deg C when the ambient temperature is 20 deg C. Hence, the net radiative heat loss is about Pnet = 100 W.” Clothing “reduces the skin surface temperature” because the human body has to supply heat energy to the colder clothing to increase the temperature of the clothing. Clothing does limit convection, as do glass panes in a greenhouse, but CO2 has no such ability. Thus, the analogy fails. Please point me to a textbook of physics which contains the terms “back-radiation” or “back-conduction.” • Without the clothing, the temperature would decline even further. If you are skeptical, try the experiment I proposed. 31. “I belong to the large group of mathematicians who cannot understand what this theory tells you about reality.” It is impossible to take anything you say seriously when you make statements like this. When classical thermodynamics fails to explain the specific heat of your atomic crystalline solid, to where do you turn? Maybe you can guess what technique Einstein used to model the solid. 32. No I am serious, as serious as Einstein when he distanced himself from statistics as a way of understanding physics. • As in the statistical emission properties of a theoretical S/B, solid two-dimensional black-body disc. As aposed to the physical emission properties of a real, fluid, three-dimensional grey-body gas. The Stefan/Boltzmann BBD argument is a infra-red herring. It leads nowhere because it is an apple and oranges comparison. We can easily compare a body of CO2 to a body of air and clear up the argument in seconds. “An easily reproducible experiment” This simple experiment demonstrates that CO2 in the atmosphere is forced in to equilibrium by and with, the O2 and N2. Not as AGW has it, the other way around. • Interesting. You choose a classical frame for your work, and then use a statement about the interpretation of QM wave functions to boslter your argument .. but fail to note that Einstein didn’t distance himself from statistical mechanics, etc. I see no clarity in your thoughts or arguments, merely throwing in red herrings instead to answering the obvious inconsistencies which result from your theory . 33. Judy says that something is wrong with the KT energy budget, but refuses to tell what is wrong. What kind of debate is this? Is it some kind guess play? So Judy, please tell me now what it is you find is wrong with KT? • Exactly when and where have I said something is wrong with the KT energy budget? KT’s numbers are almost certainly inexact. Attempting to do some sort of globally averaged energy balance may not be the best way to go about it. But that does not mean that atmospheric infrared back radiation does not exist. 34. To Fred Molten: Can you give me the equations you are using showing that heat by itself (without external input of energy) can move from cold to warm? Of course putting on clothes makes it possible to keep a higher body surface temperature but the heat comes from the catabolism of your body, not from your clothes, at least if you live in Sweden. 35. The present physical theories are perfectly able to describe all basic processes that need to be considered in analyzing atmosphere and they have been tested extensively in very many different setups. There are no reasons to replace any of this knowledge by some conflicting physical laws. Most physicists are, however, unaware about, how much of the physical understanding can be described in several different ways. 
Handling of electromagnetic radiation is one good example. One of my former colleagues did theoretical research on laser physics. Most descriptions of lasers start immediately with quantum field theory, but his approach was based on classical electromagnetic field theory and it was very successful. It was not in contradiction with quantum mechanics, but the mathematical approach was very different. I can see in Claes Johnson’s texts superficial similarities with that approach. The way quantization is brought into the calculations can be chosen from several alternatives. In some approaches it can make sense to state that there are not forward radiation and back radiation, but only the net radiation. If the final results differ from the conventional approach they are certainly wrong as the conventional approach has been validated so well, but the alternative approach may also be correct as long as it leads to the same results. I do not believe that the alternatives would often be easier to understand or of any particular value, but I would be careful before declaring some non-conventional approach automatically wrong. The case of analyzing lasers that I mentioned at the beginning is proof of the fact that sometimes one may indeed find advantage from postponing the quantization and using classical formulation as far as possible. Using obscure alternative formulation and vague argumentation as evidence on weaknesses in the conventional understanding of physics is another matter. When it is done in parts of physics, which have been applied widely for years without any conflict with observation, I would not give any weight on such claims. • Thank you Pekka • Steven Mosher I’ll suggest a cage match. Johnson versus maxwell. no other commenters allowed. People can then see that Johnson will not be able to maintain his position. we will them ask him to admit his honest error and ask the publishers to correct the book. • As publisher of the North American and Oceania version…I accept this challenge. I’m happy to publish errata and a new edition if and when the errors reach a critical mass. I’m not sure how to prove anything when the topic gets this esoteric…I prefer lab experiments where the data verifies or falsifies a claim. No models. No dueling weblinks or appeals to authority in any form. It makes things tough when you need a vacuum to isolate the experiment from conduction and convection effects. We’ll see how it goes, I suppose. Good idea, Steve. • Steven Mosher Thank Ken. I suggested the same thing for the IPCC. We need to make room for the admission and correction of honest error. The IPCC could not do it. I do not trust them as a consequence and thus am forced to look at primary research on my own to come to a considered judgement. • Now that we have aired some stuff, I agree that the discussion is best left to those with a degree in physics (maxwell, pekka, and there are others among the denizens of climate etc that have not shown up). • I’d be down for this ‘cage match’ if I thought it would do any good. Alas, we’ve seen that even when faced with the idea that his theory violates the conservation of energy (the 1st law of thermodynamics, the very theory he claims supports him), he is unwilling to concede or even engage. It’s my opinion, based on this fact and the lack of transparent discussion perpetuated by some other commenters, that science is not of interest to these people. 
Maybe it is an 'honest' mistake that Johnson has gotten to this place, but I see his poorly thought out analogies beginning in Chap. 2 as a way for him to rationalize away the physical meaning of some of the most well-known and thoroughly tested laws physics has given us thus far. In such a case I have to wonder how much honesty is involved…
• Pekka, a good historical example of what you are saying is the Drude model. It treats the electrons confined in a solid as classical particles, in large enough numbers. There is a basic kinematic equation describing the force acting on each electron that, when solved for the appropriate situation, gives an answer that fits 'reasonably well' to observations. You may be familiar with this model if your friend works on lasers. But the Drude model, and other so-called 'empirical models', is flawed physically. Just as your friend's laser theory is flawed. That is to say, it is practical for a well-trained expert to use such a theory because he/she understands its flaws and faults. It works for back-of-the-envelope calculations, which are quite important in the lab. What happens when we are trying to determine a 'physical understanding', however? In such cases, it's my opinion that we must do our damnedest to get to the meat of a problem. Even if that means dispelling a computationally practical and useful formalism like the Drude model. Because the Drude model doesn't give us transistors or quantum wells or superconductivity…or lasers for that matter. Relying on the Drude model takes away from our understanding of reality. In the same way, while Mr. Johnson's attempts might seem like an interesting facet of science, they fundamentally take away from a broader understanding of reality. There is no basis in its being real other than the words on a pdf. It is especially problematic since so many here are willing to simply regurgitate his memes without any skepticism at all. I think the most important aspect of doing science, as Mr. Johnson claims he is doing, is determining whether or not you can handle being wrong. If you cannot handle such an outcome, as Mr. Johnson's reaction to the criticism he has faced here makes me think, you are not interested in science. I don't think Mr. Johnson is interested in science. I'd be interested in your take on that.
36. Steven Mosher – You and John Sullivan utterly misunderstand the concern about the "united front". If, for example, the AGU were to offer some session time to discuss skeptical issues, the first question is WHICH skeptical positions should be given time? If, for example, a research center were to open its laboratory time to test skeptical ideas on GCMs, WHICH skeptical positions should be given time? It was a PRAGMATIC discussion about a PRACTICAL problem. Now then, warmists could pick the WORST skeptical ideas and only discuss those. This is what RealClimate does.
• Give some specifics for a really good skeptical idea that Real Climate has ignored. Paul Middents
• As far as I remember RC has covered just about every paper which has been promoted by the skeptics in recent years. In the end there aren't good skeptical ideas and bad skeptical ideas, there are just good and bad ideas, and good ideas will generally get proper consideration. Maybe there are some exceptions – if someone can provide evidence that there are good, credible ideas out there which are not being considered then fine, until then I remain, well, skeptical.
• Andrew Adams, I will be glad to give you an example of bad ideas that RC still supports. Hockey Sticks. Have they admitted yet that Mann's and associated work are all severely flawed and should be withdrawn? That they do NOT support the claims they make?
• Hi kuhnkat, I suspect that none of the "Hockey Team" ground staff at RealClimate have been allowed to read respected investigative science journalist Andrew Montford's excellent exposé "The Hockey Stick Illusion". This was declared by another respected investigative science journalist, Matt Ridley, to be "…a rattling good detective story and a detailed and brilliant piece of science writing…". Ref. your comment yesterday at 11:48 pm, for Andrew to "…Dig harder man!!!…", he had the opportunity on 1st July and because he refused to remove his blinkers he threw it away. Investigative science journalist? – pull the other one. Best regards, Pete Ridley.
• Montford is a "respected investigative science journalist" by what standards exactly? Has he won awards from his colleagues? Has his work appeared in prestigious publications? I haven't read Montford's book, but my experience is that those who have make lots of charges regarding Mann that they can't actually defend, most likely because they are false. (At least, if they are true, no one has provided evidence to rebut my evidence that they are false.) I am not sure whether they got this info from Montford, but that has been my impression.
• Joel, instead of waffling from a position of ignorance try reading the book and following the references, do your own assessment, then go over to the blogs of Steve McIntyre and Andrew Montford and try to convince them that you know better than they do. Let me know how you get on. They may let you co-author a paper with them on the subject. Best regards, Pete Ridley
• Pete and Dr. Curry: Well, next time I find it in a bookstore, I will look through it and see what it has to say about the "censored" directory and about the Tiljander proxies. If it just repeats the same unsubstantiated nonsense that I see from people like "Smokey" on WUWT, I will be very unimpressed. If not, then maybe it is more worthwhile.
• Joel, rather than reading the books, try going to their websites and reading the archives. Especially Climate Audit, Steve McIntyre's site, as he was central to debunking the Hokeystick. You might even ask him directly about the "censored" directory with r2 information that was not published, as he wrote about it first I believe. Of course, even if that was an inflated anecdote by some unknown person, the fact is that the r2 statistics for the Hokeystick FAIL! The difference is whether Mann knowingly misled people or is just sloppy and ignorant about the statistical methods he uses. Here is a start at CA. Be sure to ask Steve directly about how he knows the "censored" directory really came from Mann's FTP server.
37. To Pekka: You seem to agree that macroscopic physics cannot be modeled by quantum mechanics, and so macroscopic equations are needed for atmospheric radiation. Now macroscopic radiation seems to be well described by Maxwell's equations, modulo the difficulty of the ultraviolet catastrophe, which destroys everything. What I suggest is a rational way to avoid the catastrophe and keep the great advantage of Maxwell's equations as compared to primitive particle statistics. Isn't that something to think about a bit, in the spirit of Planck and Einstein, rather than dismissing without reflection?
And the radiative transfer equations are much cruder than Maxwell, right? • Claes, I do not agree that quantum mechanics cannot be used in those parts of atmospheric physics, where it has been used. What I was saying that in some situations the agreement with quantum theory can be obtained in surprisingly many different ways. Even for effects where the quantum theory differs from traditional classical physics the correct results may sometimes be obtained in ways where the quantum effects are somehow hidden. Hamiltonian formulation of mechanics allows for presenting some quantum effects in less common fashion etc. Einstein was not happy with quantum theory. I think that the main reason is related to conceptual difficulties in joining it with general relativity described in the elegant ways that he had developed. His dissatisfaction came out also in his statement about God not playing dice or in his paper with Podolski and Rosen, which has now been proven to be in conflict with experiments after Bell had formulated his inequality along the lines of that paper. In this case Einstein erred and quantum mechanics prevails. The problems in interpreting quantum mechanics are also related to some of the possibilities of doing the calculations differently. The quantum mechanics is, however, extremely successful in giving correct predictions with high accuracy. Thus it is a very good and valid physical theory in pragmatic sense. Most physicists do not worry about the philosophical problems and do their work successfully. Whether the philosophical difficulties turn out to have some relationship to the next paradigm, which would solve the problems of Einstein and unify gravity and quantum mechanics in a elegant way, remains to be seen. Perhaps not by our generation, but our children or grandchildren. I am still not telling the name of my former colleague, but I can tell that he has been later professor at KTH. When we were working at the same institute, we had some very interesting discussions on the foundations of quantum mechanics. • I add that sometimes it has also turned out that results generally thought to depend on quantum mechanics turn out to be true in more general settings. This is not very common and I cannot give examples, but I have certainly heard about such cases. • Claes, Concerning back radiation I certainly believe that it is a useful concept and that the radiative energy transfer can be handled most easily by including it in the calculation. I cannot figure out, how all correct results could be obtained without considering it explicitly. On macroscopic level avoiding it may be possible, but on the more detailed microscopic level it seems almost impossible, but only almost. 38. To Judy: OK so now you say the KT is basically correct and that backradiation is a real physical phenomenon. Very good because we now have something concrete to discuss. May I then ask you about the equations describing your effect of backradiation? Without equations anything is possible. 39. Mr.Johnson: In your description of a IR camera you admit that the instrument , directed appropiately, show radiation. At the same time you negate that this radiation reflect some reality. Could you please explain this ? I am extremely confused. • I think that what Claes is saying, is that the radiation you measure is a result of the temperature. Not the other way around. Sounds good to me. 40. To Judy: Do you claim that radiative transfer equations model backradiation? 41. 
I hope that this is relevant to the discussion: July 23rd, 2010, by Roy W. Spencer, Ph.D.
• Frank Davis – Google the famous "Pictet Experiment". It's of great historical importance and quite relevant to this discussion. Why do they have this blindspot?
• Thanks, Dr. Curry, for hosting this debate. @ Frank Davis: Dr. Spencer says in his article: "So, once again, we see that the presence of a colder object can cause a warmer object to become warmer still." However, the process he refers to is not heat transfer by radiation from a colder system to another system, but a kind of insulation, like in a thermos. Yet the colder system is not providing "more" energy to the warmer system; it would just be preventing the warmer system from emitting heat towards the colder system. This argument is not true because the thermal energy is transferred to the colder system, invariably, unless the colder system is a perfectly reflecting material or the colder system has a very low heat capacity. I would remind Dr. Spencer that the Earth is not a thermos; his argument could be possible if the highest layer of the Earth's system, i.e. the thermosphere, had a mass density higher than that of the surface. It's not the case for the real Earth. On the other hand, if you wish to consider QM on this thread, you must also include induced emission, well described by Einstein, which has been corroborated experimentally and in the construction of some devices, and the well-known and demonstrated radiation pressure. These two real physical phenomena debunk any idea of a "backradiation" from the atmosphere warming the surface.
• Nasif, I think you are confused. Roy's posts on this topic are very clear and definitely show that there is in fact backradiation toward the surface from the atmosphere. His ultimate experiment used an IR thermometer to measure the actual temperature of the air several hundred feet above via the IR light it emits. Even more confusing are your statements concerning stimulated emission and radiation pressure. Can you please explain specifically how those physical processes play a role in radiative transfer, or the lack thereof, in the atmosphere?
• @ maxwell… No more than you are. Roy's "experiment" only demonstrates that there is energy flow by radiation, whatever his conclusions could be. Both processes, induced emission (or induced radiation) and radiation pressure, influence radiation. If you knew what those terms meant, you wouldn't be so confused, as you are, about "backradiation" issues.
• Nasif, you're damn right I'm confused. You still haven't provided a physical mechanism for how those interesting terms have to do with the most important aspects of radiative transfer. On the point of Roy's experiment, what type of energy transfer is he measuring? If the IR thermometer were conducting energy from the surrounding air, his thermometer would have measured about 300 K. Instead, his thermometer read around 200 K. Where does that difference come from in terms of energy transfer? Are you saying that the thermometer can conduct energy from the upper reaches of the lower atmosphere without conducting through all the layers? That would be a monumental theory! Also, if you're going to charge that a particular person doesn't understand some terms you use, you should make sure you know what you're talking about.
I have extensive experience in classical optics, quantum optics, atomic and molecular spectroscopy and nonlinear optics (I built an optical parametric amplifier over the past week, in fact), so I KNOW those terms you're using have absolutely nothing to do with this discussion, the greenhouse effect or 'backradiation'. It's a purely quantum mechanical, spontaneous effect. It was an interesting go at it though.
• Dear Maxwell, Please visit Roy's experiment again and see what the box floor is and what kind of surface it was placed on. You'll get the answer. Regarding induced emission, you should not forget the natural photon streams, both upwards during nighttime and downwards during daytime. On the other issue, if you make the proper calculations on radiation pressure, you'll find that the downwelling radiation heating up the surface is not possible in the real world. If Dr. Curry, the owner of this blog, grants me permission to go off topic, I will proceed to answer your questions properly.
• Nasif, the hole just keeps getting deeper. The experiment to which I was referring had Roy traveling around in his convertible sedan pointing an IR thermometer into the air. Not his makeshift hohlraum. In that case, where he is clearly measuring the temperature of the atmosphere directly several hundred feet above him, how does energy interact with the thermometer to produce a reading of 200 K? I'll give you a hint: it has nothing to do with radiation pressure. It's becoming more and more clear to me that you are using words that have one meaning to you, but a totally different meaning to actual optical scientists. You ought to look into the ways in which these terms are used in scientific circles so that you can more easily communicate your points in a scientific debate.
• Dear Maxwell, Don't go further on this or you'll be disappointed by your own limitations with those concepts. Take your book on Radiative Heat Transfer and you'll see I'm absolutely correct. I don't want to go further in discussing those concepts because they are off topic and I respect the admonitions of Dr. Curry on the purpose of this blog thread. Well, what was the ground on which Roy placed his box, and what was the floor of the box? Could you be so kind as to tell us what it was, specifically? Third, when he was "measuring the temperature travelling around on his convertible sedan"… maxwell, tell me honestly, don't you know how thermometers work and what makes them work?
• Nasif, I'm trying to determine if your lack of comprehension of my comments is due to the possibility you are not a native English speaker or just plain stupidity. I'm willing to give you the benefit of the doubt and assume the first option, but not for too much longer. Again, I'm discussing Roy's use of an IR thermometer, not his makeshift hohlraum. Please make an important mental note of that and stop your persistent confusion over this fact. It's making you look dumb. An IR thermometer measures IR light (heat) emanating from a body or gas. Therefore, if Roy is pointing this thermometer at the sky, the thermometer reads the temperature of the sky via its IR emission. Therefore, the atmosphere is emitting IR radiation toward the ground that began its 'life' at the surface, making it 'backradiation'. QED. As for radiative transfer, I've extensively studied 'An Introduction to Three-Dimensional Climate Modeling' by Washington and Parkinson.
From this text I am able to recover what both quantum mechanics and thermodynamics imply should be a downwelling IR emission from the atmosphere. Do you have other certified texts that you feel are better than Washington and Parkinson? Furthermore, you continue to lack any sort of meaningful physical description of what you are talking about. Based on the plethora of these facts so far, I must say I don't think very highly of your opinion on this matter. It's been real though.
• Maxwell, I have an analogy I think is quite good. If you have a 6 Volt potential and a "current sink" at 1 Volt, you will have a current from 6 Volts to 1 Volt. Increase the "sink" to 4 Volts. The 6 Volt source will drain more slowly. But the current is still seen as going from 6 Volts to 4 Volts. But we don't talk about "back-current". That would be confusing.
• Maxwell, darn, you probably will never see this to answer my question, but, just on the off chance that someone does and can: I am under the impression that the atmosphere absorbs virtually all of the IR radiation (except the window) within about 15 ft of the surface. If this is so, exactly what was Dr. Spencer measuring from the ground?? Wouldn't downward IR also be absorbed, so that all he would be able to measure would be about 20 feet over his head and not an average of several hundred feet????
42. Again Judy: Which equations do you claim model backradiation? As I said, I want equations and I want the equations to be motivated or derived mathematically. Which equations are you referring to?
• The equation that models backradiation is the Planck law for the intensity of light emitted by a blackbody at a specific temperature. This equation is applied to every layer in the atmosphere, which has a stratified temperature profile. The absorption of light, at all frequencies, is modeled by the Beer-Lambert law, which is easily derivable from Maxwell's equations via the electromagnetic wave equation. If we wanted to get down and dirty with the most fundamental equation governing the behavior of absorbing material to first order in the perturbation due to the interaction with light, we would have to use the quantum master equation with a phenomenological coupling to the vacuum field. We can go to second order in the perturbation to get to scattering processes if we like as well. Ever wonder why the sky is blue? This process would allow us to see absorption and spontaneous emission (the dominant form of emission in the atmosphere) on a per atom/molecule level. The Beer-Lambert and Planck laws get the overall average effect of the quantum master equation in this context. So from first principles, we can easily calculate (grad school quantum problems) the rate of absorption and emission of a particular molecule when the light in question is on resonance with a particular allowed quantum transition, the linewidth of that transition based on different broadening processes, and the necessary equipment to test the predictions of any such calculation. From there, we can sum over all the molecules in our volume and get an answer to compare to the observational laws used in climate models. You can see whether the agreement between these methods is good. Let me know how it goes.
• Maxwell, doesn't CO2 absorb and emit based on its molecular bond configuration as opposed to Planck energy?? Maybe someone can jump in and explicate what the difference is, if any??
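A minimal numerical sketch of the layer-by-layer recipe described in the reply to comment 42 above: each layer emits according to the Planck function at its own temperature, and the Beer-Lambert law attenuates that emission on the way down to the surface. The layer optical depths and the linear temperature profile below are illustrative assumptions, not line-by-line spectroscopy; the molecular side of the question just asked is taken up in the replies that follow.

```python
import numpy as np

# Sketch of downwelling radiance at one wavelength: sum the Planck emission of
# each layer, attenuated (Beer-Lambert) by the layers between it and the ground.
# Optical depths and temperatures are assumed toy values.

h, c, k = 6.626e-34, 2.998e8, 1.381e-23

def planck(wavelength_m, T):
    """Blackbody spectral radiance, W m^-2 sr^-1 m^-1."""
    x = h * c / (wavelength_m * k * T)
    return 2 * h * c**2 / wavelength_m**5 / np.expm1(x)

wavelength = 15e-6                       # near the CO2 band, m
n_layers = 20
T = np.linspace(288.0, 220.0, n_layers)  # toy profile: warm near-surface air to cold upper layers
dtau = np.full(n_layers, 0.3)            # assumed optical depth of each layer at this wavelength

downwelling = 0.0
transmission_below = 1.0                 # transmission from the layer down to the surface
for i in range(n_layers):                # layer 0 is nearest the surface
    emissivity = 1.0 - np.exp(-dtau[i])            # Kirchhoff: emissivity = absorptivity
    downwelling += transmission_below * emissivity * planck(wavelength, T[i])
    transmission_below *= np.exp(-dtau[i])         # Beer-Lambert attenuation by this layer

print(f"Downwelling radiance at the surface:        {downwelling:.3e} W m^-2 sr^-1 m^-1")
print(f"Blackbody radiance at the lowest-layer T:   {planck(wavelength, T[0]):.3e}")
```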
• For example, CO2 emits at 15 microns at an intensity according to the number of molecules and their temperature, using the Planck function for that temperature at that wavelength (the Planck function actually peaks not far from 15 microns for normal atmospheric temperatures). This emission is seen at the ground as part of the back-radiation, together with all the other CO2 and H2O bands in clear sky that make up all the back-radiation.
• kuhnkat, Not one opposed to the other, but both combined. The molecular properties determine which wavelengths have strong emission and absorption, i.e. they determine the emissivity, which is equal to the absorptivity. Planck's law tells how strong the emission is at those wavelengths, as the strength is the product of the emissivity and Planck's law for a black body at that wavelength. When the emissivity of a gas is strong for a particular wavelength, it means that it is significant already for a thin layer and very close to one for a thick layer, in accordance with the Beer-Lambert law. Then the strength of emission at that wavelength is the same as for a black body at the temperature of the gas. This is true for those wavelengths, but at other wavelengths the gas emits very little or not at all.
• Then Planck's law is applicable to emission whether it is from level changes in single atoms or bond interactions.
• As Pekka pointed out above, it sets the upper limit at any wavelength; if there is no allowed transition at a particular wavelength then the emission will be zero no matter what the Planck value is. The CO2 band at 15 μm will emit strongly, up to the Planck limit. O2 can emit in the UV at around 220 nm, but in the atmosphere the Planck limit there will generally be so low that this emission will be very weak (Judith made this point earlier).
43. To Lucia: If you had read the equations I refer to as Navier-Stokes you would have seen that they express conservation of mass, momentum and total energy and are the basic equations of thermodynamics, describing transformation between kinetic and heat energy through work. Are you familiar with thermodynamics?
• Claes, The basic equations of thermodynamics are called "The first law of Thermodynamics" and "The 2nd law of Thermodynamics". One of the clues is that these equations contain the word "thermodynamics". In contrast, conservation of momentum is "mechanics" and the Navier-Stokes equations are basic equations for fluid mechanics. Conservation of mass is used in analyses, but that doesn't transform the equation into "a basic equation of thermodynamics". Are you familiar with thermodynamics? I'm laughing myself to tears here. I am familiar enough to know that you are making errors. :)
• This is lucia.
• Kim— You are correct. I don't know why WordPress auto-filled the name incorrectly. I should have seen that.
• The mistake is to think that it is possible to consider thermodynamics and fluid mechanics as separate in a gas or liquid. They are inseparable in the context of free atmospheric thermalisation. To imply otherwise is erroneous, perhaps even fallacious.
44. To Lucia: The convective adjustment that you think is science is just an ad hoc fix-up without any mathematical basis. If you are allowed to adjust what your equations tell you, then you can get anything you want.
• Still waiting for you to apply your model to a real-world example such as those which I showed above. Most applications of standard radiation heat transfer have a substantial overlap between the incoming spectrum and the emitting spectrum, by the way.
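To put rough numbers on the emissivity/Planck point made a few comments above: at a strongly absorbed wavelength the emissivity 1 − exp(−τ) saturates toward 1 as the layer becomes optically thick, so the emission approaches the blackbody (Planck) value at the gas temperature; at a weakly absorbed wavelength it stays near zero. The 15 µm wavelength and 255 K temperature in this sketch are assumed round numbers.

```python
import numpy as np

# Emission of a gas layer at one wavelength = emissivity * Planck function,
# with the emissivity given by Beer-Lambert, 1 - exp(-tau). Assumed values only.

h, c, k = 6.626e-34, 2.998e8, 1.381e-23

def planck(wavelength_m, T):
    """Blackbody spectral radiance, W m^-2 sr^-1 m^-1."""
    return 2 * h * c**2 / wavelength_m**5 / np.expm1(h * c / (wavelength_m * k * T))

T_gas = 255.0              # assumed layer temperature, K
B = planck(15e-6, T_gas)   # Planck limit near the CO2 band

print("optical depth   emissivity   emission (W m^-2 sr^-1 m^-1)")
for tau in (0.01, 0.1, 1.0, 10.0):
    emissivity = 1.0 - np.exp(-tau)   # also the absorptivity (Kirchhoff's law)
    print(f"{tau:13.2f}   {emissivity:10.3f}   {emissivity * B:12.3e}")
print(f"Planck limit at 15 um and {T_gas:.0f} K:      {B:12.3e}")
```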
• Claes– Since your paper suggests you think the first law of thermodynamics is the 2nd law, and the navier stokes equations is the basic equation of thermodynamics, I am not surprise that you think the convective adjustment is just an adhoc fix up. To understand the physical motivation, will need to apply thermodynamics. At my blog I gave you a tip on how to distinguish the first law from the second: The second law should contain an inequality symbol ≤, a symbol that represents entropy (S is often used), and a symbol to represent temperature (T is a popular choice, but rebels sometimes use θ). Also, if I recall correctly, it generally contains no work term (i.e. W would not appear.) As for this: If you are allowed to adjust what your equations tell you, then you can get anything you want. Yes. I agree. • Also lucia. • The above is me– Lucia. • Lucia, I did not check carefully, but I think the equations that Claes is presenting do present correctly the second law. The inequality is hidden in the requirement that D ≥ 0. The formulation is not the one we all have seen most often, but I think it is correct. The same statement that Claes presents correct formulas in a less conventional way seems to apply to the other chapter as well, but there I have doubts on, whether all equations are correct or only some of them. I did not read in this text at all carefully or study the equations more than superficially as I do not think that his approach is useful even when it is correct. Many of the claims in the text are strange if not outright wrong. • Pekka– Specifying 0≤D where D is dissipation is a consequence of the 2nd law of thermodynamics. However, it does not turn those equations into the 2nd law. That equation may be a correct representation of something but it is not the 2nd law of thermo. This is not a matter of notation. Other puzzling things about that equation may have something to do with non-conventional representations — for example, it’s not clear to me that it’s even a correct formulation for the first law. But in order to pinpoint the problems, I need to know whether that’s supposed to be a control volume formulation or an analysis on a fixed volume, and possibly where the boundaries are etc. My impression is it’s supposed to be a control volume with the top at the top of the troposphere– but if so quite a few terms may be missing. (Or not. It depends on whether we have a control volume whose shape is permitted to change– in which case…. well…) • Lucia, My purpose is not to defend the book or conclusions presented by Claes Johnson in the book. I certainly disagree on very many things. I am only noting that texts that are obviously wrong, when they lead to definitely wrong conclusions may not be wrong in all of their details. Most people seem to agree that this chapter is actually correct in what it describes. Its content may be used in reaching wrong conclusions outside its range of validity, but that is another matter. It is also possible that the unconventional way the equations are presented contributes to wrong conclusions, but even so the equations may be correct. Claes Johnson presents two inequalities in eq. (2). They are equivalent when combined with the first law /eq. (4). Of course this is not the most general presentation of the first and second law, but for the problem considered they appear to be equivalent with the general formulation. It is clear that using these laws as more basic than the general formulation may lead to errors. 
Perhaps such an error is really made when considering radiative processes in the other chapter. I am not really interested enough to even check. Also in this chapter the formulas (5) and the related discussion are obscure, if for no other reason than the total neglect of treating units properly. The equations can only be valid in units where temperature is dimensionless (i.e. 1 K = 1) and the unit of acceleration is the inverse of the unit of length. Furthermore it is stated that the specific heat capacity cp = 1. Whether all that is possible at all is certainly not obvious. But then again all that is more or less forgotten when the next formulas are standard knowledge. The whole paper is confusing and may well be misused, but even so it is good to avoid erroneous claims about its content.
• Pekka– I have never suggested that things that are wrong in their results must be wrong in all their details. What I am saying is that those equations are not "The second law of thermodynamics". The reason I am saying they aren't is that they aren't. In undergraduate fluid mechanics problems, students solving pipe flow and other simple problems often use an equation referred to as "the mechanical energy equation" or sometimes "the energy equation". It is derived from conservation of mass and momentum, sort of kinda-sorta like the first law of thermo, and includes a dissipation term. The 2nd law requires that the dissipation term be positive. So, using that equation lets students impose the requirements of the 2nd law on their analysis. However, you don't get to call that equation "the second law of thermodynamics" merely because it permits students to correctly incorporate the effects of dissipation on pressure drop in pipe flow. Likewise, what Claes writes down is not the 2nd law of thermodynamics. Moreover, I find your claim that they are equivalent "for the problem considered" to be rather dubious. In fact, based on the text, I'm not convinced it is possible to pin down what "the problem considered" really is.
• Lucia, At least I agree on your last point. Reading the text of CJ it is often very difficult to pin down what he is writing about or where he is aiming.
45. Dear Friends, I come late to the interesting discussion, so I did not read through it all. Therefore I do have a remark. A flat hot body with two sides, unit heat capacity and time-dependent temperature Th(t), starting at Th(0) without an internal or external energy source, cools from both sides at the rate dq/dt = sigma*Th(t)^4 per unit area. If you now put a cold body with temperature Tc(t) adjacent, facing exactly one side without touching, the hot body cools from this side at the rate dq/dt = sigma*(Th(t)^4 – Tc(t)^4) per unit area and at dq/dt = sigma*Th(t)^4 per unit area from the other side. Therefore the hot body in both cases is cooling all the time, since Th(t) is always greater than or equal to Tc(t). However, in the second case the hot body Th(t) stays warmer all the time than in the first case. But this is different from saying it gets warmer than the initial Th(0). If Tc(0) is smaller than or equal to Th(0), then Th(t) is always smaller than Th(0). Of course, as Roy Spencer showed, a hot body with an internal or an external energy source can get warmer than Th(0) if you put a cold body adjacent to it. Best regards
• Dear Günter… Anyway, the colder system IS NOT heating up the warmer system, but cooling it, continuously, if we wish, but only up to the point of equilibrium, i.e. when both systems reach the same energy density.
And even so, the internal or external source of heat would continue heating up the warmer system. Take off the internal or external operator, for example, and you’ll see the colder system cannot heat up to the warmer system but quite the opposite. It is the internal or external PRIMARY heat source what heats up the system, not the colder system. The latter is Dr. Spencer’s argument. • Dear Nasif, that’s what I wrote if you reread my paragraph.. Therefore I said: “Of course as Roy Spencer showed a hot body with an internal or an external energy source can get warmer than Th(0), if you put a cold body adjacent to it.” Of course it is the energy source that heats the body up. I think it is important not to confuse “getting warmer” or “keeping warmer” with a energy source that heats a body up. • Dear Günter, Yes, you’re right. I misinterpreted the last paragraph of your post. Sorry… You’re also right on not confounding “getting it warmer” and “keeping it warming”. All the best, • In his discussion of a hot plate next to a cold plate, Dr Roy Spencer says: The 2nd Law of Thermodynamics: Can Energy “Flow Uphill”? In the case of radiation, the answer to that question is, “yes”. While heat conduction by an object always flows from hotter to colder, in the case of thermal radiation a cooler object does not check what the temperature of its surroundings is before sending out infrared energy. It sends it out anyway, no matter whether its surroundings are cooler or hotter. Yes, thermal conduction involves energy flow in only one direction. But radiation flow involves energy flow in both directions. Of course, in the context of the 2nd Law of Thermodynamics, both radiation and conduction processes are the same in the sense at the NET flow of energy is always “downhill”, from warmer temperatures to cooler temperatures. But, if ANY flow of energy “uphill” is totally repulsive to you, maybe you can just think of the flow of IR energy being in only one direction, but with it’s magnitude being related to the relative temperature difference between the two objects. Clearly Spencer thinks that radiative heat transfer is completely different from conductive heat transfer, and can go ‘uphill’. He writes: The only way I know of to explain this is that it isn’t just the heated plate that is emitting IR energy, but also the second plate….as well as the cold walls of the vacuum chamber. Does that mean that while radiative heat transfers don’t ‘check’ to see which way to go, conductive heat transfers actually do ‘check’? • Frank, The separation is not that clear. On molecular level even conduction may “go uphill”, but this is not visible and can be ignored. In conduction as in radiation energy goes in both directions at micro level. In conduction this is related to the motion of energetic atoms or molecules or to vibrations (phonons) in solids. The distances are usually very short. Therefore only the collective conduction is observable and described by an equation that describes only the net flow. In radiation it is often possible to set measuring equipment to detect separately radiation in each direction. One photon may go over a large distance etc. The back radiation is thus observable and it may also be that the easiest way of calculating the net energy transfer represents separately the two directions. In some cases it may be easier to consider directly the net flow, but as I said above, this is not always true. Ah! So there is ‘back-conduction’. I would express it differently. 
Conduction describes the *process* by which heat flows along an existing temperature gradient. Radiation is a something that a body *does* based on its temperature and emissivity. The former process directly involves both/all bodies that define the local temperature gradient; the latter by definition only depends on the characteristics of the radiating body itself. At least that’s the way I look at it. • The approach used in describing conduction can easily be extended to part of radiative heat transfer, to those wavelengths with strong absorption. Heat is transferred in accordance of essentially the same diffusion type differential equation in atmosphere by radiation near the center of the 15 um IR band. For wavelengths with weak absorption this approach does not work well, because such radiation does not proceed with small steps in diffusive fashion but by long leaps to a point where the temperature may be significantly different or even escape through the whole atmosphere. Most backscattering occurs in the region where the diffusion-like process describes the heat transfer rather well. On this basis one could describe all this with the diffusion equation and remove most of the back scattering from being considered explicitly. The way the calculations are done does of course not affect what really happens, but it affects often the way this is described. • Frank, Radiative heat transfer consist of two radiative energy flux, one from hot to cold and one from cold to hot. Radiative heat or net radiative energy flows from hot to cold, radiative energy in both directions. It is a little bit confusing, since “energy” and “heat” are sometimes used interchangeably, which is strictly speaking a bit wrong. However, scientists are doing that occasionally and the reader needs to bring it into context. Bad style, though. The second law as stated by Clausius reads: “There is no change of state that only results in transferring heat from cold to hot.” Note, it is not energy in general. Heat in this context should not be interchanged with energy. Best regards • Frank Davis Look at the blackbody spectrum of an object at say 300K. Superimpose the BB spectrum of the identical object at 400K Now using the spectra predict what would happen if these two objects were brought closer together so that they radiate to each other. We notice that; 1. The hotter object has at the short wavelength end, frequencies absent from the lower temperature object. 2. Pick any wavelength that both objects have in common. You will notice that the hotter object is emitting more radiation than the colder once. Now examine the hot surface; It is emitting more radiation of every wavelength than it is receiving. You can now hopefully appreciate that a colder object can never increase the temperature of a hotter object. • I understand your point, Bryan. Perhaps you agree with Guenter Hess’s comment just before yours, in which he wrote: If that’s how it is, then if it were possible to block or divert the radiative flux going from the hotter object to the colder object, while continuing to allow the radiative flux from the colder object to the hotter object (a sort of diode), then the colder object would heat the hotter object. • Frank Davis For the hotter object to radiate to the colder it must “see” the colder object. Since light rays must be able to travel backwards (rectilinear propagation) no such diode effect is possible. 
We therefore are forced to agree with Clausius that even for radiative transfer heat only travels from the hotter object to the colder object. Yes, I agree with Guenter Hess's comments.
46. I would like to take this section of Chapter 1 as a point of departure for my comments. It says: "We have formulated a basic model of the atmosphere acting as an air conditioner/refrigerator by transporting heat energy from the Earth surface to the top of the atmosphere in a thermodynamic cyclic process with radiation/gravitation forcing, consisting of ascending/expanding/cooling air heated by low altitude/latitude radiation and descending/compressing/warming air cooled by high altitude/latitude outgoing radiation, combined with low altitude evaporation and high altitude condensation. The model is compatible with observation and suggests that the lapse rate/surface temperature is mainly determined by thermodynamics and not by radiation." Yes, of course they'd like to formulate a simple "model" that works this way, as some of their other conclusions might nicely fall in line, and in so doing to re-write some laws of physics in the process. But unfortunately, their simple thermodynamic model is simply not the way the real atmosphere of the planet works, nor in fact the way the laws of physics work. It takes hardly anything more than a few basic real-world observations to provide proof that radiational balance is a far more potent regulator of atmospheric temperature than the authors of this book would like in their "simple" model. But then, isn't that the point they are trying to refute? For observational proof, take the role of water vapor as a GH gas, using the predicted GCM forecasts that the planet will see higher night-time temperatures due to the increase in water vapor keeping more LW radiation near the surface. Witness to this is the fact that 37 U.S. cities and hundreds of other cities across the globe set night-time high temperature readings in 2010, a year that saw a record in precipitation. Based on their simple thermodynamic cyclic process, this result would not be expected, as that additional night-time heat at the surface would surely have been carried away via convective thermal processes and added to the TOA output. This increase in global water vapor, measured over the past few decades, is exactly as predicted by every GCM using well-established and quantified GH physics with the additional radiative forcing caused by the additional accumulation of CO2 and water vapor in the atmosphere. Warmer night-time temps are exactly what one would expect when considering the real-world (i.e. measured) absorption and retransmission of LW radiation by increasing amounts of GH gases in the troposphere. Furthermore, one only needs to step outside on a calm cloudless winter night and then step outside on a similar night with a nice overcast sky to feel the radiative GH effects of the water vapor in those clouds. I would ask the authors this: how would their model explain the warmer night-time ground temperatures as measured throughout the world if not for the LW radiative effects of additional GH gases?
• Furthermore, one only needs to step outside on a calm cloudless winter night and then step outside on a similar night with a nice overcast sky to feel the radiative effects of a smaller delta-T between the Earth's surface and the water vapor in those clouds.
• Some data/citations, please?
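A back-of-envelope sketch of the clear-night versus overcast-night point made just above: the net longwave loss from the ground scales with the difference between the surface temperature and the effective temperature of whatever is overhead, so a low cloud base close to the air temperature slows the night-time cooling dramatically. The 288 K surface, 230 K clear-sky and 280 K cloud-base figures are assumed round numbers, and the emissivities are taken as 1.

```python
# Net upward longwave flux from the ground under an assumed clear sky versus an
# assumed low overcast; all temperatures and unit emissivity are illustrative.

sigma = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def net_loss(T_surface, T_overhead):
    """Net upward longwave flux from the surface, W m^-2."""
    return sigma * (T_surface**4 - T_overhead**4)

T_ground = 288.0
print(f"Clear sky (effective ~230 K overhead): {net_loss(T_ground, 230.0):6.1f} W m^-2")
print(f"Overcast  (cloud base ~280 K):         {net_loss(T_ground, 280.0):6.1f} W m^-2")
```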
A word of caution – a clear night can feel much colder than an overcast one, even if the air temperature, as measured by thermometer, is the same. That’s because your perspiration evaporates more readily in drier air, so the perception of temperature can be largely subjective. 47. R. Gates: So, how do you explain the Medieval Warm Period? • Steven Mosher 1. our knowledge of the extent and amplitude of the MWP is VERY uncertain. 2. The presence of large amplitude warmings is evidence FOR long term natural oscilations, it is NOT evidence against the physics of radiation. 3. The final temp is the result of many forcings, not merely C02. Basically, your comment is OT to the discussion of the physics of the tyndall gas effect • Thank you Steven, I couldn’t have said it better myself, though I would welcome a discussion of the MWP on some other thread, perhaps in the context of Dansgaard-Oeschger and their likely Holocene cousins, the Bond events, a subject which fasninates me to no end… • not to hijack the thread but mostly for my own clarification, can we also agree to the converse: that the existence of the physics of radiation are not evidence against long term natural oscillation? Discussions such as this one may frustrate some, but I do feel they go a long way to clarifying what aspects of the science are clear and where and why there is uncertainty and/or a lack of clarity. Moreover, can we acknowledge basic processes but still differ as to their relative impact, rate and magnitude of change, and, of course, our ability to adapt to the changes they invoke? 48. To R Gates: Yes the model is simple but the point is that it is more complete (with thermodynamics) than a model with radiation only, which is the basic model of CO2 climate alarmism based on a “greenhouse effect” from radiation alone. • I would agree that both forms, thermodynamic and radiative, need to be included in any full understanding of the climate dynamics, but specifically, when speaking to the well-established science behind the GH properties of atmospheric gases, I believe the simple thermodynamic model falls far short, and can simply not explain or predict real world effects of GH gas increases as well as a GCM’s can when considering their full LW absorption/retransmission radiative effects. • Dear Mr. Gates, it is the other way around. The main physical reason for the effect of GH gases is not „back radiation“ , but rather the effect on the TOA balance, which is a decreasing outgoing longwave radiation (OLR), before reaching a new stationary state. “Back radiation” is only an internal energy flux that does not alter the energy content of the earth system. Changing OLR, however changes the energy content. The concept of emission height or “cooling to space” together with thermodynamics/lapse rate is enough to explain the greenhouse effect. Heat transfer by radiation, latent heat or sensible heat is enough. “Back radiation” is a parameter included in heat transfer by radiation ,though. Absorption/Reemission or “back radiation” alone cannot explain the greenhouse effect: I know that there are texts out there that try that, but they stay incomplete. • I agree that back radiation shouldn’t be invoked as the “cause” of surface and atmospheric warming. A TOA flux imbalance is required for the temperatures to change, but the mechanism by which the imbalance is transmitted by the atmosphere to the surface involves back radiation. 
If downward radiation to the surface didn't increase as a result of greenhouse gas forcing and the consequent TOA imbalance, the surface wouldn't warm.
• "…but the mechanism by which the imbalance is transmitted by the atmosphere to the surface involves back radiation…"
• In the context of the greenhouse effect, surface and troposphere warm simultaneously because of the TOA imbalance; we have a radiative–convective equilibrium. The sun warms the surface. The net effect of longwave radiation is cooling to space, integrated across the globe. Back radiation increases with temperature, not the other way round. Back radiation is a parameter in the energy balance of the surface, even though you can measure downwelling radiation. Downwelling longwave radiation can heat a patch of surface if the air is warmer on top of it. However, globally integrated downwelling longwave radiation is more than balanced by sensible heat, latent heat and radiative energy from the surface. Otherwise we would not have a decreasing temperature gradient with height on average.
• Back radiation increases with air temperature, and in turn increases the temperature of the surface. That is how atmospheric heating from an energy imbalance is transferred to the surface. If the lapse rate is linear, the temperature changes equally at all altitudes. In reality, lapse rates may not always be perfectly linear, but the approximation is a reasonably good fit with observations. It is not correct to imply that downwelling radiation only heats the surface if the air is warmer on top of it. It heats the surface even when the air is cooler, as is typically the case.
• To avoid confusion about terminology, my point is that back radiation from an atmosphere cooler than the surface makes the surface warmer than it would be otherwise. The net IR flow is from the surface upward.
• Steven Mosher – Thanks Guenter. You will note however that now the conversation has shifted from Johnson defending his mistakes to you explaining how things really work. They are of course related.
• Are we discussing the CO2 greenhouse effect, or general atmospheric warming? It is important to note that there is no way to determine if "downwelling" IR has been emitted from CO2 or any other atmospheric molecule. All molecules and therefore all gas molecules emit IR. So "downwelling" IR should be expected. But that does prove a net increase in energy, or "greenhouse effect". If you cannot show with a real-world experiment that more CO2 = higher temperature, you fail. "More CO2 = less temperature" Why? Because of specific heat capacity.
• But that does NOT prove a net increase in energy, or "greenhouse effect", I should have said!
• The origin of downwelling IR can be identified by its spectral signature. Almost all will be from CO2 and water.
• Incorrect. The spectral signature is not determined by the substance that emits IR but by the temperature of that substance compared to the surrounding ambient temperature when the IR was emitted.
• Claes Johnson, do you agree with Will, since he seems to be on your "side" of the debate?
• I am on no one's side, Judith. I just happen to know that adding CO2 to the atmosphere does not cause warming. In fact it causes cooling. I have demonstrated it with an experiment. I have given an explanation with supporting references with regard to specific heat capacity.
Further evidence:
• Will, If the experiment you reference is the one in the link given in an earlier comment, "More CO2 = Less Temperature", then you should know that this experiment, even if conducted with utmost care and precision (which I doubt), proves quite the opposite of what you're stating. For the container with pure CO2 SHOULD, by the very processes you claim don't occur, be cooler than the one with "ordinary air", as that "ordinary air" would, I presume, contain ordinary water vapor and, as such, with a much greater percentage of "ordinary water vapor", would naturally show a greater GH effect (assuming of course that all the other variables are the same). In addition the experiment is flawed for many other reasons, for the title states "more CO2 = less temperature," and in such an experiment one would expect to have a control container that is kept under the same conditions as all the others, and then one would expect that the only variable to change would be the amount of CO2 in a series of other containers. One could then produce a series of data points that would show how the temperature of the container varied with the only variable being the change in the amount of CO2. All this aside, I highly doubt that the container with "pure CO2" is indeed that, as one can see condensation on the inside, and since CO2 (under these pressure and temperature conditions) is a non-condensing gas, that condensation is most likely water vapor, so the entire experiment is invalid as the container is certainly not "pure CO2".
• Will – I agree. Another proof: In a scientific argument, the judge is the observation, not the theory!
• Visit the HITRAN database. Each IR emitter has a spectral signature. The temperature of the emitter vis-à-vis its surroundings is irrelevant, and in fact the temperatures are for practical purposes identical – i.e., they exist in local thermodynamic equilibrium (LTE). The temperature of the emitter does influence the quantitative balance in the intensity of one spectral line vs another from that emitter, but the wavelength of the CO2 and H2O lines is almost completely unaltered by temperature – at least within the atmospheric range of temperatures.
• "the wavelength of the CO2 and H2O lines is almost completely unaltered by temperature – at least within the atmospheric range of temperatures." All you need to know:
• Will – Using emphatic language ("Nonsense") doesn't strengthen a case that can't be made. To the extent the site you link to is informative, it confirms my statement. It refers to positions, intensities, and line widths of CO2 and H2O, but with no suggestion that the wavelengths of these molecules' lines are shifted by temperature. Any such change under atmospheric conditions would be minuscule. If you have data to the contrary, link to it specifically rather than citing a long list of article titles.
• Wrong again, Will: the spectrum is determined by the identity of the emitter; however, it cannot emit more at any wavelength than the limit defined by S-B.
• Which is determined by its absolute temperature. Which in turn is determined by its surrounding ambient temperature as per its altitude. Say above 5 km at about −80 °C. As for your comment below: "Not true, O2, N2 and Ar notably in our atmosphere do not!" (emit IR). So that would mean that 99% of the atmosphere cannot cool to space via radiation at the TOA, right? Come on!
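On the "Planck limit" point raised in this exchange: even a perfect emitter at the 220 nm O2 transition mentioned earlier would radiate essentially nothing at atmospheric temperatures, because the Planck function there is vanishingly small, whereas at 15 µm it is large. A quick sketch, with 250 K as an assumed mid-tropospheric temperature:

```python
import numpy as np

# Compare the blackbody (Planck) ceiling on emission at a UV wavelength and at
# an IR wavelength for the same assumed atmospheric temperature.

h, c, k = 6.626e-34, 2.998e8, 1.381e-23

def planck(wavelength_m, T):
    """Blackbody spectral radiance, W m^-2 sr^-1 m^-1."""
    x = h * c / (wavelength_m * k * T)
    return 2 * h * c**2 / wavelength_m**5 / np.expm1(x)

T = 250.0
b_ir = planck(15e-6, T)    # near the CO2 band
b_uv = planck(220e-9, T)   # near the O2 UV transition

print(f"B(15 um, 250 K)  = {b_ir:.3e} W m^-2 sr^-1 m^-1")
print(f"B(220 nm, 250 K) = {b_uv:.3e} W m^-2 sr^-1 m^-1")
print(f"UV/IR ratio      = {b_uv / b_ir:.1e}")
```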
• Judith Curry wrote in her posting: "I was hoping to put to rest any skeptical debate about the basic physics of gaseous infrared radiative transfer." We see again that there is little sign of that becoming true.
• Downwelling IR is not "Backradiation". Downwelling IR does not add energy to the system because it is energy which is already present. It does not cause a net E increase. Let us leave the subject of downwelling radiation there. The so-called "Backradiation" is the energy we expect to find from the claimed "greenhouse effect". The ability of a substance to absorb/emit, or radiatively transfer, IR does not say anything about its ability to store that energy. Increasing CO2 increases the radiative transfer properties of the atmosphere in the far infra-red region. How is that even remotely like a "greenhouse effect"? How does a decrease in the overall resistance of a poor conductor such as air produce an increase in temperature? It is unphysical. It is the opposite of reality. "The physics of deep convection have been formulated since 1958 and are based on sound thermodynamics and measurements on location. The trends of the temperature in the high atmosphere in the last half century are very negative, starting at the height where the convection reaches. That means that more CO2 has a cooling effect rather than a warming effect." See here:
• Correct, now you're getting it, which is precisely why the change in CO2 concentration is so important (999,645 ppm of the atmosphere does not absorb or emit IR).
• Phil, you are silly. ALL substances above 0 K emit IR. That is not controversial physics. Your misleading statement has been repeated many times by the warmists but repetition cannot make it true. Why are you here making such false statements and clouding the issue?
• Steven Mosher – Wrong. Please tell me you have nothing to do with the design of aircraft, sensor systems, or other devices meant to protect our country. Start with this design guideline.
• LOL – Very good Mosh. As a former weapons instructor I appreciate why you attached this citation. Those rocket scientists certainly knew a thing or two about missile guidance, CO2 and the IR spectrum.
• Another non-sequitur, Steven?
• All molecules and therefore all gas molecules emit IR.
• Imho Johnson's first chapter is quite good, and is consistent with the explanation Guenter gives. Chapter 2, on the other hand, is…hum, well, it is clearly inferior to classical black body radiation, which is the most polite thing I can say ;-) The problem is that within the troposphere heat is exchanged by both radiation and convection (with latent heat release); only conduction can mostly be ignored. So no simple model, either purely convective or purely radiative, is complete. However, all flux analyses I have seen show clearly that more heat is transported by convection (and a lot more when latent heat release is present) than by radiation. It follows that, if a simple model including only one heat transfer mechanism has to be chosen, it is better to use a convective one. Moreover, the convective lapse rate is a stability condition, so I see it (and, from what I gather, classic climatology above the "atmosphere = rigid shell" level sees it the same way) as a limit on the temperature gradient that cannot be exceeded, for stability reasons. It thus makes sense that one can derive a maximum ground temperature from the TOA temperature using this lapse rate, without knowing exactly how large the heat flux is. Above the TOA, we have radiative transfer, so we know the TOA temperature from the S-B law.
Heat flux is then determined by conservation of energy, convective heat flux is just what is missing to ensure equilibrium. The only error I see with this model is that it is too simple: 1D, and it does not take into account the fact that radiation is diffuse, so all radiation to space does not occur at a precise TOA level, it is only an average notion. But still, compared to simple shell-like purely radiative atmosphere (1D also, all those shells and the earth are considered perfectly conductive in the horizontal directions), the model with the lapse rate is head and shoulder above: at least it does not neglect the largest heat transfer to keep only the smaller radiative one because it is tractable. This is one of the biggest error in climatology vulgarisation: the CO2 blanket is completely wrong, but it may be enough for those allergic to science/mathematics. The radiative shell (or multishell) models are mathematically complex enough do deter those one, and thus is presented as a simple but usefull model. It is not, it is almost as wrong as the CO2-reflective blanket, and frankly, it paint a very poor image of climatology for those scientifically-minded enough to understand it, but who start to evaluate it compared to an earth-like atmosphere … • The first chapter has a major error in assigning the 10 C/km lapse rate to radiation while also referring to it as the dry adiabatic lapse rate. Radiation has nothing to do with the 10 C/km dry adiabatic lapse rate. A radiative equilibrium is isothermal, not isentropic. This mess confuses the whole later argument about lapse rates. • Jim D Bad news and good news First the Bad news …….”a major error in assigning the 10 C/km lapse rate to radiation while also referring to it as the dry adiabatic lapse rate. “…… The dry adiabatic lapse rate is given by dT/dh = -g/Cp Where g = Gravitational Field Strength Cp = Heat Capacity. In other word the temperature acquired by air molecules after contact with the surface drops by almost 10K per Km of ascent. Now in the case of the dry adiabatic troposphere although water vapour may be absent, CO2 being well mixed should be there as usual. However it seems to play no part that I can see. Even more alarming, in this Nasa description of the atmosphere with various conditions specified there is no mention of greenhouse gases! Surely the radiative effects of CO2 must get at least a tiny mention, shouldn’t they? Now the good news The greenhouse theory has been banished to the TOA. The radiative gases radiate long wavelength EM radiation to space to attempt an overall radiative balance for the Earth. It acts like the drain hole at the bottom of the bath. The Sun acting like the water flowing from the bath taps. If the drain hole is too narrow, water level rises(temperature); if too wide temperature falls. Now back to a dry atmosphere; the temperature lapse rate will still fall at 9.8K/km in the troposphere. The net effect then of changing CO2 and H2O vapour is to move the tropopause up and down. Now this truncated version of the Greenhouse Theory is one that I think is very plausible. • In some way we can agree that the tropospheric lapse rate is fixed by the dry and moist adiabatic lapse rates, and therefore its whole temperature profile is linked to the surface temperature, which is in turn affected by a radiative balance. 
CO2 can’t change the lapse rate, which is based on physical constants, such as g, cp, latent heat constant, gas constants, etc., but can only affect the surface temperature to raise the effective radiating level of GHGs. The troposphere’s only degree of freedom is the surface temperature in this simplified model that represents CO2 effects in one atmospheric column. • The lapse rate is determined by thermodynamics of moist air as long as there is a sufficient heat flow from the surface to the upper atmosphere to keep the real lapse rate at the adiabatic limit. That requires that the surface is warm enough to release the required amount of energy excluding that part that escapes through the atmosphere without being absorbed. The heat flow is a combination of radiative transfer, convection and advection of latent heat. Convection is the part that guarantees automatically that the temperature gradient cannot exceed the adiabatic lapse rate. Therefore the strength of the radiative transfer does not influence the result as long as the surface is warmed so strongly that the adiabatic lapse rate would be exceeded without convection. Adding CO2 influences the situation in at least two ways. The first is due to the reduction in the amount of energy that escapes without being absorbed. Due to this effect less energy is leaving directly from the surface. The same applies also to the low clouds. In equilibrium all this reduction must be compensated by increased radiation from the upper atmosphere and increased heat flow from the surface to the upper atmosphere. The second effect occurs around tropopause. The increased CO2 concentration moves the effective radiating altitude of CO2 higher up. Combining both effects we notice that the radiation that escapes from the upper atmosphere must be both stronger and originate higher up. Both requirements lead to an increase in the temperature of the atmosphere at a fixed altitude if upper troposphere near tropopause. The two effects are separate. The first comes from the increase of CO2 at lower altitudes, the second from its increase at tropopause. My understanding is that the first effect is stronger than the second, but I have not done any calculations to support this conjecture. • Pekka Look at a description of the broad outlines of the atmospheres structure with particular emphasis on the troposphere. There is no mention of the greenhouse effect. The effect of water vapour is explained through the mechanism of latent heat. Of course CO2 and H2O radiate in the IR. It just doesnt seem to be that important. • Bryan That is a description of certain issues. That something else is not mentioned there is not an argument against that. I didn’t notice anything there that would in some way contradict what I wrote here or in numerous other messages on this site. It is also dishonest to pick one sub-chapter from the tutorial stating that it does not discuss greenhouse effect when the previous sub-chapter does discuss it. I think you might try to avoid being dishonest. • Pekka …”I think you might try to avoid being dishonest.”.. I try to avoid using language like that. I have no way of knowing how honest you are but I give you the benefit of the doubt. I was genuinely surprised when I came across the NASA document. Beforehand I would have thought that the radiative effect of CO2 would have to be accounted for even in a dry adiabatic Earth atmosphere. In fact it would be a good experimental method of isolating the CO2 effect from the H2O effect in the limit. 
There seems to be a growing body of opinion that the radiative effects of CO2 are either minor or self cancelling. A number of IPCC advocates are now promoting this and say the real and significant greenhouse effect is to be found at TOA. • To put it simply, CO2 affects the absolute temperature, not the lapse rate in a dry atmosphere. This is why it is important. It displaces the whole temperature profile according to its radiative effect. • Bryan, I have become less polite to you after your baseless insulting comments towards me some times ago. I told you that the previous sub-chapter of the same tutorial tells that the CO2 is important. Why do you neglect that and choose to concentrate on the next, which discusses other things. If you find the chapters contradictory, the fault may be in your understanding of the content and its significance. For that the only help comes from studying the basics. Trying to make guesses from more advanced texts (even when they are tutorials like in this case) leads often to such misunderstandings that are visible on this site all the time. • Jim D I think we are in close agreement about the broad outlines. On the dry adiabatic atmosphere I used to be a bottom up advocate. Surface temperature determined by Sun/Earth interaction. Gravity giving rise to lapse rate of 9.8K/km. This very simple structure then modified by convection, latent heat and radiative effects till the convective impetus petered out at the tropopause. Above the tropopause the radiative effects adjusted to keep the Earth energy in/out in balance. However recently I find the top down approach quite compelling. The TOA conditions acting like a gate. The consequences of the gate being too narrow being passed back down by the same dry adiabatic lapse rate to determine the surface temperature. • Pekka I’m sorry if I addressed you in a way that you found disrespectful. I think I used the word IPCC apologist rather than my usual term IPCC advocate so I must have been loosing my cool. I think that one undisputed plus for Judith’s site has been to tone down the insult level. However if you are a sceptic you have to develop a much thicker skin. For a laugh go onto a site like Deltoid and pretend to be Nasif Nahle. You wont get out alive! • Bryan, The net discussions are often difficult. Short messages cannot always transmit the tone correctly. Some of the participants are provocative by purpose, and some others write claims that they know to be false, even deliberate lies. In climate science and in particular in the physics behind the climate science there is very much that I have full confidence in based on my schooling and understanding based on that. There are many other things I have much less confidence in and also conjectures that I consider more likely to be false than true. In these discussions I comment most often on issues I am certain about. Trying to do that as well as I can and getting answers that show no evidence on willingness to learn, is often frustrating and leads to doubts about the goals and even honesty of other participants. All concrete hints to the same direction strengthen these suspicions. At the same time I know perfectly well that many points are difficult and cannot be verified personally without specialized education. I try to stay polite, but sometimes it leads to a point, where I start to think that I am played with and that I am making fool of myself unless I react strongly. 
I know that this is going to happen also in the future, if I continue to comment on climate sites. • Bryan, I think the dry adiabatic atmosphere can be thought of from both perspectives, top and bottom, which both lead to a requirement that the whole temperature profile is displaced in the warmer direction when CO2 is added. My view is that more CO2 initially reduces outgoing IR but also causes the surface to warm, which in turn convectively forces the atmosphere to warm, increasing the outgoing IR till it balances again. • I just came across this discussion, and since it was a discussion rather than an argument, I thought I would offer my perspective. In general, a TOA radiative imbalance due to impeded loss of IR to space is translated into more energy at each layer, ultimately impacting the surface temperature. In turn, this further warms the atmosphere over time as the surface temperature rises. The immediate result of atmospheric warming is an increase in lapse rate beyond the adiabat due to greater warming at low than at high altitudes. This results in static instability that triggers a convective adjustment restoring an adiabatic profile (which in most regions eventually proves closer to a moist than dry adiabat due to latent heat transfer with release at higher altitudes). The radiative changes are very rapid. The dry convective adjustment (according to Andy Lacis) is slower, and the full change including the latent heat effects occurs over many days or longer. The “super-adiabat” would tend to enhance surface warming because of the higher lapse rate. On the other end, the moist adjustment creates a negative lapse rate feedback that reduces the warming effect. This, however, is accompanied by a positive water vapor feedback, and the combined water vapor/lapse rate feedbacks are generally computed to show a net positive effect. • Fred, are you finally going to tell us about the hot spot? I believe you need a hot spot for there to be any appreciable top down warming don’t you? • kuhnkat, Is this really so difficult? Nobody claims that there would be warming in the sense you imply – nobody at least of people supporting main stream climatology. Therefore there is absolutely no need for such a hot spot. This is not in contradiction with the fact that atmosphere radiates to surface and contributes to a temperature increase. If you do not understand the point after all these discussions and hundreds of messages where it has been explained in different words, then I propose looking in the mirror. • Pekka, you are a very reasonable, intelligent, respectful person. I respect you for your knowledge and comportment. Unfortunately I am often none of the above. Frank started discussing heating at elevation which is caused by bottleneck in IR emissions. He did not give a mechanism for the purported bottleneck. He also talked about heating from the top down. With emissions bottlenecks, heating from top down, backradiation, and eventual heating of the surface, exactly what am I supposed to assume he is talking about?? I have actually read explanations of this effect and have always been confused about how the bottleneck comes about. The statements seem to say that the heating will raise the effective emission altitude as the heated atmosphere expands. As the new higher altitude is supposed to be cooler than the old average altitude less IR can be emitted. Hopefully you can clear this up for me. 
If the atmosphere expands from warming, doesn’t that say the higher altitude will be about the same temperature as the old altitude? That is, the altitude will average higher but the temp will be about the same because everything is warmer. If we are saying that this warming will not happen it would seem to me that the temperature is more controlled by the lapse rate and convection, in which case there will be no significant warming in the first place without major perturbation. Thank you for any clarification you can give on this “hot spot” issue. • kuhnkat, Nobody of us is capable of always finding clear expressions for his messages. While many issues are not really complicated, they involve anyway numerous details and attempts to explain the issues in limited space and simpler language requires leaving something out. All too often happens that just those things left out are for some reason in the mind of the other party of discussion. Another problem is that the concepts are not defined precisely. What means “warming a body”? In these discussions some participants expect that the effect that warms must be the final source of heat or energy that rises the temperature to its final value. A colder body can never do that for a warmer one. Many others mean by the sentence “body A warms body B” that taking the A away would lead to a colder B. This is very often possible even when A is colder than B, if B is heated also by some other source. I have still difficulties to understand why this second way of interpreting “A warms B” is not understood by everybody. I commented to the most recent post of Judith that many people can much better form general views on issues than present scientific type arguments in their support. It is very common, that the role of detailed arguments is overvalued. They are overvalued often both by those who are competent in presenting them and by others for whom a more general intuition works much better. This is also a source of dispute and confusion, when people are sure that they are right in the main issue, but cannot justify it by a detailed arguments. There is too much belief that detailed arguments are the way of winning argumentation, even when that does not work at all. In climate issues this fact comes up all the time. Even for experts a more general and intuitive approach may give more reliable results than trying to prove by detailed arguments when not enough is known about those details. • You certainly nailed it! Very good. This may be because it’s now past midnight in Sweden. 49. Judith, I want to comment that I am increasingly an admirer of your approach, especially on this technical thread. By letting others take a turn at being the authority, people seem to come to more openly examine their own ideas and knowledge – including errors. By just minding the store, wrong assumptions and weak knowledge claims are brought to the surface by others, instead of driven underground by your authority. It’s a better learning process than confrontation. 50. To complement the many comments made above indicating that the radiative transfer principles contributing to the greenhouse effect, including the role of back radiation (downwelling longwave radiation) are consistent with the laws of physics, it’s worth pointing out that the back radiation predicted from these equations has been confirmed by measurement. 
For a general overview, readers should revisit the Radiative Transfer Models post to review the links Judith Curry has cited, with particular reference to the Atmospheric Radiation Measurement (ARM) program – the post is at Radiative Transfer For a particularly informative description of the ARM program, see – ARM Prrogram 51. Claes, You write: “Let us now sum up the experience from our analysis. We have seen that the atmosphere acts as a thermodynamic air conditioner transporting heat energy from the Earth surface to a TOA under radiative heat forcing. We start from an isentropic stable equilibrium state with lapse rate 9.8C/km with zero heat forcing and discover the following scenario for the response of the air conditioner under increasing heat forcing: 1. increased heat forcing of the Ocean surface at low latitudes is balanced by increased vaporization, 2. increased vaporization increases the heat capacity which decreases the moist adiabatic lapse rate, if the actual lapse rate is bigger than the actual moist adiabatic rate, then unstable convective overturning is triggered, 4. unstable overturning causes turbulent convection with increased heat The atmospheric air conditioner thus may respond to increased heat forcing by (i) increased vaporization decreasing the moist adiabatic lapse rate combined with (ii) increased turbulent convection if the actual lapse rate is bigger than the moist adiabatic lapse rate. This is how a boiling pot of water reacts to increased heating.:” I think your model is incomplete, since the “heat forcing” as you name it is external and you describe only energy flux that is internal. “Heat forcing” increases the energy content of the earth system and therefore leads to increased temperature on the long run to decrease your so-called “heat forcing” by increasing outgoing longwave radiation (OLR). Your model leads necessarily also to increased temperature. You describe radiative-convective equilibrium as well. So what is different in your model compared to the classical model Best regards 52. “If they are wrong, prove it” Done already 53. One thing that always puzzles me when IR and the GHE are discussed is why on a nice clear summer day in Atlanta I don’t melt. I mean, we supposedly have an AVERAGE downwelling radiation of 324 wm-2. I would imagine that the downwelling radiation at noon on a humid day in Atlanta would be higher than the average due to all the water vapor in the air. Let’s make it 25% higher, or 405 wm-2. Now, let’s add the sunshine, which is certainly greater than 900 wm-2 at noon. So we now have 1305 wm-2 on my greybody. Using the SB equation, with emissivity of 1, that translates to 116 C. Something doesn’t add up. • Hi Jae… Excelent observation! You could calculate the energy the human body would absorb, from those 405 W/m^2, by knowing that it has an average absorptivity of 0.7. Imagine the hard work the body would perform for getting rid of that excess of energy! • A black body radiates 400 W/m^2 at a temperature 0f 17 C. It’s the sunlight that would cause a problem for an object unable to shed heat via perspiration, reflection, conduction, or respiratory heat loss. With 900 W/m^2 absorbed, its temperature would equilibrate at 82 C. • At an ambient temperature of 40 °C, a normal human body absorbs 43.4 W. That figure represents an intensity of 160.71 W/m^2. However, Jae mentions c.a. 1305 W/m^2 the energy emitted by the atmosphere, if the stuff of backradiation were true. 
Fortunately, as Jae points out in his post, it’s not true because, if it were true, the human body would absorb the dizzying amount of 913.5 W, which would represent an intensity of 3,383 W/m^2. On the other hand, if you are considering an idealized blackbody emitter, emitting 400 W/m^2, then the human body would absorb 280 W, which corresponds to an intensity of absorption of 1,037 W/m^2. Now, let’s consider a blackbody-ambient at 17 °C; the human body would be losing, not gaining, 23.17 W (-23.17 J/s), which corresponds to -85.82 W/m^2. • Your figures aren’t well explained. The average human has a surface area of about 1.7 m^2, so I’m not sure what you mean when you imply that 160 W/m^2 corresponds to 43.4 W absorption. More importantly, an ambient temperature of 40 C is very hot (and represents much higher than average back radiation). It is equal to a Fahrenheit temperature of 104 F, which is very difficult for humans to tolerate on a sustained basis, although they can adapt temporarily through sweating and panting. It is incorrect to state that 1305 W/m^2 is emitted by the atmosphere. Most of that figure comes from the assumed value of 900 for sunlight, which would be an immense problem for an individual who could not adapt, and would be unsustainable for any extended period. Back radiation has little to do with it. Finally, in the example I gave, which you cite, of ambient temperature at 17C, this is easily tolerable, because human metabolism generates enough heat to compensate for the heat loss. In fact, tolerable climates for humans require some degree of heat loss to the environment, because we can’t shut down our metabolism, and so if we couldn’t lose heat, we would quickly die. In essence, the values I gave in my earlier comment are correct, and the most significant problem in the cited example is the sunlight. • Fred: I originally thought you digged the conversation, bro., but it appears that you don’t have a clue! • Dear Fred, 0.27 m^2 exposed to radiation, unless it is naked. 40 °C is a usual temperature, here, during summer. The average absorptiviy of the skin, in a normal human being, is 0.7. I never said you’re wrong. I only made the calculations for the conditions you specified in your post. At 17 °C the human body would lose 23.17 W of energy, which would be transferred to the environment. It would be a problem if we were endothermic organisms. Fortunately, we are self-regulating thermodynamic systems; otherwise, we should spend many hours under the sunbeams, as lizzards, for example. Now, if you say that a blackbody at 17 °C is emitting 400 W of thermal energy, how much Watts it would emit in my location when the temperature can reach, easily, 40 °C in summer? • Nasif – A true black body at 40 C (313 K) would radiate about 544 W/m^2 in accordance with the SB equation. Humans can’t afford to sustain a body temperature of 40 C for very long. At 37 C body temperature, they lose heat by all the mechanisms I mentioned above, not just radiation. I’m sure humans can tolerate an ambient temperature of 40 C for intervals, but I doubt they can tolerate it for a very long sustained period, day and night, without some exogenous cooling source, such as drinking cold water. • Dear Fred, Exactly! An idealized blackbody at 40 °C would emit 544 W/m^2, which is not the case if we consider the real system atmosphere-lithosphere. 
The external operator, for the case of my location, where we undergo up to 40 or higher degrees Celsius during the summer daytime and 30 or more degrees Celsius through the nighttime (and, believe me, we have survived it through many days), cannot be other but the Sun, and you will agree on this because the atmosphere cannot “store” such a load of heat. Primarily, because the absorptivity of the whole atmosphere, including a 4% of water vapor, is quite low (on the order of 0.01 when considering the mean free path length of photons and the time they spend to leave the Earth’s atmosphere). That’s why I sustain that the current models on TOA are absolutely flawed. • I’m not sure what your point is. The emissivity of the atmosphere in the IR range of greenhouse gas emission and absorption is certainly less than unity, but although the emissivity of any small atmospheric layer, even near the surface, is small due to the low concentration of greenhouse gases, the total downwelling longwave radiation comes from multiple layers and is substantial. Radiative transfer codes derived from the Schwarzschild radiative transfer equations, in conjunction with observed values of CO2, H2O, and surface temperature, yield values for both OLR and downwelling radiation that match observations very well, confirming the validity of the principles on which they are based. • Fred… I’m referring to the time that a photon takes to abandon the atmosphere, as wide as it is, and to the distance that a photon can travel without touching an air molecule, one of those molecules that can absorb it or scatter it. From the databases of both parameters, we find that the emissivity of the air, as dense as it is and with 4% of water vapor included, is 0.01; no more. The atmosphere is not a blackbody. Perhaps those observers of the downwelling radiation are observing other things, rather than any downwelling radiation? • Here is the reductio ad absurdum. A human body is like a black body at 37 C which emits about 525 W/m2. Now according to this theory proposed above, nothing can emit towards a human body that isn’t as warm as it, so when you go out at night you are losing heat at 525 W/m2. Wouldn’t you cool down really fast even on a balmy night with a 20 C ground temperature? The fact is, everything emits towards everything else regardless of relative temperature. We do have incoming radiation to us at night even from the cooler ground. Go out and try it. Explain how this is different from the atmosphere radiating towards the warmer ground. • Wow! Jim! You have got rid of S-B Law! Please, tell me, are you related in some way to the Hockey Stick producers? Besides, you made us, humans, real blackbodies! Jim, a human body has a temperature of c.a. 37 °C. If it (the human body) is exposed to an environment at 17 °C, it would lose 23.17 W, i.e. the energy transferred by radiation from the human body to that environment at 17 °C, according to the S-B Law derived formulas. No more. The formula is quite easy: Q = e (A) (σ) (Te^4 – Thb^4) Where e is the emissivity of the system (human body in this case), A is the exposed area of the human body, σ is the Stefan-Boltzmann constant, Te is the ambient temperature in K, and Thb is the average temperature of a normal human body. Go on, make your calculations. • See, you now have the ambient air radiating towards the body when the slaying book says it can’t because it is colder. • Jim… I’m not having the air radiating towards the body, but quite the opposite.
The body is losing energy, not gaining it from the environment. Under those conditions, the body is pushed to generate more thermal energy, from metabolism, to maintain his energy state in a quasi-stable state. In summer, only when the environmental temperature is higher than the body’s temperature, the body gains energy from the environment; however, the thermoregulating system starts working to get rid of the excess of thermal energy absorbed. If you applied the S-B formula correctly, you had to obtain a negative result, which means that the body is losing energy, not gaining it; the body must generate more energy through the cellular respiratory process and other mechanisms for not cooling off, in this case. • The Te term in your equation comes from back-radiation is all I am saying. If you believe your equation, you implicitly agree with back-radiation. I am not saying your equation is wrong, I am saying it proves back-radiation exists. • The Te is the temperature of the environment and it comes from the energy it has absorbed from the surface. e (A) (σ) Te^4 Heat received from surroundings e (A) (σ) Tb^4 Heat emitted by body • Jim and Phil… You’re much confused. It’s the energy from the human body to the environment… Have you noticed that the energy flows ALWAYS from the warmer system to the colder system? Backradiation doesn’t apply because it is the human body what is radiating, not the environment. Again, for this case, the human body is LOSSING energy, NOT gaining it. • No, Thb is from the body to the environment, Te is from the environment to the body, which is why they have opposite signs. Since the environment is colder than the body, this is the term the slaying book says should be zero. We clearly agree the book is wrong on this matter. The environment is preventing the body from losing heat at an unrealistic rate of 525 W/m2 in the same way as the atmosphere prevents the ground from losing heat at an unrealistic rate (where a similar formula applies with Thb being from ground temperature, Te from the atmosphere). • LOL… Thb is temperature of the body in K, and Te is temperature of the environment in K. :) If you read well my posts, I’m always referring to an “idealized” blackbody. Got it or start again? • Nasif, you have already contradicted slaying the dragon by having the Te term, but you haven’t realized it yet. I suggest you argue with those authors about that term. I am not arguing about it. • I’m afraid it’s you who’s confused Nasif, the environment radiates according to its temperature and the body absorbs it, the body also radiates according to its temperature. The net effect is that when the body is warmer than its surroundings the body loses heat (when the environment is hotter than 37ºC the body gains heat). The environment doesn’t stop radiating because the warmer body is present, ‘back radiation’ is always present. that’s what the term, e (A) (σ) Te^4, represents. • Nope, confusion is on your side. I’m afraid you think the environment is never colder than your body. The formula is the S-B equation, and you’re blatantly misinterpreting and twisting it, as usual in AGW idea. • Good Grief, why dont you give it a rest. Nahle is correct. • Phil, you’re absolutely wrong. If you eliminate the term Tb^4 from the formula, you would be referring to the energy of the atmosphere. It has nothing to do with “energy received from surroundings”. You have only one term, the temperature of the environment, and it is the result of the FLOW of energy IN the environment. 
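To make the disputed bookkeeping concrete, here is a minimal sketch of the formula as written above, Q = e·A·σ·(Te⁴ − Thb⁴), evaluated for a 37 °C body against a 17 °C and a 40 °C environment. The area and emissivity used here are placeholder parameters (the commenters above use different values, 0.27 vs 1.7 m² and e = 0.7 vs 1); the sketch only shows that both gross terms exist and that the net always runs from the warmer side to the colder side.

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiative_exchange(T_body, T_env, area, emissivity):
    """Gross emitted, gross absorbed, and net radiative power for the body (watts)."""
    emitted = emissivity * area * SIGMA * T_body ** 4    # body -> environment
    absorbed = emissivity * area * SIGMA * T_env ** 4    # environment -> body
    return emitted, absorbed, absorbed - emitted         # net > 0 means the body gains

T_BODY = 37.0 + 273.15
for t_env_c in (17.0, 40.0):
    emitted, absorbed, net = radiative_exchange(T_BODY, t_env_c + 273.15,
                                                area=1.0, emissivity=1.0)
    print(f"environment {t_env_c:4.1f} C: body emits {emitted:6.1f} W, "
          f"receives {absorbed:6.1f} W, net {net:+6.1f} W")
```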
• To Phil… Answer this question for me: what could the value of “e” be in the formula that you say is “energy received from surroundings”? If you are referring only to the temperature of the air, then you have to introduce the value of “e”, and in the case of the human body, you have to introduce the value of “e” for the emissivity of the human body. It’s very simple. You’ve dissected the formula and you’re referring to two different things. • To Phil… I suggest you look up Kirchhoff’s Law; as the heat is being absorbed it should be ‘a’ not ‘e’, however a = e. • @Phil My question: “To Phil…” Phil’s answer: You didn’t answer my question… Yours is blah, blah, blah. I repeat, it is the S-B equation. Check your books. If your environment has e = 1 and a = 1, you’d be scorched… Mmm… • Fred?? Everyone keeps telling me that we ADD all incident radiation, no matter where it is from, to determine what the temperature should be. What are YOU saying here?? • What I’m saying is the Noel Coward song line – “Mad dogs and Englishmen go out in the midday sun.” • Is THAT your scientific basis for all of your comments? • jae, absolutely correct. Another simple example of how the purely radiative calculation as proposed by Pierrehumbert and other climate scientists gives completely the wrong answer. The correct answer, of course, is that you need to take into account other mechanisms of heat transfer such as convection and evaporation. This point has been made dozens of times on all these threads. • Bruce Cunningham There is convection etc. within the Earth’s atmosphere, but the only way that heat (energy) is released from the Earth to outer space is through radiation. Reflection of incoming solar radiation by clouds is the big question. Until someone can accurately define how this changes the amount of incoming energy, predictions of future temperatures cannot be accurately calculated. • No sweat? 1) Water vapor in the air changes from 1% to 4%. 2) CO2 in the air is about 0.038%. 3) Since the industrial revolution, the proportion of CO2 in air has increased by 0.01% (from 280 to 380 ppm). 4) Both water vapor and CO2 are greenhouse gases. 5) A natural change of 3% in water vapor in air does not cause global warming. 6) How can a change of 0.01% in CO2 (3/100th of the natural change in water vapor) due to human use of fossil fuel cause global warming? • Your question is somewhat off-topic, but relative humidity has not increased over the past century, while CO2 has risen almost 40 percent over its pre-industrial concentration. Water vapor, in fact, has such a short atmospheric lifetime that its absolute humidity value cannot remain elevated in the absence of some other factor that causes the atmosphere to warm and thereby retain more water. That is why it operates as a feedback mechanism amplifying warming mediated by CO2, solar increases, or other forcings rather than acting as a forcing in its own right. If the average relative humidity had in fact increased by 300 percent, the warming would have been immense. I believe this has been discussed in the threads on feedback and on climate sensitivity. You might want to review the previous discussions before proceeding further, so as not to repeat material already covered. I don’t agree with the “short atmospheric lifetime” argument regarding water vapor. Does the temperature not drop every night, that is, every half a day? Is the “atmospheric lifetime” of water vapour less than half a day? Do not tell me what I discuss here. This is not your blog! • Girma Interesting point.
In light of what Dr. Curry has said elsewhere: ..could you help me out by relating what you say directly to the topic at hand, and how the two connect? Much obliged. • Thanks, Bart. I tried to say it very tactfully, but your direct approach is better. • Please take discussion of water vapor to the Pierrehumbert thread. • Derry MCCarthy you might find an answer of sorts here by Robert H. Essenhigh, Department of Mechanical Engineering, The Ohio State University, Columbus, USA. In press in the journal ‘Energy and Fuels’, but now available at ACS website • Derry… The residence time of carbon dioxide in the atmosphere could be as long as you wish… The important thing here is that the lapse time for the thermal energy to stay in the atmosphere is quite low: 0.0097 milliseconds! The mean free path lenght of one photon of thermal energy is 21 m. Besides, from experiments realized by many physicists, at its current concentration in the atmosphere and under the current physical conditions of the atmosphere, the carbon dioxide cannot absorb-emit more than 0.002 of thermal energy. • “carbon dioxide cannot absorb-emit more than 0.002 of thermal energy.” Hi Nasif. Is this figure a percentage? If so, what is it a percentage of? Surface emission? • As you rightly imply, it cannot. 55. The bottom line in all this is that there is absolutely no proof–or even a reasonable demonstration–of an “atmospheric greenhouse effect.” All planets/moons with an atmosphere have a surface temperature that is much higher than the SB equations–based on the IR from those bodies–at about 100 mbar–suggest. It is high time that the “climate science community” HONESTLY faces the questions that are posed by the skeptics (and stop with the dishonest, unconvincing, meaningless, disgusting, and typically liberal insult of “denialists). The “community” has already lost the public and only has politicians and rent-seekers on its side. The smart ones are already publishing papers refuting the stupid, ever-present “catastrophe” of our times (aka, Chicken Little). Grow up! 56. Fred Molten, one can make an interesting thought experiment about “back radiation”. Let’s assume we have the earth system as a stationary state with 280 ppm CO2, well mixed. Normal lapse rate. In the first case, we bring in a thin layer of CO2 that contains a similar amount of CO2 compared to the whole atmosphere in a thin layer next to the surface. In the second case, we bring in a thin layer of CO2 that contains a similar amount of CO2 compared to the whole atmosphere in a thin layer next to the top of the atmosphere. Both layers are equilibrated with respect to temperature. “Back radiation” is highest in the first case, but surface temperature is lowest. It is the emission height that counts. As I said, it is the cooling to space that rules. That is, why I don’t think “back radiation” is a necessity to explain the greenhouse effect. Best regards • I don’t believe there is any way to warm the surface without back radiation. In its absence, radiative imbalances in the atmosphere would change atmospheric temperature but not surface temperature (except for the minimal effects of conduction). Regarding your thought experiment, my assessment is the following, at least at first consideration. If we ignore water vapor as well as non-radiative phenomena, I believe that the same number of CO2 molecules will absorb the same number of photons, regardless of altitude. 
At equilibrium, they will emit as much energy as they absorb, and the temperature of that layer will therefore rise until it suffices for that emission to occur. For the high altitude case, this would cause a temperature inversion such that temperature is much higher at the height of the absorbing layer than it is below. This is clearly an unphysical situation, but something vaguely similar occurs in the stratosphere, where ozone absorbs solar UV, resulting in a temperature inversion. There may be other factors that I’m ignoring in addressing your thought experiment, but my first paragraph rather than the second is what I would emphasize – the surface can’t warm unless it receives the radiation needed to warm it. • There is only one way the physics really works, but there are many ways of putting this into words and more than one way of formulating the equations used to calculate the correct results. There are no limits on the number of ways the physics can be misrepresented and we have already seen pretty many in comments on this site. Countering these erroneous claims is made more difficult by the fact their details may well be in agreement with some of the correct descriptions while the errors are in putting these pieces together. Some of the erroneous theories are pure nonsense from start to end, but not all of them. There is a continuing argumentation on whether one mechanism can heat an object which is actually receiving heating through many processes or from many sources. Then one may claim that any single process cannot heat it, if the processes are individually weaker than cooling of the object. Such arguments are presented as if all heat sources would not add up whatever their mechanism is and as if each of the heat sources would not have its share in the total heating. How can this kind of argumentation be supported by so many? 57. Claes Johnson, Your statement that “back radiation” is fictional, a figment of the imagination for any length of time longer than a fraction of a second, I totally agree. I will read your paper (book) as I get time and I might not totally agree with the methods you use to describe this. Maybe so. I have always viewed “back radiation” as a null operator: — 2 units or energy leaves a surface cooling the surface by that 2 units. — That 2 units are absorbed by molecules (GHGs) warming the gases locally. — 1 unit of energy is radiated to space and lost to the system and also cooling the gases by 1 unit. — 1 unit is radiated back to the surface to be reabsorbed warming the surface by 1 unit and also cooling the gases by 1 unit. — NET EFFECT: In the end the surface has cooled by 1 unit and 1 unit is lost to space, all in a few milliseconds. All other effects have totally cancelled. One way to view this is a reduction of effective emissivity of the surface by at factor near one half. That seems very close to your initial statements I was reading and I agree, there is no real warming. After reading onward I may not agree with the exact methods you use to place this effect into a physics framework but I will read it, that takes time. • Yes, Wayne, but if the surface was at a temperature that demanded it radiate 2 units and it only radiated a net 1 unit, then it is not in equilibrium anymore and its temperature must go up. (I think I got that right, I normally lurk on the technical threads and keep my head well down!) 
Regards, Rob • Hi Rob, I started to write you a detailed explanation, but after reading many of your comments, I’m afraid it would be pointless if you are not able to take my example above and limit it the exact case I gave. The two units must be radiated upward, those two are not all radiated upward (your injection of temperature), and those two units must be absorbed and not transported directly to space without absorption (window). If you can not grasp even that simple example there probably is no hope of you understanding Dr. Miskolczi’s methodology he used in his latest papers and which is very close to my example above. Kind regards. I like to lay low too. Open your mind, the AQUA AMSU temperature just hit the same temperature that was read thirty years ago, how can that be? If I were you I would get real curious right now. I have already found my answers. 58. Michael Larkin Whatever else can be said of this thread, I am enormously grateful for the earlier link to Roy Spencer’s explanation of the GHG effect which is worth repeating: This is the clearest explanation for non-specialists like me that I have ever come across. I’ve saved it to my hard drive. I have a question. Spencer says that with no atmosphere, the earth’s surface would be around 0 deg. F (-18 deg C or 255 deg K). Suppose all GHGs (but nothing else) were removed from the earth’s atmosphere (I’m assuming there would be no water on the planet). Would the temperature be greater than 0 deg. F? I’m hoping that’s on topic, because I’m trying to establish in my own mind whether just the presence of an atmosphere pretty much as dense as the one we have now, but sans GHGs, would in some way produce warming. I hope it makes sense to ask the question. • afaik, yes: what I think would happen is that all radiation would occur at the surface, because the atmosphere would be perfectly transparent for all wavelength, at by K., would also emit no EM radiation (In reality, it would not be like that, but I guess it is the idealised situation you have in mind). So, the surface T at equilibrium would be computed the same as in the no-atmosphere case. But I am not so sure about the T profile in this transparent atmosphere. Quite fast, we should reach the lapse rate for this gravity field and adiabatic fluid, by convection. I think, after some time, conduction should produce uniform T, which seems to be the no -heat flow limit regime (well, assuming 1D problem)… but I am not sure uniform T is the equilibrium in a gravity well, some equirepartition principle may mean that T goes down the higher you go (some interpretation of virian theorem would say so too, which makes sense: monoatomic gases modeled as elastic spheres, should have a lower velocity at top of atmosphere…else they would reach escape velocity) which would falsify simple conductive transfer, except if “total” temperature incorporate somehow potential energy. Interesting question, I would be interested about what gaz kinetic theory specialists would have to say about that, all in all my hinch would be for non-constant T and conduction process acting with a “total” T incorporating potential energy…. • yes, definitely a non-constant T at equilibrium due to gravity: after all, simple heat transfer linearly proportional to T gradient is not a fundamental law, it is derived from kinetic gas theory, one of the hypothesis being, iirc, no volume forces. 
Gravity is a volume force, so I am almost sure the Fourier law for conduction is not strictly valid in this case (it is a first order phenomenological law, nothing fundamental there), but that heat transfer must incorporate gravitational potential energy….. • Yes the convective equilibrium lapse rate is g/cp, about 10 K/km, so I would expect something like that. It is complicated by variations in surface heating with latitude and the diurnal cycle, so it is not clear what temperature this would equilibriate to over the surface, but since the non-GHG atmosphere has no other cooling mechanism than contact with a colder surface, the surface temperature would somehow control its eventual equilibrium temperature profile. Michael – Removing only GHGs would have slightly greater cooling effects than removing the entire atmosphere. This is because atmospheric molecules (O2, N2, CO2, etc.) scatter some sunlight back to space, and in their absence, all solar radiation would reach the Earth’s surface. The 255 K figure assumes no other changes. In fact, in the absence of water, there would be no ice, snow, or clouds, and the Earth’s albedo (percent of sunlight scattered or reflected back to space) would decline significantly. As mentioned above, some scattering would still occur from air molecules, and some from light-reflective surfaces such as sand, but it would be far less than the current 30 percent figure. As a result, the Earth would absorb more heat, and warm well above 255 K. I don’t know what the exact temperature would be. It would be colder than today, but probably by only a modest amount. • A small correction – Above, I should have omitted CO2 from my example of light-scattering molecules, because you were asking what would happen if it were removed. Of course, N2, O2, argon, etc., would remain, and their contributions would be little diminshed by the removal of a minor constitutent by volume such as CO2. • Michael Larkin Thank you for your clear and not-too-technical response, Fred. Might have seemed a peculiar question, but it elicited useful extra information for me. 59. To summarize my position: 1. Radiative heat transfer is carried by electromagnetic waves described by Maxwell’s equations. The starting point of a scientific discussion of radiation should better start with Maxwell’s equations than with some simplistic ad hoc model like the ones typically referred to in climate science with ad hoc invented “back radiation” of heat energy. If there is anything like “backradiation” it must be able to find it in Maxwell’s wave equations. In my analysis I use a version of Maxwell’s wave equations and show that there is no backradiation, because that would correspond to an unstable phenomenon and unstable physics does not persist over time. I welcome specific comments on these two points. 60. 2. Agreed. And I am not too comfortable with the model hierarchy used in Climatology: pure radiative models are imho correct,but they do represent the main heat transfer in earth system well…so are useless for earth. TOA+lapse rate is better, but I think they are not so solid mathematically, I do not really like the treatment of it. Should be consolidated, and then it is 1D, so predictive value is not clear, but at least this model could have heat transfer similar enough to actual heat transfer on earth to be somewhat useful. 
Finally, there are GCMs… but they are huge, use numerical methods I do not like (FD for something with complex continental shapes – yuck), and introduce a lot of approximation (solving the NS equations at Earth length scales is ridiculous… so it is not NS that is solved, but some kind of approximation of it). I have never really seen the PDEs that are actually solved, which is in itself very worrying. Lots of black boxes modelling different processes connected to each other (radiative module – ocean module – salinity module – biological C cycle module), so it is more an ad hoc model than something starting from first principles or even a solid set of PDEs. OK, not easy to do better, but the validation is pitiful for this kind of model, which lives and dies by extensive validation. 1. Not agreed: Maxwell’s equations are OK, but you need quanta (or a replacement full theory) to deal with radiative heat transfer. It is not even needed to accept the black body treatment by Planck to know Maxwell alone will not be up to the task: those EM waves are radiated by molecules, which cannot be modeled by Maxwell. Remember the paradox of the Bohr atom model of orbiting electrons? Why do the electrons not fall into the nucleus, when all their kinetic energy should be dissipated by bremsstrahlung/synchrotron radiation? This was a fundamental problem (before or at the same time as BB radiation) that was solved by quantization. You may not like quantum mechanics (I myself have trouble with it, it seems like an unfinished and overly complex theory), but it is extremely successful, maybe the biggest success of physics. Going against it is a huge task; there is a reason it was accepted between the wars although it is quite often counter-intuitive: it explains and predicts a lot, much more than simple BB radiation. By the way, you continue to mention that backradiation would be unstable. A few posts (some of mine too) challenged this. You still have not explained why you believe it would be unstable, just that it is and that it is a flaw of the S-B model. This is not a tenable position; you need to show how S-B is unstable and how your theory is not. Good luck! • So if you don’t accept Maxwell’s equations for radiation, which are then your equations and what do they tell you? • Maxwell’s equations are valid for propagation. For emission/absorption, you need to take into account the quantized nature of emitters/absorbers when those emitters are molecules or atoms. Which is the case for the IR wavelengths of interest. If you want to use continuous Maxwell down to atomic length scales and energies, you predict unstable atoms. Everything should go back to neutronium, which will be a problem for predicting EM radiation with Maxwell equations ;-) • Actually I believe that it is possible to describe the situation without the standard way of introducing the quantization. The quantum field theory of electromagnetism (QED) is used in practical calculations as perturbation theory in the form of Feynman diagrams, but this is not necessary in principle. Similarly the quantum transitions of molecular states are introduced in the spirit of the Copenhagen interpretation of quantum mechanics. This is again not necessary while very useful in practice. Both choices are valuable practical tools in quantitative physical analysis, but they are not really required.
In principle one can formulate the whole problem by writing the full equations to describe all molecules in the atmosphere and all radiation by Schrödinger equation and Maxwell’s equations and possibly introducing modifications related to QED. There is no basic reason to assume that these equations cannot be used in another way, which does not involve the traditional way of quantization at micro level but aiming directly to answering some macroscopic questions. It may even be possible that this approach gives many results more easily and directly than the standard procedure. What I have seen in the text of Claes Johnson is certainly not a complete and valid presentation in this line of thought, but it may be partially correct and it might be possible to continue in this direction and reach correct results. I have full trust that the final results would agree with the results of the standard approach, but it is likely that the same results would indeed be reached in a way that does not include back radiation. This would be an extension of the idea of wave-particle dualism. The description in terms of waves does not include back radiation, but it would still give the same quantitative results. Agreeing with accepted physics does not require dogmatic adherence to the standard way of describing the details. • agreed, that’s what is a little bit disturbing about QD imho: not easy to draw where quantum description start, and where classical physics end. For example, a lot of classical QD imply wave/particles in external potential….but those potential are themselves caused by phyical objects, so by W/P assemblies. Why are they represented by perfectly know and unchanging potential fields them, like some kind of ghost of classical Newtonian entity? I guess QD has progressed since (I only have some training about early stage QD, probably from the Plank/Einstein era, and still it is vulgarisation). But I have 2 problems with C. J. approach. one is that is is hopeless imho to try to use Maxwell equations only, you have to introduce some quantization, or an equivalent effect, to avoid molecules to radiate even at 0K just by electron orbiting. Or you can say that bohr atom model is not correct, but this is just another way to make QD come back through the backdoor…As you said, QD can be introduced in many ways (which I find slightly disturbing, but I also agree with you that QD is one of the most (if not the most) succesful physical theory), but Maxwell has to be complemented somehow. C.J approach seems to be “add a phenomenological structural damping for elementary resonators”. I am fine with that, even if I think it explain less than quanta as introduced by planks and so is a poorer approach. The problem number 2, unfortunately, can not be bystepped just by saying that choosing the method is a matter of personal preference: the radiative exchange presented is not equivalent to S-B law, we have R = 4 s T³ (T-T_cold) versus R = s (T⁴-T_cold⁴). Not the same, and I prefer S-B for symmetry reason (the fact that each body radiates without having to know his surrounding is a huge plus in S-B), but here personal preference has no play: the difference is so high as to be easily tested by simple calorimetric experiment. Maybe I have misunderstood C.J., and his derivation is in fact strictly equivalent to S-B. But then, why the fuss? 
it is only a re-interpretation of the same formula, and by definition, should have exactly the same effect, being used for computing calorimeter calibration, heat exchange in a turbine, or GH effect… • kai, I am not for CJ, I am only noticing that much that has been used against it is not valid argumentation but presents a lack of knowledge about the variety of the ways the same basic physics can be approached in practice. The full dynamic equations are very complex and cannot be solved directly. Therefore some ways have been developed for solving them stepwise. The standard approach goes through the microphysics. The method is based on perturbation theory, which is equivalent to introducing photons. The method also implies discussing the emission and absorption of the photons by transitions between the ground state and vibrational state of individual molecules. Each photon is a separate entity having a random phase of EM fields in relation to other photons. This is in accordance with a state collapse in the Copenhagen interpretation of quantum mechanics. Thus we describe the wide macroscopic phenomena as a combination of a huge number of independent microphysical phenomena. This leads to good results because the higher order terms of the perturbative analysis are very small and the coherence between micro-processes very weak. While the above approach has provided very good results, it is not the only possible approach to making the originally insolubly difficult problem solvable. Another approach would be to look at the macroscopic problem and use some clever averaging and smoothing to make the field equations solvable. I am not at all sure that this can be done in practice, but it is not excluded. If the approach works, it is likely to involve solving Maxwell’s equations with some clever way of describing the interaction of electromagnetic fields with molecules. This interaction must conform with the quantum mechanical description of molecules, i.e. with the Schrödinger equation, but this may be done without the use of the state collapse of the Copenhagen interpretation. Like Schrödinger’s cat, the molecules will remain both alive and dead, i.e. it is not known whether they are in the excited or in the ground state. What I have written is highly speculative and would have its right surroundings on a site where different interpretations of QM are discussed, such as the Copenhagen interpretation, many worlds, hidden variables etc. What I have written is in line with my own longstanding thoughts on these issues and I do not know how many others would agree on them.
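Since the difference between the two expressions quoted above is the crux of kai’s objection, a quick numerical comparison may help. This sketch takes the formulas exactly as kai writes them, R = 4σT³(T − T_cold) versus R = σ(T⁴ − T_cold⁴); whether that is a fair rendering of C.J.’s derivation is kai’s open question, not something the sketch settles. The two agree to first order when the temperatures are close and diverge strongly otherwise, which is why a calorimetric test could in principle separate them.

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def linearized(T, T_cold):
    """R = 4*sigma*T^3*(T - T_cold), the expression quoted above for C.J."""
    return 4.0 * SIGMA * T ** 3 * (T - T_cold)

def stefan_boltzmann(T, T_cold):
    """R = sigma*(T^4 - T_cold^4), the usual net exchange between black surfaces."""
    return SIGMA * (T ** 4 - T_cold ** 4)

for T, T_cold in ((300.0, 299.0), (300.0, 250.0), (400.0, 100.0)):
    print(f"T = {T:5.1f} K, T_cold = {T_cold:5.1f} K:  "
          f"4sT^3(T-Tc) = {linearized(T, T_cold):8.1f} W/m^2,  "
          f"s(T^4-Tc^4) = {stefan_boltzmann(T, T_cold):8.1f} W/m^2")
```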
• It is not: “back radiation” (I put it in quotes, because there is only radiation, not main or back) is always lower than the other one. You can not analyse stability of “back radiation” only, you have to include all radiative exchange in your stability analysis. Including all radiative exchange, S-B always predict net heat exchange from hot to cold, never the opposite. In a many-body case, it is thus a diffusive equation, not a negative diffusive equation, and it is thus stable. If you do not agree with this statement, you just have to provide an example using S-B relations that would lead to unstable situation, where entropy would decrease (hot will get hotter, cold colder). Best would be to start with isothermal and find a perturbation that grow, but even in non-isothermal situation, if you can find an example where entropy is decreased by S-B, it would be enough ;-) • Claes, for clarity can we also stipulate that “Backradiation” in this context refers to the net energy increase caused by the so called “greenhouse effect”. As opposed to downwelling radiation which is the result of general emission based cooling of air at 30km alt and above. • There’s no ‘negative diffusion’ and no instability. The cooler body transfers heat to the warmer one, but the warmer one transfers more heat to the cooler one, so the net heat flux is always from the warmer to the cooler. 62. Sometimes experiments serve better than words at resolving differences of description. Consider a 1-D system of two black body plates set at 100K and 400K. The Stefan-Boltzmann flux is 1446 W/m^2. Next, insert two intermediate plates, positions otherwise irrelevant. The steady-state temperatures become (100K, 304.53K, 361.62K, 400K) and the flux drops to 482W/m^2. Next insert a central fifth plate. The temperatures are now (100K, 283.67K, 336.69K, 372.36K, 400K) and the flux 361 W/m^2. Adding the fifth plate has lowered the temperature of one plate and raised that of another. Does Johnson’s physics yield these numbers? (I have no idea!) If not, there’s a simple experiment to do. If so, where’s the beef? • “Sometimes experiments serve better than words at resolving differences of description.” Sorry Quondam, I did not see an experiment, I just saw words. Please go and perform your experiment, preferably recorded to video, see how that goes. I doubt you will achieve the same results as your “thought experiment”. • I have no doubt that he would since that is standard Radiational Heat Transfer Engineering which is applied in such situations every day! I invited Claes to apply his method to such problems several times but he ignores it. I guess the mathematician likes to derive his new equation but can’t test it against real world situations. 63. Perhaps on this point we could also ask: – How much time does the energy represented by a photon from the Sun spend in the Earth system before it is lost to space? – How many individual molecules does that energy represented by a photon from the Sun spend time in before it is lost to space? – Why does the surface only warm by 0.017 joules/m2/second during the height of the day when the sunshine is beating down at 960.000 joules/m2/second. – Why is there no “time” component in any of the greenhouse radiation physics equations. The vacuity of the greenhouse gas hypothesis to answer these questions, to my mind, is its undoing especially since we are now finding so much empricial evidence telling us CO2 causes no warming. • Are these your questions, John? 
Bill Illis asked the same questions at lucia’s last night. • John, I think those are very good questions to ask in my opinion. I have always thought that the best way we can understand the effect of manmade CO2 was to calculate the extra time energy spends in the earth climate system in response to an increasing greenhouse effect. I think that way of thinking about the problem gives us the best handle on how big of an issue it really is. I think the essential problem, however, is that the transient nature of climate is neglected in most of these treatments. As a molecular physicist who studies time-dependent transient behavior of absorbing molecules, it seems to me that this is the area in which climate science needs the most work. I heard a talk by Ricky Rood in which he told an audience member that the typical atmospheric transient is gone in a few days, yet La Nina and El Nino events, which represent the coupling of the atmosphere and oceans, have very long time transients. These could be represented in the amplitude fluctuations of the El Nino/La Nina events, their phase or their damping and range from a few weeks to a few years, maybe even decades. We don’t even know yet. So his answer struck me as quite odd. The steady state solution in most important cases in a limiting case. Since we (the community of scholars and interested public) are convinced this case is pretty well understood, it’s time to move on to transient scenarios that better model the real world each person sees on a year to year basis. I think from there we might be able to answer the questions you pose. I think they deserve an answer. • John – let me address your various points one at a time. 1. Time is an important component of computations involving radiative warming of the Earth and atmosphere as a function of the concentration of CO2 and other greenhouse gases. However, this is not because of the time needed for radiative energy transfer within the atmosphere, which is almost instantaneous. Rather, it is because heating of the surface is a time-related function of specific heat capacity, combined with elements of thermal conductivity, and in the oceans, turbulence and convective mixing. More below. I don’t know where your figure of 0.017 W/m2 for solar heat uptake come from – can you provide a reference to the relevant data? However, I’m not sure the figure is very meaningful. Land warms (and cools) much faster than water, but 70 percent of the Earth’s surface is ocean, and most of the heat from the sun and from back radiation originating in the atmosphere is stored in the ocean. Because ocean heat capacity is so enormous, diurnal changes in radiation entering from above exert appreciable temperature effects only near the surface. Mixing of the upper layers quickly averages out these effects, so that temperature changes in the entire mixed layer are very unresponsive to short term variation in radiation. For this layer, one tends to think in terms of months and years, and for the entire ocean, centuries to millennia – not hours. In essence, most of the W/m^2 radiated into the ocean is absorbed, the remainder being reflected as a function of albedo, which in the case of water is relatively small. Of course, increased absorbed radiation is met with an increase in emitted radiation, along with an increase in latent heat transfer via evaporation and convection. 
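To put a rough number on that heat-capacity point — a back-of-the-envelope sketch, where the 50 m mixed-layer depth and the 1 W/m² imbalance are illustrative assumptions chosen for convenience, not values taken from the comments:

# How fast a sustained radiative imbalance warms an ocean mixed layer.
RHO = 1025.0             # kg/m^3, typical seawater density
CP = 3990.0              # J/(kg K), typical seawater specific heat
DEPTH = 50.0             # m, assumed mixed-layer depth
IMBALANCE = 1.0          # W/m^2, assumed sustained imbalance
SECONDS_PER_YEAR = 3.156e7

heat_capacity = RHO * CP * DEPTH                      # J/(m^2 K) for the layer
warming_per_year = IMBALANCE * SECONDS_PER_YEAR / heat_capacity
print(f"Mixed-layer warming: about {warming_per_year:.2f} K per year per W/m^2")

About 0.15 K per year for a full watt per square metre: even a persistent imbalance takes years to move the mixed layer, which is why diurnal swings in radiation barely register there.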
I suspect the figure you cited, if accurate, may refer to very superficial layers of the ocean, but in any case, one must specify what “surface” is involved when citing such statistics. Ultimately, the warming from the exposure you describe will be greater than the figure you cite. 2. The number of molecules among which a photon’s energy is diffused is astronomical because of thermalization. The vast majority of excited CO2 molecules, for example, are de-excited by collision with neighboring gas molecules, thereby raising the average kinetic energy (i.e., the temperature) of their surroundings. Since the energy of each collision is immediately distributed widely via further collisions, one would have to calculate a mean number based on the Boltzmann distribution. I’m sure it could be done, but I’m not sure how informative it would be for our purposes. 3. By similar reasoning, I’m not sure how informative we would find an analysis of the mean time a photon’s energy spends in the climate system, although the calculation could probably be done. Perhaps it would provide a clue as to the warming potential of greenhouse gases, but if so, it would be a very indirect means to that end. In explaining the greenhouse effect to non-scientists, CO2, water, and other GHGs are sometimes described as “delaying” the escape of radiation to space, but the description is misleading. It is true that energy radiated from the surface, and absorbed and reradiated many times before escaping is delayed in a temporal sense, but the time delay, which is extremely small by our mundane concepts of time, is not the mechanism underlying the warming. Rather, warming occurs because of a temporary imbalance between the incoming solar radiation and the longwave radiation escaping to space due to the fact that the GHGs intercept upwelling radiation and cause it to be reradiated in all directions including downward. This imbalance is translated into increased radiative energy absorbed within each layer of atmosphere down to the surface, and a balance can be restored only when each of these entities warms sufficiently so that outgoing longwave radiation, which depends on temperature, returns to its former level. Because escape is impeded by higher GHG levels at any given altitude, energy must reach a higher altitude for adequate escape, and since higher altitudes are colder, they must be warmed from below to mediate IR emission sufficient for a full restoration of balance. In essence, the greenhouse effect can be quantified not by asking “how long?” but rather by “how high, and how cold?”, and computing the results over a spectrum of wavelengths. These theoretical calculations are now well confirmed by observational data. 4. I’m surprised by your claim that empirical evidence refutes a warming role for CO2. I’m familiar with the climate science literature, including data from recent and current measurements, as well as data extending back more than 400 million years – all converging from multiple sources to demonstrate a very substantial role for CO2. It would be illegitimate in science to insist that any phenomenon, including a warming role for CO2, can be demonstrated with 100 percent certainty, but in this case, the level of certainty is high enough to approach 100 percent. I’m unaware of any evidence at all that suggests the absence of CO2-mediated warming, and so I believe your statement is simply wrong. However, I would be interested in appropriate data references that have led you to make your claim. 
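A crude way to attach numbers to the "how high, and how cold" picture in point 3 — a sketch only, using standard round figures (255 K effective emission temperature, 288 K surface, 6.5 K/km lapse rate), none of them derived here, and a purely illustrative 100 m rise in the emission level:

# Characteristic emission altitude implied by "how high, how cold".
T_SURFACE = 288.0    # K, round figure for the mean surface temperature
T_EFFECTIVE = 255.0  # K, round figure for the effective emission temperature
LAPSE_RATE = 6.5     # K/km, round figure for the mean tropospheric lapse rate

emission_height = (T_SURFACE - T_EFFECTIVE) / LAPSE_RATE
print(f"Characteristic emission altitude: about {emission_height:.1f} km")

# If added greenhouse gases push the emission level up by 100 m, the surface
# must warm by roughly the lapse rate times that rise to restore the same
# outgoing flux.
rise_km = 0.1
print(f"Surface warming for a {rise_km*1000:.0f} m rise: about {LAPSE_RATE*rise_km:.2f} K")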
In truth, though, the realistic element of uncertainty is not whether CO2 warms the climate appreciably, but to what extent. This quantitation has been the subject of numerous discussions here and elsewhere. • Fred, that was a thorough answer and quite informative. I did take notice of one particular statement. I think this statement is only meaningful if we assume the climate system is a strictly steady-state system. Obviously, for a steady-state system time dynamics are not interesting because we've assumed that they have dissipated, whatever they were. That is the definition of steady state. The climate system, however, is inherently dynamical, and it's the transients in the climate system that cause the upticks/downticks in snowstorms, hurricanes, floods (when people aren't causing them) and the other 'wonderful' events we witness in this world. I also think that it is incomplete to think that the radiative transfer happens instantaneously. I agree that when we focus on the gases in the atmosphere, thermalization occurs very quickly, and when a lone CO2 or water molecule is excited and left alone long enough, radiative decay happens faster than we can perceive. That said, it may be possible for transients in the atmosphere to manifest themselves in other aspects of the climate system and get propagated for much longer times. Couplings to the oceans, cryosphere and biosphere are very poorly understood at this point in time, especially because they are heterogeneous. I can imagine reasonable cases in which transients of the greenhouse effect could cause plant growth that impacts an ecosystem for many years, or cause the overturning of a current in a different way, or melt/freeze portions of a glacier, in all cases causing changes that last much longer than 'instantaneous'. To zeroth order, I think the steady-state picture provides a useful tool. I just wonder if we've used most to all of its utility. • Bill Illis has answered these questions at lucia's Blackboard. Hey, Judy, how about a main post for this gem. It's shiny. • Maxwell – I agree completely that in our current non-steady state, time constants are an important element in determining climate dynamics. I tried to make that point when I mentioned the very long times involved in ocean heat storage. I can't agree with John that these elements are neglected; in fact, both models and observational studies are often aimed at quantifying the time relationships. My more limited point was that radiative changes in the atmosphere in response to a change in radiative balance at the top of the atmosphere occur extremely rapidly. It is the non-radiative elements of climate dynamics, including convection in the atmosphere and energy transport and storage in land and oceans, that consume more time. 64. I've been looking for easier ways to understand what's happening to global temperature and why. The concepts of back radiation and the second law of thermodynamics both seem to me to make the reasoning very complicated, with the result that one can use these concepts to prove anything you want, including that the planet is cooling, or that it is warming. We've seen endless examples of this sort of reasoning not just on Climate Etc. but all over the web. So I asked myself, is there one single phenomenon to which all such questions can be reduced, which doesn't allow the outcome to be argued either way according to what one believes? I think there is. It is how many photons are leaving Earth. Or how much radiation if you don't like thinking about photons.
There seems to be no serious debate as to how much radiation is arriving. The intensity of sunlight at 1 Astronomical Unit (AU) from the Sun, which is where we are, is around 1370 W/m2. The area of the Earth capturing this as a disk is around 127 .5 million sq. km (precisely one quarter of the area of the surface of the Earth as a sphere). And Earth’s albedo is around 0.3, meaning only 70% of the intercepted insolation is heating Earth. Multiplying these together gives 1.37 * 127 * 0.7 = 121.8 watts, with the decimal place 3+12 = 15 places to the right. This comes to 122 petawatts, a phrase that’s easily googled if you want to check the math. For equilibrium, that is, in order to maintain a steady temperature, Earth must radiate 122 petawatts to outer space. Each photon of that radiation can come from only two places: the Earth’s surface, or a molecule of one of the greenhouse gases in the atmosphere. These two sources of radiation behave very differently. Earth’s radiation is sufficiently broadband as to be reasonably modeled as radiation from a “black body” at around 288 K. In sharp contrast the greenhouse gases radiate at certain wavelengths called emission lines. These lines coincide in wavelength, if not always exactly in strength, with absorption lines. The radiation leaving Earth can therefore be classified into two kinds: the black body radiation leaving the surface of the Earth, and the emission lines leaving the atmosphere. The last line of these tables shows that 80% of the blackbody radiation leaving Earth’s surface is between 7.62 and 32.6 microns in wavelength. Some of these wavelengths are open to the escaping radiation while some are blocked by the absorption lines of the atmosphere’s many greenhouse gases. The two dominant greenhouse gases are H2O or water vapor and CO2 or carbon dioxide, having respective molecular weights of 18 and 44. (There are variants of these with an extra neutron or two in each atom but those are in a distinct minority and hence can be ignored here.) Human population has been growing exponentially for many thousands of years, doubling around every 90 years or so in the past couple of centuries. The per capita fuel consumption has also been growing exponentially over this period, with the result that we are doubling our contribution of CO2 to the atmosphere every three or four decades. The late David Hofmann, shortly after his retirement as director of NOAA ESRL Boulder, claimed a more precise doubling period of 32.5 years, along with 1790 as the approximate date when the residue remaining in the atmosphere from our additions was 1 part per million by volume (ppmv) of CO2. He assumed this residue to be added to a natural base of 280 ppmv during the previous few centuries. Barring any strenuous objections to these numbers I’m happy to go along with them. The upshot is that we can estimate CO2 over the past few centuries as 280 + 2^((y − 1790)/32.5) where y is the year. For example if y = 2010 then this formula give 389 ppmv which is in excellent agreement with the CO2 level measured at Mauna Loa. All this arithmetic is mainly to make the point that we are increasing the CO2 in the atmosphere, while adding a little corroborative detail. Of the photons escaping from Earth’s surface, some are at wavelengths blocked more or less strongly by CO2. Call a wavelength closed when the probability that a photon leaving Earth’s surface will be absorbed by a CO2 molecule before reaching outer space is less than 1/2, and open otherwise. 
(Sometimes 1/e instead of 1/2 is used, in conjunction with the terminology of unit optical thickness, but it doesn’t make much difference to the outcome and 1/2 is easier to relate to.) The HITRAN08 database of CO2 absorption lines lists 27995 lines in the above-mentioned range from 7.62 microns to 32.6 microns. Currently 605 of those lines are closed. According to Hofmann’s formula CO2 will double by 2080, which will close a further 120 lines. This will leave 27,270 absorption lines of CO2 still open, of which only a further 2502 lines will close when and if the CO2 level rises to 40% of the atmosphere by volume, a more than lethal level for all mammals. Now the closed lines aren’t truly closed because they can emit as well as absorb. These account for the photons radiated to space from the atmosphere, as opposed to from the surface of the Earth. It is tempting to argue that increasing CO2 will increase the radiation from these closed lines. To see why this is wrong, picture the CO2 molecules in the atmosphere as grains of white sand on a black sheet of cardboard. When there are very few grains the cardboard looks black, but as the grains fill up it gradually turns white. Furthermore the more grains there are, the higher above the cardboard are the visible grains. The same effect is happening with CO2 molecules that both absorb and emit. For any given wavelength, with very little CO2 an observer in outer space looking at just that wavelength sees the surface of the Earth. As the CO2 level increases the observer starts to see CO2 molecules covering the Earth’s surface. And as the level continues to increase, the visible CO2 molecules are found higher and higher, just as with the grains of sand. But the higher they are, the colder, at least up to the tropopause (the boundary between the troposphere and the stratosphere). So radiation from CO2 molecules decreases with increasing level of CO2 in the atmosphere. This is not true of the CO2 molecules in the stratosphere, but there are too few of them to make a significant difference. This is a complete analysis of the impact of increasing CO2 on how much radiation leaves the Earth at each wavelength. It describes what’s going on both simply and precisely, unlike accounts based on back radiation and other phenomena which are far harder to analyze accurately. This analysis ignores the impact of feedbacks, most notably the increase in water vapor in the atmosphere expected from the temperature increase induced by the increasing CO2. That increase could work either way: more water vapor could block heat at other absorption lines since water vapor is a greenhouse gas. But water vapor also conducts heat from the surface to the clouds, a cooling effect. Hence the net effect of such feedbacks needs to be analyzed carefully. However the feedback cannot result in an overall cooling, since the feedback depends on CO2 raising the temperature in order to evaporate more water. The question is only whether the feedback reduces the warming effect of CO2 by some factor between 0 and 1, a negative feedback, or enhances it by a factor greater than 1, a positive feedback. It cannot reduce the warming effect to zero since then there could be no feedback. This pretty much covers the whole thing. • Vaughan: “This pretty much covers the whole thing. ” Nope, you missed out entirely geothermal energy loss from Earth’s core. Where’s that 5000 C degrees of heat going? IN = OUT or BOOM! 
Your equation means ‘BOOM!’ • John, you raise an excellent question, one that was asked in the 19th century. Based on the thermal insulating qualities of the Earth’s mantle and crust, Lord Kelvin calculated that the heat at the core must be leaking out at a rate that would prove that Earth could not have formed more than 50 million years ago. However the geologists were unable to reconcile Kelvin’s figure with what they were observing in the geological record, which suggested the Earth was billions of years old. This huge discrepancy was a great puzzle for a while, until it occurred to physicist Ernest Rutherford to calculate the heat that could be generated by a small quantity of radioactive material (uranium etc.) in Earth’s crust. He found that it would not take much to exactly balance the amount of heat leaking out through the crust. If this were not so, in the four billion years of Earth’s life the core would long ago have cooled down to something closer to the surface temperature. In effect the small amount of radioactivity in the crust is acting like a stove to keep Earth’s core at a steady temperature over billions of years. Global warming has only kicked in strongly over the past half century. Compared to the billions of years in which the core could have cooled down but didn’t, half a century is nothing timewise. • @ Vaughan Pratt… You say: I have led myself to make the calculations, from the observational and experimental derived formulas, and have found the results corresponding to Photons Mean Free path and to Photons Lapse Time before the absorbent molecules of the atmosphere hit or diffuse them. I have done it for each component of the atmosphere and for the whole atmosphere. Most relevant results are as follows: Crossing time-whole column of mixed air (r = 14 Km, wv = 0.04) = 0.0097 s Crossing time dry atmosphere (r = 14 Km) = 0.0095 s Lapse time rate-whole mixed air (r = 14 Km, wv = 0.04) = 20.78 m Absorptivity-whole mixed air = (r = 14 Km) = 20.79 m Crossing time-water vapor at 0.04 (r = 14 Km) = 0.0245 s Lapse time rate-water vapor at 0.04 (r = 14 Km) = 8.05 m Crossing time-whole column of carbon dioxide (r = 14 Km) = 0.0042 s (4 milliseconds) Absorptivity-whole column of carbon dioxide = (r = 14 Km) = 46.8 m Total aborptivity of the whole mixture of air (r = 14 Km, wv = 0.04) = 0.01 Total emissivity of the whole mixture of air (r = 14 Km, wv = 0.04) = 0.0096 Total absorptivity of dry air (r = 14 Km, wv = 0.04) = 0.01 (rounded up from 0.0099) Total emissivity of dry air (r = 14 Km, wv = 0.04) = 0.0094 Total absorptivity of water vapor at 0.04 = 0.024 Total emissivity of water vapor at 0.04 = 0.0237 Total absorptivity of carbon dioxide at 0.0004, whole column = 0.0039 Total emissivity of carbon dioxide at 0.0004, whole column = 0.0039 Overlap water vapor/carbon dioxide, absorptivity = 0.024 Overlap water vapor/carbon dioxide, emissivity = 0.0235 Those are well reviewed results, supported by observation and experimentation. Now tell me, do you think the “downwelling” radiation heats up the surface? Why to talk about a “downwelling” radiation when we perfectly know that the possibility for the energy to be emitted is, equally, at every trajectory? Besides, there is a photon stream, stronger than any photon stream coming from the atmosphere that nullifies any backradiation” or “downwelling radiation from the atmosphere. The term “backradiation” is absolutely invented and incorrect; why? Because the air is not a mirror. 
Why dismissing convection, when we perfectly know that it is the prevailing way of heat transfer in the atmosphere? • WordPress have mixed up all the lines corresponding to the data. Please, go to the following table: Since both you and I have rejected the concept of “back radiation” as not helpful (if not for exactly the same reasons—in particular I don’t consider it incorrect, just harder to work with) it sounds like we’re both more or less on the same page regarding that aspect. I agree that convection can make a difference in the thermal insulating qualities of the atmosphere, for example by transporting heat from the surface upwards, e.g. via thermals. What I was focusing on however was the heat leaving Earth for outer space, which cannot be accomplished by convection because there is no significant flow of matter from Earth to outer space. Radiation is the only way available to Earth to shed the 122 petawatts of heat that the Earth is constantly absorbing from the Sun. Hence to understand how an increase in CO2 could heat up the Earth it suffices to consider how increasing CO2 blocks some of the departing radiation. One point I neglected to make is that the heating resulting from blocking radiation raises the temperature of the Earth until it is once again shedding 122 petawatts, the amount of heat it is absorbing from the Sun. The additional lines closed by increasing CO2 make for a smaller atmospheric window through which to push those 122 petawatts. In order to get the same amount of heat through this smaller window, Earth’s temperature has to increase. This is analogous to having to raise the voltage across an increasing resistance if you want to maintain a constant current. 65. Judith, The misconception of science is that it is suppose to be a balanced system. It is far from it. Our concept is so far out of balance with what is actually happening due to the past down theories that apply to ALL of the planet at the same time. Hmmm. Round Planet rotating. • Claes, the issue is this. For the past many decades climate researchers and physicists have put their equations, data and analyses out there. The story of IR emission by gases hangs together very well in terms of observations, theory, and radiative transfer modeling. The challenge is in your court to demonstrate that any of this is incorrect, and to put forward a coherent case that convinces people that are knowledgeable of the observations, theory, and modelling. IMO you have failed to do this. This isn’t about exchanging equations. The body of physics and chemistry that underlies the calculations of gaseous absorption and emission made by line-by-line radiative transfer models is well understood, apart from some issues related to the water vapor continuum absorption under very high humidity conditions (this is understood in terms of the observations, but not theoretically, and hence is parameterized empirically in the models). • “The story of IR emission by gases hangs together very well in terms of observations, theory, and radiative transfer modeling.” And yet still all the counter arguments which have been presented here supported by hard evidence and real-world observation are far more compelling. No thought experiment or computer model can change reality. As Richard Feynman said: • Judy: You just repeat a mantra without mathematical basis. 
I prove that the “backradiation” of the KT energy budget which you say you believe describes real physics, is not to be found in Maxwell’s equations, which have shown to model almost all of macroscopic electromagnetics. You say nothing about this proof. You are still convinced and probably teach your students that in some mysterious way a cold body sends out some mysterious particles which in some mysterious way heats a warmer body. It is a mystery in every step from scientific point of view, but mystery is not science. I have demonstrated that “backradiation” is fiction, and it is now up to you show that my proof or assumption is incorrect, or accept it as correct. Can we agree on this? So what is wrong with my argument? Have you read it? • Your argument is incapable of explaining radiational heat transfer which is used in practical situations everyday where theoretical predictions are confirmed by measurement. You have dodged the challenge to apply your ‘theory’ to a practical situation, until you do you’re just hand waving. Show your working for us to follow your calculations, until you do it’s just ‘hot air’, I’ll await your calculations. • Sure, they are on the way. There are many things you can compute from Maxwell’s equations. • You talk too much… Demonstrate that Claes is wrong with your own numbers. I’ll be waiting here… … … … … • Dr. Curry, The IR irradiance from the lower temperature/frequency/entropy atmosphere cannot heat the higher temperature/frequency/entropy Earth, as explained by another author of “Slaying” here: even though “back-radiation” can be measured by a thermocouple or thermister that has been cooled by liquid nitrogen to temps lower than the atmosphere in order to measure said “back-radiation.” [alternatively, less expensive units can measure “back-radiation” at ground temperature by e.g. a thermister increasing or decreasing resistance (depending on the type) due to the thermister losing heat to the atmosphere and a mathematical correction is applied to measure temps lower than the sensor] • How about a little thought experiment, or actually a quiz, anyone? Imagine two blackbodies, one has emitted a 9um photon, which will interact with the other blackbody, the other has emitted a 10um photon which will interact with the first blackbody. Now both blackbodies will be warmed by the photon it interacts with. The question is: Which blackbody is warmer, the first or the second? I will listen only to those who can answer the question. • Warmer blackbodies emit more energetic radiation. Photons are very small. I have nothing to add. • Blackbodies make me warm. • I have nothing to add. • Given the amount of information you have provided for your ‘quiz’, it is not possible to tell which body is warmer. Two bodies at different temperatures can both emit photons at both 9 and 10 um. The distribution of frequencies emitted is very large. If you are more precise and specific with the question, I should be able to answer it. • The question is vague because the real question is whether or not there is a two-way flow of energy between the two blackbodies or not. If Cleas Johnson is right, then Planck, Einstein, and the Standard Model is wrong, and there should be some exchange of Nobel Prizes. And maybe a photon can carry more than one peice of information. Like it needs to know where it has been and where it is going. • bob, I don’t know why the question is vague, but it is. More to your point though. 
If one blackbody gives off energy, and another gives off energy, why wouldn’t they flow energy to each other? Johnson is basically saying that one blackbody (the warmer one) KNOWS that the other blackbody is colder. And he is saying with by only referring to this fact via the source-less Maxwell’s equations, according to his comments here. Did I miss something? • Now that I realize it was your question originally I take back my previous comment. You won’t understand it. • No, you didn’t miss a thing. That’s what I have been trying to say, that Cleas Johnson requires that the photons know where they are going and where they have been. • If Einstein is right, then Claes is right. • Nasif, you do realize that Einstein was the first to propose the existence of the photon, don’t you? • @ maxwell… Don’t you? I know also Einstein deduced induced emission many years before it was confirmed by observation/experimentation. In 1678 Huygens proposed that light was a wave, contradicted in 1704 by Newton who claimed light consisted of particles. Newton’s particle theory was generally accepted over Huygens’ wave theory until 1801 when Young’s two-slit experiment showed that Huygens was right. The wave account then survived for a century until Einstein showed that Newton was right too. However Huygens had no idea what the wavelength was, while Newton had no idea how big the particles were or how a mirror could reflect them. So neither of them had as much claim to their respective theories of light as Young and Einstein, who were the first to actually observe respectively the wave and particle forms of light. Newton called the particles “corpuscles” while Einstein called them “light quanta.” The snappy term “photon” was introduced later. • The IR irradiance from the lower temperature/frequency/entropy atmosphere cannot heat the higher emperature/frequency/entropy Earth, Yes, but what it can do however is reduce the loss of heat. When (for each square meter of surface) you have U watts of heat going up and D watts going down, with D < U, the net loss of heat from the surface is U − D. If U is 396 W and D is 0 W then the net loss of heat from the surface is 396 W. If however D is 333 W then the net loss of heat is only 63 W. This does not contradict the 2nd law of thermodynamics because the net flow of heat is still from the hotter to the colder entity, there just isn't as much flow between two entities that are at relatively similar temperatures. Although 63 W might seem like a lot of heat, in terms of temperature the difference is only 289 − 277 = 12 degrees. (100*sqrt(sqrt(396/5.67)) = 289 K.) There is incidentally a fundamental error in Dr. Anderson’s website. He says “Each time a greenhouse gas molecule absorbs ground radiation energy, it sends half of it back to the surface.” While it’s true that half the energy goes up (not necessarily straight up) and half goes down, the latter need not reach the surface because it may be intercepted by another GHG molecule first. That possibility is one of the things that makes it extremely hard to calculate just how much heat GHGs intercept. This is why I recommend my much simpler way of calculating it, namely solely in terms of the number of photons escaping to outer space. Those are the only ones capable of cooling Earth: if none escape the temperature will rise enormously. Those photons reradiated from the atmosphere bounce around the atmosphere, sometimes hitting the ground and sometimes escaping to space, and are much harder to reason about. 
Rather than even try to reason about them, just ignore them altogether on the grounds that only those photons that escape to space make any difference to global temperature. • Vaughan, a couple of nitpicks. Some of the IR goes sideways, and depending on height some of the generally downward radiation doesn't even reach the Earth, as the Earth is round and not infinite, so less than half goes in the direction of the ground – increasingly less with altitude. As far as what is moving between the Earth and GHGs, part of the argument is whether observed IR really transfers a quantum of energy that translates to heat. Various arguments include the fact that a photon is a wave front until it actually transfers its energy to something, which means in quantum mechanics it simply may not do it where we think it should. As there really do appear to be teleconnections between 'particles', the photon may KNOW not to transfer its energy to the higher-temperature bit, just like in conduction where the material KNOWS not to move energy from cold to hot. Finally, fitting in with the idea of a slower cooling of the surface, the warmer surface may simply reradiate the energy from the incoming IR without it affecting the temperature. Then there is the older, solid science of wave interference. Long before quantum theory was relatively solid it was known that waves interfered and cancelled each other. Why that is not considered as a possibility for the colder not heating the warmer, or not slowing the warming, I simply don't understand. The energy equations show a NET energy flow, and interference, scattering, and cancellation could be components of creating this NET flow. In the case of a NET flow it should be noted that there would be NO slowing of the rate of radiation from the hotter surface unless the scenario where the photon coming from the colder source is absorbed and reradiated is correct. I am unsure why the lower-energy photon would be able to cause a quantum increase in the warmer material though. Again, where is the quantum mechanics to explain this stuff!! My problem with the reradiation of the colder-sourced IR is that there is an additive effect that would seem to cause more warming or at least extend the cooling time. This should be measurable. If it isn't, the effect probably isn't large enough to worry about. The problem with the current numbers is that they do not appear to break out the effects of conduction from depth in the surface. This is a small effect, but so is the amount of CO2 heating that is alleged to cause feedback with water vapor. So many choices and so few people with the skills to guide us to the correct conceptualization of what is actually happening. • Kuhnkat, the quantum electrodynamics (QED) developed by Feynman and others is an extremely successful theory in describing how photons are created and how they interact. It has been tested empirically to a better accuracy than perhaps any other physical theory. From QED we know how photons interact with material. We know that the photons do indeed release their energy in well-understood ways. There is no chance that the alternatives that you propose might be true. • Yes. And I am almost sure you can check that low-frequency photons can heat 'high'-temperature bodies if, like most westerners, you have a microwave. The food you put in is usually between 270 and 300 K. It is very rapidly heated to 370+ K by photons at 2.4 GHz, about 0.1 m in wavelength.
I do not have the blackbody emission curve at hand, but this should be the typical peak emission wavelength for a bb of a few K, maybe 10 K at most, no? Much lower than the food temperature, for sure. So why is it heated? Because the magnetron is a coherent source? I doubt it; nowhere in the heating process is coherence required, afaik… • Kai, why do you forget the concept of heat pumps? Is it just convenient to ignore the actual physics? • Kuhnkat, sorry, but here I completely fail to see any relationship between heat pumps and microwave heating. Apart from the fact that a fridge and a microwave oven are often quite close to each other in a typical kitchen or in the mall ;-) Seriously, you will have to elaborate a lot more before I consider my failure to see any connection something to be corrected… • Kai, how much energy does your microwave 'consume' to heat your food, and how efficient is it?? • It is very efficient. Do you think microwaves would have been introduced for industrial food heating if they were not? (The first ones were much more powerful than the current ones – they were scary ;-) ). Efficiency for a heating apparatus is something extremely easy to achieve, though, depending on how you measure it. Typically almost 100% of the input energy is converted to heat… because everything is ultimately converted to heat! So I guess you refer to heat energy IN FOOD / input energy rather than total heat energy / input energy (which is usually near 100% – possibly the escaping sound and EM waves are absorbed too far away to be counted). For the 'food' efficiency, microwave ovens are very efficient. Magnetrons are nice, efficient devices (I am quite fascinated with tube-age power electronics – klystrons, fusors, all that stuff; nice that everybody has his own magnetron nowadays), and not much heat is lost outside the oven or transmitted to the container (well, it depends on what you put your food in). Or maybe you refer to efficiency compared to a perfect Carnot cycle (hence your heat pump reference). Sorry, I do not know of any food-warming technology using a heat pump. Maybe there is, but I have never seen any. If there is, I guess for large amounts of food it could be more efficient than a microwave… Or are you just about to bring to market an ultra-efficient combined fridge/oven based on a heat pump? Congratulations to you, but what does it have to do with C.J.'s theory of radiative heat transfer??? • Try this Kai, it actually explains in detail how IR excites H2O molecules. Notice that if a molecule does not have the correct configuration it will NOT be excited by this method. So, what molecules exactly are preferentially excited by 15 micron IR from CO2?? • Pekka, it is interesting that you couch your closing sentence to me with the phrase 'there is no chance' when theoretical physicists tell us that the universe may just be one of those chances that you blithely suggest doesn't exist. Quantum mechanics, as I am sure you are much more aware than I am, is based on statistics. Statistics allow for many stranger things than my simpleton maunderings. But I am a hardheaded simpleton. Can you refer me to experiments showing the increased radiation from a heated object caused by moving a cooler object close to it?? • You could probably do an experiment yourself at home that would test the ability of a cooler object to raise the temperature of a warmer object. It wouldn't be perfect, but it should give a reasonable approximation. Start with a cool room, and in the center, a 100 Watt light bulb. Turn on the bulb, and let the room temperature equilibrate.
Also place a thermometer against the light bulb (shielded from outside influences) and record the temperature at the surface of the bulb. At this point, the bulb is radiating 100 W and the room is losing 100 W through walls, windows, etc. Now surround the bulb at a distance of about 1 meter with wire mesh at room temperature. The purpose of using mesh is to provide space for air currents to escape so as not to interfere with convection. We can also leave the mesh open at the top so that rising heated air will not affect it. Also, because the conductivity of air is very low, we can reasonably assume that most heat transfer will occur by radiation – admittedly, it would be better to perform the experiment in a vacuum, but that wouldn’t be practical. Place a thermometer on the mesh (again shielded, so that it records only mesh temperature). Allow equilibration. The room temperature will not change, because 100 W are still flowing into the room – the amount from the warmed mesh compensating for the reduction due to heat absorption by the mesh. Here are my questions: 1. Do you agree that the mesh will warm due to radiation absorbed from the light bulb? 2. Do you agree that the mesh will remain cooler than the light bulb surface, because not all the 100 W are absorbed by the mesh? 3. Do you agree that the warmed mesh will radiate some of the wattage it receives back to the light bulb? 4. Do you agree that the surface of the light bulb will also continue to receive 100 W from its internal heating element? 5. Do you agree that the internally generated 100 W plus the W from the mesh will exceed the wattage the light bulb surface was receiving prior to being surrounded by the mesh? 6. Do you agree that at equilibrium, the light bulb surface will now be radiating the W described in 5? 7. What do you think will happen to the temperature of the light bulb surface? Why? • Fred, if what you are suggesting starts to happen the filament increases its resistance changing the energy flux. • Assume a filament that emits a constant 100 W. How would you answer the questions? • Fred, I am happy to read about real experiments and discuss them to the extent my garbled knowledge allows. Are you planning on doing this one with appropriate instrumentation? • Kuhnkat – I’m not planning to do the experiment, because I don’t feel a need to prove anything. However, I would still welcome your thoughts about how it would come out on the basis of the questions I asked. I also wrote those with the thought that other interested readers besides yourself might appreciate the reasoning that has been expressed by many of us regarding the ability of a cooler object to raise the temperature of a warmer one, as long as the cooler object didn’t depend on its own energy but could gain energy that originated from an external source. If you would like to answer the questions simply from the perspective of a thought experiment, I hope you’ll go ahead. • Fred, How can my misconceptions contribute to the advancment of the discourse?? • For distinction between Downwelling IR and “Backradiation”, as already discussed and apparently ignored. And for an important understanding of why the “Backradiation/greenhouse effect” in unphysical pseudo science : Also apparently being ignored. 66. 
As some of my writing in this chain may appear obscure and even support Claes Johnson’s texts, I want make clear that I do not see anything wrong with the standard description involving photons, back radiation and transitions between ground state and the vibrational state of CO2 molecules. I wanted only to tell that the same physics with the same conclusions may perhaps be formulated totally differently. This alternative formulation would be closer to, what Claes Johnson has presented, but would definitively not change the results of the standard approach, which rest on solid experimental and theoretical knowledge of physics. Thus I disagree totally with all his statements that would modify the final conclusions. • Well Pekka, either there is backradiation or there isn’t. It can’t be just a play with words unless physics is a swamp where something can mean anything. • Claes, The physics is the same, but it can be described in different ways. The only way that si well developed and known to work includes back radiation. It may be possible to drop the particles and stick to fields without (second) quantization, but nobody has developed theory on that basis. The wave-particle duality is reality when ways are searched for describing quantum physics in classical terms. People cannot discuss directly in quantum physics. Therefore such different classical type descriptions are used although there is just one real quantum physics behind. Back radiation is a part of the particle type description. It would not be part of the wave type description if that would really exist. The physics would still be the same. Using Maxwell’s equations is a small step in this direction, but it has not been made complete (by you or anybody else as far as I know). 67. Tomas Milanovic Claes Johnson To your claim 1) aka non existence of “back radiation”. As you prefer equations , so just a few very simple ones. Let us consider 3 interacting systems. S1 is the void S2 is the atmosphere S3 is the Earth We will consider that we know some things about the Earth and the void but the atmosphere is complicated . There are clouds , moving gases , many mysterious and complex processes. So we will consider S2 as a black box where the only knowable parameters are the energy fluxes at the interfaces. The only assumption we will take is that S2 (atmosphere) and S3 (Earth) are in a steady state . They may transport and transform energy internally as they want but they neither store it nor release it. For S1 (Void) we will assume that it is in an approximate radiative equilibrium with S2+S3. If we call the energy fluxes F (W/m²) then we have the following equations : At the interface S1-S2 we have F1->2 = F2->1 At the interface S2-S3 we have F2->3 = F3->2 There is no contact and no interface between S1 and S3. That is 2 equation , 4 unknowns. However we can measure F1->2 and F2->1 and find that they are 340 W/m² and indeed approximately equal. Remark : Of course the conservation of energy would require that I write the equation for the whole system and use energy (units J) for a certain time scale . However once I have the TOTAL in and out energy , without loss of generality I can always divide the result by the surface of the interface and by the time to get back to fluxes (W/m²) which are more familiar . This of course doesn’t mean that it is assumed that the real fluxes are 340 W/m² everywhere . They aren’t . This “average” value is just what represents the energy conservation. Back to S3 (Earth) . 
It behaves like a grey body to an excellent approximation and emits according to F = εσT⁴. When we integrate that over the whole surface and divide by the surface to get homogeneous units for all fluxes, we get a value of about 390 W/m². But radiation is not the only component of the F3->2 flux. We also have convection, conduction and latent heat transfers. These 3 components can be computed and estimated at about 100 W/m². Now only 1 unknown is left, the energy flux from the atmosphere to the Earth, and it is necessarily 390 + 100 = 490 W/m². What can that be? Even if the radiation from S1 (Sun/Void) goes completely through the atmosphere – and we know it doesn't – it is only 340 W/m². There would still be 150 W/m² missing. Convection and conduction towards the Earth are very weak because the Earth is generally warmer than the atmosphere. Part of the latent heat may possibly return. But whatever part of the 100 W/m² comes back to Earth, it is still not enough. As what is missing is neither convection/conduction nor latent heat, it can only be radiation. Conclusion: the atmosphere radiates 'back' on the Earth (hence 'backradiation') at a minimum of 50 W/m², but actually probably significantly more, because not all of the incoming 340 W/m² gets through and not all of the 100 W/m² of convection/latent heat returns to the Earth. Thus it appears clearly that one doesn't need any quantum mechanics, the second law of thermodynamics, or complex radiative transfer to conclude that the 'back radiation' is a necessary consequence of the dynamics of the interacting systems S1, S2, S3, as long as they conserve energy and are in a steady state, at least approximately in a temporally averaged sense, which is what we indeed observe. Of course one can then become much more specific and explain how the 'backradiation' can be deduced from first principles too. But I won't repeat what has already been written 100 times above; I wanted merely to prove its existence, which can of course be confirmed either directly or by measuring the fluxes I defined above. To your 2) I largely agree with this opinion. I have laid out on other threads the arguments for why I believe that. It has mostly to do with the fact that the system is governed by nonlinear dynamics which lead to spatio-temporal chaotic solutions. Analytical or statistical considerations of spatial averages alone destroy all spatial correlations and have no possibility of recovering the right dynamics. As for the computer models, their resolution doesn't allow, and will never allow, the dynamical equations to be really solved. What the computers produce are plausible states (e.g. states respecting more or less the conservation laws) of the system, but they are unable to discriminate between dynamically allowed and forbidden states. This inability to discriminate between allowed and forbidden states becomes of course worse when the time scales get bigger. • But you are dismissing something very important, the total emittance of the carbon dioxide, which, from experimentation and observation, is quite low. Well applied, the algorithms give 0.02 for CO2 and 0.01 for the whole mixture of the air, including water vapor. I must say that the algorithms derived from experiments give a ridiculous total emittance for CO2, which is 0.002, at its current partial pressure in the atmosphere. Those are important parameters that are not taken into account by the current models. Carbon dioxide is not a blackbody, according to the most elementary definition of a blackbody, but a graybody.
The ignorance of these physics issues, intentional or not, has led many people to believe in backradiation heating up the surface, or keeping heat at the surface. • Sorry, it should have said: "Well applied, the algorithms give 0.004 for CO2 and 0.01 for the whole mixture of the air…" • Tomas Milanovic Nasif I make no assumption about the black-box atmosphere, what it contains and what it does. I just observe and measure the fluxes at the interfaces and apply energy conservation for systems in a steady state. From there follows necessarily the existence of a radiation flux from the atmosphere to the Earth. I do not attempt to say how much or by what mechanism, because others have developed that ad nauseam. I demonstrate that observation tells us that the number is strictly positive, which is enough to establish its existence. • "At the interface S1-S2 we have F1->2 = F2->1 There is no contact and no interface between S1 and S3. That is 2 equation , 4 unknowns." ?? Isn't that 2 equations and TWO unknowns? If you know F1->2, you already know F2->1, if they are equal. 68. One area where Claes's approach may give a new way of looking at things is a problem that has often been discussed on SoD's site: what is the fate of the radiation from the colder object when it arrives at the hotter object? To keep things simple let's say both objects are blackbodies. Three tenable approaches are generally given. 1. No radiation from the colder object arrives. 2. The radiation arrives but is simply subtracted from the greater amount of radiation of every wavelength leaving the hotter object. 3. The radiation arrives and is completely absorbed. Let's see how the 3 approaches deal with a simplified problem. Let the colder body be at 290 K. Let's consider an area of 1 m² some way from the colder object. With the hotter object absent, this area has a flux of 100 W/m² passing through it. (This means 100 joules per second pass through the area.) If examined, the spectrum of the radiation would be BB, centred around 15 um. Now bring the hotter (1000 K) object to this area. Approach 1 says the radiation from the colder object no longer arrives at this area. I consider this to be unphysical and will now drop it as it seems unreasonable. Approach 2 says the subtraction of the radiation will still leave more radiation of every wavelength leaving the hotter object. This satisfies the Stefan–Boltzmann equation and also means that the colder radiation has no effect on the temperature of the hotter object. Approach 3 says the 100 joules per second is totally absorbed and adds to the energy of the hotter object. The temperature of the hotter object is increased, even if only slightly. Effectively this means that 100 J/s centred around 15 um is transformed into 100 J/s centred around 4.3 um. I would say this improvement in the 'quality' of the radiative energy is forbidden by the second law of thermodynamics. Further, although approach 3 seems to satisfy the Stefan–Boltzmann equation, there may be a conflict there if the temperature of the hotter object increases significantly. For these reasons approach 2 seems to be the only correct solution. • As you said: this point has been discussed dozens of times on SoD's site. And your error is always the same: approach 3 is not 'forbidden by the 2nd law'. Approach 2 is impossible: it would suppose that the hotter object magically 'knows' that the radiation comes from a colder object. Approach 3 is the correct one. • Ort Perhaps you could expand your reasoning as to why approach 2 is wrong. • Approach 3 is the correct one.
Quite, it is a blackbody therefore it absorbs all incident radiation. Only if the same number of photons of higher energy were emitted, however this does not happen fewer photons would be emitted to balance the additional incoming flux. An example in my lab I used a Nd:YAG laser which emitted at 1066nm which I then passed through a crystal which doubled the frequency to give me 533nm output. Two 1066 quanta are combined by the crystal lattice which then emitted a single photon at 533, no thermodynamic laws broken. • Phil. Felton So you are saying that 100J of radiative energy at say 15um is thermodynamically equivalent to 100J of radiative energy at 4.7um? See Hockey Schtick post above. • Yes 100J is 100J, just fewer photons in the 4.7μm band. • Phil. Felton ….”Yes 100J is 100J, just fewer photons in the 4.7μm band.”….. Now you must feel that this is on shaky ground. With other physical equivalents of the “crystal” you could input low quality radiation say from seawater290K(radiative equivalent 15um) and by suitable “crystals” transform it in stages into 4.7um radiation equivalent to 1000K with no losses when absorbed. With such a device ships would have no need of fuel simply extract it from seawater. I think this is a clear violation of the second law I think method 2 is correct • Well what you think doesn’t matter. Frequency doubling (and tripling) crystals exist and don’t violate the second law as does two photon excitation microscopy. • @Phil… Oops! You’ve touched entropy. Does the entropy of a crystal diminishes or it increases? Does the entropy of that crystal surrounding increases or it decreases? Does the entropy of other crystals behave homogenously? Would that crystal preserve its structure as long as the universe exists? You’ve got a biiig problem, and you did it alone. • Phil. Felton The point you bring up is very interesting. If a crystal can double the frequency of radiation with no energy loss then I will have to revise my understanding of the second law. I have been to several websites to get more background information. I have so far been unsuccessful. The more relevant ones seem to be behind pay walls. If you could provide a link to the thermodynamics of frequency doubling crystals it would be a great help • I am ignorant in this area so please let me ask a couple questions. Are these crystals in a passive system similar to a crystal used to display a spectrum? My thought is that if it is in a powered system there may be a pump effect whereas a passive system would have much less possibility for a pump effect to be happening. Why would anyone consider that combining two photons of one frequency into one photon of another frequency with no change in net energy be a plus or minus to either side of the argument? It would apparently conserve mass and energy and the frequency change is proportional? • kuhnkat As far as I know these crystals are only used in lasers. The total power output will be less than the input. So it might be using work to achieve what would not happen spontaneously a bit like a refrigerator. However if someone can prove that a crystal can without any loses double the frequency of radiation then I will need a rethink on the second law. • Beats me! They’re a passive system, the crystal lattice absorbs two photons and is excited to emit a single more energetic photon (double frequency). This gives a good account: • Phil. Felton It seems the radiation has to be of very high intensity like a laser. Later on they talk about increasing the efficiency. 
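Phil's "100 J is 100 J, just fewer photons" remark is easy to put numbers on — a minimal sketch using E = hc/λ, taking the 15 um and 4.7 um wavelengths quoted in the exchange above; nothing here is specific to any particular crystal:

# Photon bookkeeping for the same 100 J at the two wavelengths discussed.
H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m/s
ENERGY = 100.0  # J

for wavelength_um in (15.0, 4.7):
    e_photon = H * C / (wavelength_um * 1e-6)   # energy per photon, J
    n_photons = ENERGY / e_photon
    print(f"{wavelength_um} um: {e_photon:.2e} J per photon, "
          f"{n_photons:.2e} photons in {ENERGY:.0f} J")

Roughly 7.5e21 photons at 15 um against about 2.4e21 at 4.7 um: the same energy carried by fewer, more energetic photons, which is the bookkeeping Phil is pointing at. Whether a passive crystal can perform that conversion on ordinary thermal radiation without added work is the separate question Bryan is asking about.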
They don’t specify whether this is energy efficiency, however. I will need to keep looking.
• Thanks, gentlemen.
• I would assume a trade-off between frequency and amplitude.
• Bryan, you are safe. The picture with the article shows a residual wave in addition to the desired second harmonic. It looks like only part of the beam is doubled and they filter out the residual for the microscopy.
• kuhnkat Yes, it looks like a fraction of the fundamental went through and the desired output, the second harmonic, is then utilised. It’s strange that to make sense of this phenomenon we have to use the language of wave physics. Why should a particle phenomenon like the photon have harmonics? It lends support to Claes’s ideas. I would like to see a further analysis of the thermodynamics of this system.
• Bryan, actually I do not see it as strange at all. If there weren’t serious issues with the whole wave-versus-particle bit, it would not have taken so many great minds so long to come up with the current compromises. The fact they settled on quantum theory as the explanation in no way invalidates the experimental data on what appeared to be waves at work. I think there is an issue with people thinking of a physical particle when quantum theory doesn’t really say there are physical particles. My limited reading seemed to indicate that electrons are closer to waves than waves are to electrons. They are both just convenient ways for us to think about a set of properties and how they interact.
• One of the ways climate scientists obfuscate the physics is by confusing energy and temperature. They are separate at the level being discussed in climate science. Getting a particular frequency out of a CO2 molecule does not mean it is at the temperature assumed by Planck radiation. The frequency is determined by the molecular bond and not by black-body emission. The temperature would be indicated by the number of photons emitted by the CO2 molecule at atmospheric temperatures. Apparently CO2 has to be at combustion-chamber temperatures for Planck radiation to become significant.
• In the cases I am aware of, the crystals which will take two photons and add them to create one photon of twice the energy are very carefully selected or designed materials, chosen for having that effect on photons of a particular wavelength. They do not do this for other frequencies of radiation incident upon them. So the case you describe is a very special case and, yes, no violation of physics occurs in that case. But water, rock, and dirt do not generally have this property.
• Charles, the radiation from the laser does seem to have some unusual properties. For instance, it does not obey the inverse square law.
• Huh? It sure does obey the inverse square law; if not, energy conservation would be violated. Simply, it has very high directivity: the divergence angle of a typical laser beam is very low (often almost as low as its frequency allows). But within this very small solid angle, it does obey the inverse square law: in a perfectly transparent medium, the intensity of the laser will be much lower (and the surface illuminated much larger) one light year away, and even a few km away the broadening is already noticeable…
• Dr. Anderson, ‘But water, rock, and dirt do not generally have this property.’ The important distinction with respect to nonlinear optics is that the nonlinear optical process necessitates coherence in space and time between the mixing beams. From there one can get sum- and difference-frequency and harmonic generation.
That’s why lasers are used in such situations. But those are not the only kinds of nonlinear effects possible in a material. There are many more incoherent nonlinear processes in which the different photons acting upon a material are not coherent. Excited-state absorption and spontaneous light scattering are two such situations which have fairly high cross-sections. So while many rocks do not have the property of being crystals with specific birefringent properties, there are still many nonlinear optical processes that can occur, all of which you neglect in the piece that has been featured in the comments here.
• Ort, how does one electron magically know the state of another electron in quantum mechanics??? Magic is apparently how our world works. It does what it does and we must figure out the rules and make up explanations that are palatable to our limited minds.
• Thanks for that, Bryan.
• Hockey Schtick Thanks for the link; he knows what he is talking about, being a specialist in materials physics.
• Yeah, but he’s made a few mistakes. Following his argument, by emitting photons the surface necessarily cooled the instant those photons left, leaving energy states ready to absorb any returning photons.
• In the equilibrium case of solar radiation flux upon the surface, the Earth’s surface temperature is constant and the emission of a photon does not cool the surface. Of course at night, with no incident solar radiation, the surface is constantly cooling as infrared photons are emitted. In that case, a photon absorbed by a water molecule or CO2 may result in emission of a photon from that molecule, and the photon may be absorbed by the cooling Earth’s surface, thereby retarding the cooling. Where I said the emission of the photon from the Earth’s surface cooled it, I was talking specifically about the phenomenon of cooling at night. I wanted to make sure the reader knew that I was not denying that the presence of infrared-absorbing molecules in our atmosphere can contribute to a retardation of surface cooling at night, and to make it clear how it did this, when it could not do it in the case of a surface at a constant or increasing temperature.
• Your support is clumsy: don’t you care that this theory is in contradiction with what Bryan said? His theory and Bryan’s “approach 2” cannot be correct at the same time. Choose one side. (Anyway, they are both wrong.)
• Ort: no, it is in agreement with Bryan’s approach 2. And tell us exactly why you know “both are wrong”. Phil Felton: doesn’t matter – obviously the process continues to cycle, with no heating of the hotter object.
• Of course; here are the conceptual differences. Bryan: no backradiation (supposedly because of the 2nd law), with a theoretical case of two blackbodies (so, total absorptivity). Charles Anderson: backradiation, but absorptivity of the Earth’s surface = 0 for longwave radiation (all the confused 2nd paragraph). That’s obviously false: you can check any textbook for the absorptivity vs. wavelength of all the different types of opaque materials.
• Ort: no, Bryan’s approach 1 is “no backradiation,” which he dismisses. Approach 2 is that there is “backradiation,” but the colder object’s “backradiation” cannot heat the hotter object. This is exactly what materials physicist Charles Anderson explains in detail, and you fail to understand why the absorptivity is effectively 0 for a hotter temperature/frequency/entropy body receiving radiation from a colder body – did you even bother to go to his blog post instead of just reading the small excerpt?
• I did, and he’s wrong!
• Pekka ……”but the rate of radiation is not changed by the absorption. Thus the incoming radiation influences the heat balance of the body.”….. This seems to be self-contradictory.
• Bryan, I was not fully precise. There will be an effect through increased temperature of the body. I meant that there is no immediate effect related to the absorption. For a real surface even this is not quite true, only a very good approximation, but for a black body it is true.
• Pekka With reference to options 2 and 3 in my post above: to all practical intents and purposes there is little difference between them. The heat calculated by the SB equation goes from the hotter to the colder body. Option 3 has the unfortunate implication of upgrading the quality of the radiation from the colder object, which conflicts with the second law. Also, the possibility of an increase in temperature is a signature of heat transfer from colder to hotter, which Clausius said was forbidden.
• Bryan, I answered your other message on this point. Your argument is in error.
• There is a curious definitional issue here. A black body is often defined as a body that will absorb all wavelengths of radiation incident upon it. This is a case, however, in which we badly need to talk about real materials, such as those in the surface of the Earth. I extensively use a technique called FTIR spectroscopy to identify and characterize materials in my laboratory. The technique commonly uses infrared radiation covering the range from 2.5 microns to 25 microns in wavelength. A material placed on an IR-transparent window, such as diamond, is irradiated as the IR wavelength is varied, and any absorption results in a scattering of the IR radiation so that much less is reflected back to the IR detector. If real materials absorbed all IR in this broad range of wavelengths, the technique would be pretty useless. The range of IR radiation wavelengths covers most of the spectrum of radiation from a material emitting IR at a temperature of 288K. Near-IR spectroscopy covers the longer, low-energy tail of the 288 K emitter, and while absorption here tends to be greater, it is still much less than 100%. That makes near-IR spectroscopy a useful technique also for studying many materials. Most of the Earth’s surface is covered with water, and the biggest window for water in the range of IR radiation near that of a 288 K emitter is pretty well aligned with the peak of the emitter spectrum. So water does not absorb all incident IR. Plants certainly do not either. Indeed, we often perform FTIR on plant materials and food products extracted from them. Near-IR spectroscopy is also used on plants and food products extensively. FTIR is used less frequently on minerals because they commonly are not very good absorbers.
• Sorry. I do not actually do near-IR spectroscopy. I should have remembered that it applies to the IR wavelengths in the tail of the solar spectrum, not in the tail of the spectrum of an emitter at 288K. Near IR is therefore irrelevant in this discussion.
• Your ‘theory’ of non-absorption by a surface of thermal radiation from colder emitters (you don’t say what happens to the incoming radiation) is clearly invalidated by the fact that microwave ovens work (see Kai’s posts elsewhere). The usual frequency used is 2.45 GHz (wavelength 122mm); your surface at 300K doesn’t emit much radiation at that wavelength! So why does that get absorbed in an oven?
While you’re on here, why don’t you explain that when you use your FTIR spectrometer you don’t have to do it in a vacuum, because O2 and N2 don’t absorb IR? Some of the ‘sceptics’ on here don’t believe that. Perhaps your practical experience will convince them?
• The specific frequencies that the 99% O2 and N2 absorb and emit at are filtered out. Such a device would be worse than useless if that were not the case.
• PF; same answer as to most AGW silliness: it’s the H2O, st**id.
• In that case, please explain approach 2 to me, and don’t forget Bryan was talking about two black bodies. Now, about Anderson: “why the absorptivity is effectively 0 by a hotter temperature/frequency/entropy body from a colder body”. You fail to understand that the absorptivity of a surface, which is the proportion of radiation absorbed vs. reflected at a given wavelength, is a constant property of the material. No matter where 15um photons come from (from a cold body, a hot body, a distant body, a shaking body, an “active” body, a “passive” body), the fraction of 15um photons absorbed is the same. Again, you can easily check textbooks for the absorptivity vs. wavelength of all the different types of opaque materials: Anderson’s position is untenable.
• Ort | So you’re quite happy that 100J of radiative energy at, say, 15um is upconverted to 100J of radiative energy at 4.7um, without any work being done?
• Without an explanation, this question does not make sense to me. Details, please. (Sorry to have misspelled your name in my last comment.)
• Ort If you look at one consequence of option 3, it means that 100J of radiative energy centred at 15um is upconverted to 100J of radiative energy centred at 4.7um, without any work being done. This is contrary to the second law. This is why option 2 is correct: it satisfies the Stefan-Boltzmann equation without violating the second law.
• It has nothing to do with the second law. For the black body the wavelength of the incoming radiation makes no difference when the amount of energy is the same. 100J heats by 100J. After the absorption it is in the heat of the body, and for that the type of the incoming energy makes no difference, only its quantity in energy units. As stated by really many writers, the black body absorbs any wavelength whatever its own temperature. The second law has nothing to say about this. It tells us that more radiation goes from the hotter body to the cooler one than vice versa. It does not say anything about what happens when radiation hits a body.
• “If you look at one consequence of option 3 it means that…” You seem not to understand the Stefan-Boltzmann law (radiation of the body occurs even in a vacuum), nor the laws of thermodynamics. In fact, your assertion itself is totally confused and erroneous, linking two independent phenomena with an apparently implicit energy equality (is that what you call the “2nd law”?) which cannot be applied to your body, which is not a closed system! You have already had long, clear, detailed, and repeated explanations of this on SoD’s site by different contributors, more patient than I am; so it seems I am wasting my time.
• Ort Back to the previous question: you included the quote but did not answer the implication. Instead you ignored it and went into an irrelevant rant. If the increase in quality of the radiation does not happen then options two and three are the same. If it does happen, the second law is violated.
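[A numerical aside for readers following the exchange: the “100J is 100J, just fewer photons” claim from earlier in the thread can be checked with a couple of lines. This is a back-of-the-envelope sketch added for orientation, not part of the original discussion; only the 100 J, 15 um and 4.7 um figures are taken from the comments above.]

```python
# How many photons carry 100 J at 15 um vs. 4.7 um?  E_photon = h*c/lambda.
h, c = 6.62607015e-34, 2.99792458e8   # SI units

def photons(total_joules, wavelength_m):
    # Number of photons needed to carry the given energy at one wavelength.
    return total_joules / (h * c / wavelength_m)

print(f"{photons(100, 15e-6):.2e}")    # ~7.5e21 photons at 15 um
print(f"{photons(100, 4.7e-6):.2e}")   # ~2.4e21 photons at 4.7 um
# Same total energy, roughly three times fewer (individually more energetic) photons.
```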
• There is no such thing as your imaginary direct process of “upconversion” by “work”, so, I repeat, your rhetorical question as formulated does not make sense. Emission of thermal radiation is a function of the temperature of the body (and, if not a black body, a function of emissivity, a material property) and that’s that. Period. If ever there is some incoming radiation, whatever its wavelength, it will be absorbed (black body). But no matter whether or not there is incoming radiation from other bodies, and whatever its spectrum may be, the status of the outside world has no effect on the emission of thermal radiation. Now, in all possible configurations, if you sum all the energy exchanges (including the radiative ones) between the black body and its environment and find E_in > E_out, then the temperature will increase (as a function of the mass and the heat capacity). If E_in < E_out, it will decrease; if E_in = E_out, no change. There is nothing “thermodynamically wrong” here. With your choice of equal energy values (the last case), you tried at the same time to imply an imaginary direct causal link between emission and absorption, by means of “upconversion”, your word. You are supposing that the emission of thermal radiation is due to photoexcitation: you have invented some physics. You are now free to repeat ad nauseam “2nd law, 2nd law!”, but don’t expect another response from me.
• Ort If you read the original post, it was about the consequence for the radiation passing through the defined area of having the hotter object there, as opposed to its absence. Absent: 100J with a BB spectrum centred around 15um. Option 2: no effect on the temperature of the hot body other than to reduce the heat loss from the hot body. Option 3: the temperature of the hot body increases. The 100 joules are upgraded to be centred around 4.3um. This violates the second law as stated by Clausius: heat flows from a hot object to a cold object, never the reverse. The increase in the “quality” of the radiation reduces entropy => against the 2nd law. If the problem were solved using vectors, there would be a single vector pointing from hot to cold.
There simply is no available energy state able to accept it.’ He is saying that there are no ‘states’ that can absorb low frequency IR light because they are already in an excited state. If we ignore the factual inaccuracy of this statement to begin with (excited states still absorb IR light to get to further excited states), he is basically saying that there is an almost permanent vibrational population inversion, where there are more molecules in the surface of the earth that are excited rather than in their ground state. How else could he insist that, on average, ‘low’ frequency IR photons are not absorbed by the surface of the earth? If being in an excited state stops such a process, most molecules must be in such an excited state, right? Wrong. That is about as nonsensical a statement as one can make. If what he is saying were true, we could make a laser of the earth. I’m not seeing the ‘earth-laser’ in the near future. On top of that, it’s not as though each molecule only has one excited vibrational state. Each electronic manifold has many, many such states, each with its own selection rules for absorption of IR light or scattering of light. So even if the molecule is in an excited state, it can still absorb a photon of the appropriate energy to excite vibrational population to an even higher-lying excited state. Because we are discussing vibrational transitions on the electronic ground state manifold, we do not have to take into consideration the topology of the potential energy surface itself. That means almost all of the overtones (excited state transitions) are of about the same energy as the fundamental. That means that the energy emitted by the decay from the first excited state will be very close to the energy necessary to make the transition to the second excited state from the first. That’s a great deal of quantum mechanics, but the point is that his premise is wrong to begin with, so whatever conclusions he makes with it are incorrect. On top of THAT, since he is discussing temperature, we can ask what the energy in a photon that excites the asymmetric stretch of CO2 corresponds to. Using Einstein’s equation and making an equality with the thermal energy from the Boltzmann constant, we find that such a photon has the equivalent of over 1000 K. When we follow his logic (ground too warm to absorb ‘low’ frequency IR light) it falters on the fact that a negligible portion of the earth’s surface is over 1000 K. Therefore, using his false logic, the vast majority of the earth’s surface should still absorb IR light emitted by CO2 molecules because those photons correspond to a temperature that is much, much hotter than the vast majority of the earth’s surface. So not only is this guy wrong on the front of the greenhouse effect, he is wrong about the optical properties of molecules and the optical properties of materials like the earth. I’m happy I came across him though. I wouldn’t want his firm doing any work for me. Who knows what he’d tell them. Can you follow all of that, or should I break it down for you even further?
• Anderson writes: Which is baloney. He seems to be unaware that temperature is an average, for one thing.
• “You’re telling me that the ground can’t absorb photons corresponding to a temperature of 1000 K?
“Really?” Heh, your post just made me think of a perfect household example for challenging (imho killing, but let’s see what answer proponents of the no-absorption view can come up with): how can my microwave very efficiently heat my food, when it emits (a lot of) photons at very low frequency (2.5 GHz, ~10 cm wavelength, centered around the emission peak of objects much, much colder than my food!!!)
• Right, in the context of the effective photon temperature, this definitely defeats Dr. Anderson’s theory. The photons have an effective temperature below 100 K (I think) while the food is at room temperature.
• maxwell, “The photons have an effective temperature below 100 K (I think)”. Note that term “effective” you use. Please get someone to explain what its significance is in respect to the discussion.
• MW cookers are tuned to excite water molecules. That’s why the handle of the coffee cup is barely warmed while the liquid contents are strongly heated. I haven’t tried it, but I assume it would be hard to MW-heat Melba toast!
• MW ovens heat materials with a strong absorptivity
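[As a coda to the thread, the “effective photon temperature” arithmetic can be made explicit. This is a back-of-the-envelope sketch added for orientation, not a comment from the original exchange; only the 2.45 GHz, 15 um and 4.3 um figures come from the discussion above.]

```python
# "Effective temperature" of a single photon, T = E/k_B, with E = h*nu = h*c/lambda.
h, c, k_B = 6.62607015e-34, 2.99792458e8, 1.380649e-23  # SI units

print(h * 2.45e9 / k_B)        # microwave-oven photon (2.45 GHz): ~0.12 K
print(h * c / (15e-6 * k_B))   # 15 um thermal IR photon: ~960 K
print(h * c / (4.3e-6 * k_B))  # 4.3 um CO2-band photon: ~3300 K
# A microwave photon "corresponds to" a fraction of a kelvin, yet a microwave oven
# plainly heats food sitting at ~300 K, which is the point being made above.
```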
Materials Science
Module PH0022 [AEP Expert 2]
Module version of SS 2011
Basic Information
• Mandatory Modules in Bachelor Programme Physics (6th Semester, Specialization AEP)
Total workload: 150 h; Contact hours: 45 h; Credits (ECTS): 5 CP
Responsible coordinator of the module PH0022 in the version of SS 2011 was Jonathan Finley.
Content, Learning Outcome and Preconditions
1. Semiconductors and their nanostructures
1.1 Fundamental properties of semiconductors and their nanostructures. Introduction and overview of the course, basic properties of semiconductors, electronic properties of key materials (group IV and III-V semiconductors), quantum effects in semiconductor mixed crystals, epitaxial growth of multilayer materials, and systems with varying dimensionality.
1.2 Bandstructure engineering, top-down and bottom-up nanofabrication. Bandstructure engineering, 2D systems using mixed crystals, top-down methods (lateral patterning, AFM lithography), bottom-up approaches (self-assembly, CEO, patterned substrates, catalytic growth of nanowires), physical and optical characterization methods.
1.3 Doping, equilibrium and non-equilibrium carrier statistics. Doping of semiconductors, carrier statistics in thermal equilibrium (intrinsic, extrinsic materials), the P-N junction (band profile, built-in potential, currents flowing), examples: light-emitting diodes and lasers.
1.4 Electronic structure of nanostructured semiconductor materials. 2D systems, band offsets, alignments, experimental determination, electronic sub-bands in low-dimensional systems, Schrödinger equation in heterostructures, isotropic and anisotropic effective masses, superlattices, quantum wires and quantum dots.
2. Electron transport in bulk and mesoscopic materials
2.1 Diffusive to ballistic electron transport. Ohm's law and current density, important scattering mechanisms (ionized impurities, phonons), mobility and velocity-field relationships, 2D electronic systems, modulation doping, patterning the 2D electron gas, ballistic electron transport in quantum point contacts and conductance quantization.
2.2 Magneto-transport in structured solids. The conductivity and resistivity tensors, the classical Hall effect, Landau quantization, Shubnikov-de Haas effect, Hall effect in 2D systems, integer quantum Hall effect, fractional quantum Hall effect.
2.3 Tunneling transport through potential barriers. Electron incident on a single heterointerface, the transfer matrix method, rectangular barrier, the WKB approximation to calculate the tunneling rate, double-barrier resonant tunneling diode.
2.4 Electronic transport in mesoscopic devices. Coulomb energy and temperature, Coulomb blockade, single-electron switching devices, charge stability diagram, the single-electron transistor (SET), nanowire SET, few-electron artificial atoms, double quantum dots, measuring / manipulating single charges and their spins.
3. Optical properties of engineered solids
3.1 Introduction to optical properties of materials. Optical coefficients, electrons in an electromagnetic field, interband transitions in solids, absorption spectrum and joint density of states, interband luminescence, excitonic effects in inorganic and organic materials.
3.2 Optical properties of quantum wires and dots. Interband optical transitions in quantum wells (parity and polarization selection rules), interband luminescence in solids, inter-subband transitions in quantum wells, mid-infrared emitters and detectors, optical properties of quantum dots, single-photon emitters and quantum cryptography.
4. Carbon-based nano-materials
4.1 Graphene and carbon nanotubes. Graphene (crystal and electronic structure), carbon nanotubes (CNTs), wavevector quantization and electronic structure, metallic and semiconducting CNTs, methods of CNT synthesis, measuring electronic structure, quantized conductance in your living room, graphene = material of the future? Diamond, color centers and the magical NV- center.
5. Quantum emitters and structured nanophotonic materials
5.1 Quantum engineered light-emitting devices. Spontaneous vs. stimulated emission, carrier recombination (radiative vs. non-radiative), LEDs, enhancing light extraction efficiency, lasers (need for, design elements, gain spectrum, condition for self-sustaining laser oscillation, transparency), influence of dimensionality on laser performance.
5.2 Photonic crystals. The concept of a “photonic crystal”, photonic bandstructure, fabrication methods for 3D and 2D photonic crystals, defects to localize and trap light, integrated optical devices, photonic crystal fibers.
5.3 Controlling the light-matter interaction. Photonic nanostructures, optical resonators, quantization of the electromagnetic field in a cavity, the Purcell effect, new generations of quantum light emitters and ultra-efficient nano-lasers of the future.
Learning Outcome: no info
Courses, Learning and Teaching Methods and Literature
Courses and Schedule
VO 2 Materials Science (Papadakis, C.): Wed, 14:00–16:00, virtual; Fri, 10:00–12:00, virtual
UE 1 Exercise to Materials Science (Jung, F.; Responsible/Coordination: Papadakis, C.): dates in groups
RE 2 Consultation Hour to Materials Science (Papadakis, C.): dates in groups
Learning and Teaching Methods
The lecture is given in compact form during the first half of the lecture period and is supplemented by tutorial exercises.
Quantum Computing for electronic structure calculations
Authors: Max Rossmannek, Panagiotis Kl. Barkoutsos, Pauline J. Ollitrault, and Ivano Tavernelli
Title: “Quantum HF/DFT-embedding algorithms for electronic structure calculations: Scaling up to complex molecular systems”
Journal: The Journal of Chemical Physics 154, 114105
Year: 2021
The Schrödinger equation is one of the fundamental equations that helps us understand, among other things, how the electrons in a molecule behave. The goal is to solve the Schrödinger equation to obtain a wave function that describes an object’s full quantum configuration or state mathematically. This equation is a potent tool that in principle allows us to calculate all the other molecular properties of the system, such as molecular geometry, energies, spectra, dipole moments, and many more. However, calculating all the properties of a complex system leads to an exponential number of possibilities that become difficult to handle even with the world’s best supercomputers. This is especially true in cases where electrons become “highly correlated,” i.e., they start influencing each other intensely. It also becomes difficult to find exact solutions for many-electron atoms or molecules because of the interactions between more than two bodies. This problem is similar to the many-body problem in classical mechanics. Therefore, no exact solution exists for many-electron atoms or for molecules with more than 2-3 atoms.
Quantum computing is, in that regard, a promising resource. Quantum computers, unlike classical computers, make use of the “spooky” phenomena of quantum mechanics (quantum entanglement and superposition) to solve problems that even today’s most powerful supercomputers could never solve. In a recent paper, a team of researchers at IBM Zurich, led by Ivano Tavernelli, has successfully improved such computations by combining two approximation methods with quantum computing. The strengths of each approach make up for the weaknesses of the other. In their research, Tavernelli and coworkers used Qiskit (IBM’s open-source software framework for quantum computing) to implement two approximation methods, Hartree-Fock (HF) theory and Density Functional Theory (DFT), together with quantum computing. While both of these approximation methods aim to solve the Schrödinger equation, they do so at different levels of detail. HF theory approximates each electron’s motion and treats the effect of all the other electrons as a smeared-out, average electrostatic field in order to calculate the total molecular wave function of the system. However, the motion of electrons in an actual molecule is much more complicated than this depiction, because electrons are “correlated” in their movement, meaning they are more effective at avoiding each other. Therefore, HF isn’t able to treat electron correlation properly, which leaves scope for further improvement.
Density functional theory (DFT) is an attempt at this improvement. While HF is based on the wave function approach, DFT is based on the electron probability density, but both aim to solve the Schrödinger equation. Unlike the wave function, the electron density is measurable in experiments such as X-ray diffraction or electron diffraction. It is also more comprehensible when compared with the wave function approach.
To understand the difference in complexity between the two approaches, let’s take the example of a water molecule, which contains ten electrons: eight from the oxygen atom and one from each hydrogen atom. The HF wave function for just one water molecule would be composed of 40 variables: three position coordinates (x, y, z) for each electron and a fourth coordinate for each electron describing its spin. In contrast, no matter how big a system is, the electron density of the whole molecule depends on only three position variables. Therefore, to deal with larger molecules, DFT is mathematically more feasible. DFT is routinely used on classical computers. However, Tavernelli’s group is the first to use DFT embedding in combination with quantum computers. To take advantage of the available quantum algorithms, only a portion of the full system is computed using a high-level quantum computing approach, while the rest is treated with an efficient but approximate electronic structure method such as HF or DFT. The idea of this scheme is to divide a larger molecular system into smaller, tractable parts.
The system is divided into an “active space”, composed of the small set of electrons directly involved in the chemical reaction and calculated by quantum computing, and an inactive space consisting of everything else, which HF or DFT estimates. The active orbitals feel the presence of the inactive electrons of the environment, whose calculation is outsourced to the classical HF or DFT framework. Thoughtfully partitioning the system allows the use of fewer computational resources at a good level of accuracy. Figure 1 shows the distinction between active and inactive spaces.
Figure 1: Separation of the system into active and inactive molecular orbitals. The active orbitals (blue box) are treated on a quantum computer, while the inactive ones (orange box) are part of the HF/DFT embedding and are computed on classical computers. The other inactive core electrons (white box) can be treated with an effective core potential, which can further help reduce the computational cost.
This work introduced a DFT embedding scheme. Here, a classical computer performs a DFT calculation for the full system and provides an initial density and the one- and two-electron integral terms. (These terms appear in the mathematical expression for the total electronic energy of the system.) This density is then divided into active and inactive parts. Also, the two-electron integrals are split into long-range and short-range components; this is necessary to avoid double counting of electron correlation. These quantities are updated iteratively until the density of the system converges. The Variational Quantum Eigensolver (VQE) algorithm is used, which runs partly on a classical computer and partly on a quantum computer. For a given “guess” density, VQE uses the quantum computer to evaluate its energy, while it uses the classical computer to choose the parameters that further lower this energy. This process is iterated until the result converges, and the converged value is very close to the exact ground-state energy. After developing their computer code, Tavernelli and coworkers tested its performance on molecular systems. The most notable achievement of this approach was a remarkable accuracy for the strongly correlated nitrogen molecule. These results could help improve ammonia production via the Haber-Bosch process, which is widely used in the fertilizer industry.
Also, the DFT scheme was tested on the oxirane molecule, a system too big to be solved directly by state-of-the-art quantum computers, and gave promising results. This scheme is an excellent first step toward carrying out electronic structure calculations for larger molecular systems using quantum computing. HF and DFT embedding schemes may thus make it possible for quantum computers to address essential questions that remain beyond the scope of classical computers and lead to new discoveries. The approach has potential applications in many fields, such as chemistry, drug discovery, strongly correlated systems, field theory, materials science, and many others.
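To make the division of labour described above a little more concrete, here is a deliberately tiny, purely classical sketch of the VQE loop. A made-up two-level “active space” Hamiltonian and a fixed environment energy stand in for the real embedding, and scipy’s optimizer plays the role of the classical feedback loop; none of the numbers or names below come from the paper, and in the real algorithm the energy expectation value would be evaluated on quantum hardware rather than with numpy.

```python
# Toy classical stand-in for the VQE loop (illustration only, not the authors' code).
import numpy as np
from scipy.optimize import minimize

E_inactive = -74.96                      # hypothetical frozen-environment energy (Hartree)
H_active = np.array([[-1.05, 0.35],
                     [ 0.35, -0.40]])    # hypothetical 2x2 active-space Hamiltonian

def ansatz(theta):
    # One-parameter trial state; a real ansatz would be a parameterized circuit.
    return np.array([np.cos(theta), np.sin(theta)])

def energy(params):
    psi = ansatz(params[0])
    return E_inactive + psi @ H_active @ psi   # total energy = environment + <psi|H_active|psi>

result = minimize(energy, x0=[0.1])            # classical optimizer updates the parameters
exact = E_inactive + np.linalg.eigvalsh(H_active)[0]
print("VQE-style estimate:", result.fun, " exact active-space ground state:", exact)
```

With only one parameter the optimizer recovers the exact lowest eigenvalue of the toy active-space matrix; the point of the sketch is simply the split between a fixed, classically computed environment term and a small, variationally optimized active part.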
This question comes from a rather famous quote by Paul Dirac which goes like this: "The underlying physical laws necessary for the mathematical theory of a large part of physics and the whole of chemistry are thus completely known, and the difficulty is only that the exact application of these laws leads to equations much too complicated to be soluble." I have seen this quote a number of times and it never struck me as being peculiar, because to my knowledge the actual physical laws describing everything we care about in chemistry are known. However, I was recently at a talk (it was a presentation to one of my classes, but the speaker gave a talk about his research) about density functional theory, and this quote was shown as a sort of introduction for why density functional theory was developed and is applied as a form of approximation. After showing this quote, the speaker said "well that's not strictly speaking true..." and then moved on and didn't say anything else. I meant to ask what he meant by this after the talk, but I forgot because I was asking about something else. So, to expand on this: Are there any physical laws which are unknown (or believed to be unknown) which directly relate to chemistry? Are there any mathematics which have not been developed which are necessary to fully describe some aspect of known chemistry?
• 11 $\begingroup$ The wave function of a multi-electron atom cannot be solved for. That is a pretty basic one. $\endgroup$ – Jon Custer May 5 '16 at 2:33
• 1 $\begingroup$ Electronegativity still doesn't have a solid mathematical description. Electron correlation still has no exact mathematical description. $\endgroup$ May 5 '16 at 3:11
• $\begingroup$ Need to upvote Jon's comment like 500 more times. $\endgroup$ – Lighthart May 5 '16 at 4:48
• $\begingroup$ A relevant link from another physicist's point of view. Follow the links inside for more. $\endgroup$ May 5 '16 at 10:18
• $\begingroup$ Jon Custer's point does not refute Dirac's quote, but is exactly what Dirac meant by it: The fundamental wave equations are known, but solving them exactly even for an atom (except for H) is likely impossible. $\endgroup$ – James May 19 at 12:56
First, it is one thing to know the basis for a "law", and quite another to mathematically calculate the effects of that law. Consider just carbon, for example... chains may be made of thousands of atoms, with various functional groups attached to each. Though, as Dirac stated, it helps to have "shortcuts" to computation such as Fast Fourier transforms, there are still problems that cannot be solved in a "reasonable" time. Second, if there are unknown laws, how would we know about them (not to quote Rumsfeld on unknown unknowns)? And finally, even if all physical laws were known and understood, it would still be impossible to predict everything: Kurt Gödel's incompleteness theorems show that in a complex system (it does not have to be very complex; basic grammar-school mathematics qualifies), questions can be asked that cannot be proved true or false. This extends to chemistry and physics.
• 2 $\begingroup$ Only very recently has mathematical undecidability been rigorously determined for a "reasonable" physical problem. Even then, there was a lot of discussion about the interpretation of the results, which I am nowhere near qualified to relay. $\endgroup$ May 5 '16 at 10:24
This sounds an awful lot like something I said at a talk this week, so I feel obligated to answer. First, in terms of fundamental interactions, yes, excluding a quantum theory of gravity we have a quantum field theory for how the other fundamental forces work (electromagnetic/weak and strong). @DavePhD mentions that Dirac was wrong at least up to the development of QED. This is true.
Dirac could write the non-relativistic molecular Hamiltonian down. He knew that even if he couldn't solve it, all the physics was still there, and so in principle the system was "knowable." This is just like how we can't solve the gravitational many-body problem exactly but we definitely know how Newtonian gravity works.
Anyway, fast forward to today. We can write down the QED equations of motion, which account for the time-evolution of all field operators. In principle this contains all the interactions necessary to describe molecules. Computational complexity aside, I think it is important to recognize that we must get rid of several degrees of freedom before we can even define a molecule. Most importantly we have to remove the possibility of electron-positron pair creation, since in QED particle number is not constant. Molecules, after all, are defined (in our chemists' minds) as having a fixed number of electrons. Heck, even "particles" themselves are not really present in QED, they are just excitations of the underlying quantum field. We do something similar even in non-relativistic QM, where we fix the geometry of molecules via the Born-Oppenheimer approximation (and treat nuclei as classical). If we didn't, the Schrodinger equation would describe every possible geometry of a collection of atoms and electrons (well, geometry is not well defined in an electron-nuclear wave function, but you get the idea). All this to say, writing the equations that govern a molecule will never be as simple as just "writing down the fundamental interactions", and I think Dirac got that wrong. Approximations will always be necessary as long as we hold onto a conception of a molecule as a fundamental object of study.
Molecular QED today is limited to effective Hamiltonian theories involving photon absorption and emission. But most of the same results can be derived from a classical view of EM fields. Most of QED is unnecessary for an accurate description of chemistry. Pair creation processes are in energy ranges that we just don't access in the laboratory. The one area in which it could excel is relativistic electronic structure theory. Our current attempts are based on the Dirac equation, which really only holds for one spin-$1/2$ particle. Given the extension to multiple particles, we have to resort to approximate relativistic treatments. The most accurate -- in the sense that it contains the most physics -- relativistic two-electron interaction term I've come across is the Breit interaction, but even that is an approximate electron-electron repulsion term. We don't know the exact relativistic term within the structure of relativistic electronic structure theory. But that's okay for now, since even including the Breit term is overkill for most molecular systems.
As far as not knowing all the fundamental interactions relevant for chemistry, let me finish with one example that I find particularly fascinating, and that is the nature of chirality of molecules. One area of study is whether or not one enantiomer of a chiral molecule is energetically more stable than the other. Even if there is a slight difference, over long periods of time it may explain why life tended to evolve using L-amino acids, among other things. This energy difference is hypothesized to be very small, on the order of 10$^{-11}$ J / mol. Anyway, this theoretical difference in energy of chiral molecules cannot be explained using any theory of electromagnetic interactions, because the EM interactions are identical in chiral molecules.
Instead, the difference in energy (if there is one) must come from a parity-violating term, which shows up only in the electroweak interaction. So this area of study is known as electroweak chemistry. As far as I know, the exact form of this parity-violating term is up for debate (and there may be multiple terms), since it necessarily has to couple to some sort of magnetic perturbation, like spin-orbit coupling. Because no one really knows exactly what this term looks like, it is difficult for theorists to predict what the possible energy difference between enantiomers should be, which in turn makes it very difficult for spectroscopists to know what to probe for. So this is an example of a fundamental interaction relevant to chemistry (albeit a small one) that we don't really know much about, but one that would give huge insights into the evolution of life as we know it.
• $\begingroup$ Ha. You probably couldn't have answered in that much detail if I had asked after the talk. The part about electroweak interactions is particularly interesting. And thanks for the talk! $\endgroup$ – jheindel May 8 '16 at 18:26
Dirac is probably right but, even if he isn't, it probably isn't important for chemistry. The issue highlighted by Dirac is that, even if we do understand all the relevant laws of quantum mechanics as far as they determine chemical properties, that doesn't help us turn chemistry into a branch of mathematics. The problem is that while we understand the equations, we don't have good ways of solving those equations except for the simplest systems. For example, we only have exact solutions to the electron wave function equations (which is what determines most chemical properties) for the simplest possible atoms (one nucleus, one electron). Everything else is an approximation. This should not be a surprise. The three-body problem for Newtonian gravity has no exact solutions (or, more strictly, only a very small number for some very special cases). Quantum wave functions are more complex than that, and systems with multiple electrons are not going to have neat mathematical solutions. What this implies is that we can't reliably predict the chemical properties of anything more complicated than hydrogen atoms from the physical laws they obey, even if we completely understand the laws. We can approximate, but it is hard to tell if reality deviates from the rules because our approximations are poor or because we don't understand some detail of the rules. So even if there is some subtle detail of the laws we don't understand, it would be hard to verify the implications for chemistry. There might be a few areas where obscure parts of quantum mechanics do impact chemistry (though speculation is all we have right now because of the limitation described above). Normally in the quantum mechanics used for chemistry, we are just looking at electromagnetic forces, and that is complicated enough. Some people have speculated that other forces might have small influences that matter for chemistry. For example, there is speculation that some interaction with nuclear forces might explain life's preference for single optical isomers in many living structures. The speculation suggests that optical isomers have very slightly different energies because of a tiny interaction with the asymmetry in non-electromagnetic forces (see this example). But these effects are, if they exist, small compared to the uncertainty in our predictions based on the well-known laws.
So the dominant problem in chemistry is the quality of our approximations, not the potential existence of entirely new laws.
• $\begingroup$ I think that the importance of "analytic solutions" as a measuring stick of whether a system is understood or not is overblown, because what makes a solution "analytic" is arbitrarily defined. It sounds much like separating numbers between constructible and non-constructible by virtue of the fact that the former can be produced with a compass and straightedge, while the latter can't. Just an artefact from old Greek geometers. $\endgroup$ May 5 '16 at 10:41
• $\begingroup$ Non-relativistic quantum mechanics and the relativistic Dirac equation have finite accuracy. This means that if the error in the approximation is smaller than the intrinsic error of the underlying equations, we are still fine. $\endgroup$ – Rodriguez May 5 '16 at 10:45
• 1 $\begingroup$ @NicolauSakerNeto My distinction isn't about the difference between "analytic" and approximate solutions. It is about the difficulty of even getting good approximate solutions, which is still large. That's why computational chemistry needs lots of computer power. $\endgroup$ – matt_black May 5 '16 at 11:05
As already pointed out in a comment by Jon Custer, Dirac was 100% correct in the statement quoted in the question. For light systems, typically thought of as H-Kr, non-relativistic quantum mechanics, i.e. the Schrödinger equation, is sufficient to describe chemistry; for heavier nuclei you need to use relativistic quantum mechanics, i.e. the Dirac equation, which is a bit more complicated. In many cases we can invoke the Born-Oppenheimer method, and assume that the nuclei are moving in the instantaneous field of the electrons; now all we need to solve is the electronic problem. We know that the exact solution to the electronic problem in this case is achievable with the full configuration interaction (FCI) a.k.a. exact diagonalization method, in which you describe the electronic many-particle wave function by a weighted sum of electronic configurations a.k.a. Slater determinants as $|\Psi \rangle = \sum_k c_k |\Phi_k\rangle$. These electronic configurations are built by distributing $N$ electrons into $K$ single-particle states a.k.a. orbitals. For the method to be accurate, $K\gg N$, and actually to get the exact solution you need $K \to \infty$. Now, to find the ground state (as well as any excited states), you just need to diagonalize the many-electron Hamiltonian in the basis of the electron configurations. But the problem is that the number of electron configurations grows extremely rapidly. If we assume that we are looking at a spin singlet state, then you have $N/2$ spin-up and $N/2$ spin-down electrons. For each spin, there are ${K \choose N/2} = \frac {K!} {\frac N 2 ! (K-\frac N 2)!} $ ways to populate the orbitals. The total number of electron configurations for $N$ electrons in $K$ orbitals, typically denoted as ($N$e,$K$o), is then ${K \choose N/2}^2 = \left[ \frac {K!} {\frac N 2 ! (K-\frac N 2)!} \right]^2$. Even for the case of a very small number of orbitals, $K=N$, the number of configurations quickly becomes huge. (8e,8o) has 4900 configurations, (10e,10o) has 63 504, (12e,12o) has 853 776, (14e,14o) has 11 778 624, (16e,16o) has 165 636 900, (18e,18o) has 2 363 904 400, (20e,20o) has 34 134 779 536, and (22e,22o) has 497 634 306 624. Although you can still solve the (8e,8o) problem with dense matrix algebra on modern computers, you see that very quickly you have to become very smart in how to diagonalize the matrix.
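As a quick check on the scaling just described, the configuration counts above can be reproduced in a couple of lines (a sketch using only Python's standard library; the (Ne,Ko) notation follows the answer):

```python
# Number of Slater determinants for N electrons (singlet: N/2 up, N/2 down)
# distributed over K spatial orbitals: C(K, N/2)**2.
from math import comb

for n in range(8, 24, 2):      # (8e,8o) up to (22e,22o)
    k = n                      # the K = N case discussed above
    print(f"({n}e,{k}o): {comb(k, n // 2) ** 2:,}")
# Output runs from (8e,8o): 4,900 up to (22e,22o): 497,634,306,624,
# matching the numbers quoted in the answer.
```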
Because the Hamiltonian is a two-particle operator, it is extremely sparse in the basis of the electron configurations: if two configurations differ by the occupation of more than two orbitals, the Hamiltonian matrix element is zero by the Slater-Condon rules. Moreover, for large problem sizes you also want to avoid storing the matrix, which is why you want to use an iterative method. (The famous Davidson method for iterative diagonalization was actually developed exactly for this purpose!) With smart algorithms, billion-configuration calculations, i.e. the (18e,18o) problem, have been possible since the early 1990s, see e.g. Chem. Phys. Lett. 169, 463 (1990). However, despite the huge increase in computational power in the last 30 years, the barrier has barely budged: as far as I am aware, the largest FCI problem that has been solved is the (22e,22o) calculation in J. Chem. Phys. 147, 184111 (2017). The thing to note here is that even the (22e,22o) calculation is not large enough to solve a single atom in an exact fashion: you need a lot more orbitals to achieve quantitative accuracy with experiment. Although a high-lying orbital contributes only very little to the correlation energy, there are A LOT of them. Exactly as Dirac wrote, approximations are needed. Density-functional approximations are extremely popular in applications, but they are far from exact. On the other hand, high-accuracy studies often employ the coupled-cluster method, which is a reparametrization of the FCI method; however, it, too, would exhibit exponential scaling were it not truncated - i.e. approximated.
Dirac published that statement in Quantum Mechanics of Many-Electron Systems, received 12 March 1929. In 1948 Verwey and Overbeek demonstrated experimentally that the London dispersion interaction is even weaker than 1/$r^6$ at long distance (say hundreds of Angstroms or more). Casimir and Polder soon thereafter explained with quantum electrodynamics (QED) that the dependence should be 1/$r^7$ at relatively long distances. See section 3.1.1, Van der Waals Forces, in Some Vacuum QED Effects. So Dirac was wrong at least up to the development of QED.
Learning About Atoms The Science PLC at our school is considering what students should know about atoms in 8th and 9th grade science classes (including Physics First).  Just recently, Amber (Strunk) Henry posted on Twitter: This is my attempt to arrange the ideas. Map of the Territory of Things to Know Next Generation Science Standards (NGSS) Here are the progressions found in Appendix E of the standards.  I do digress into talk of matter and substance when it supports later understanding of atoms.  I’ve expanded these to list ideas explicitly and separately. ESS1.A The universe and its stars • (Grades 9-12) Light spectra from stars are used to determine their characteristics, processes, and lifecycles. Solar activity creates the elements through nuclear fusion. The development of technologies has provided the astronomical data that provide the empirical evidence for the Big Bang theory. • Excited atoms/molecules emit light of particular frequencies and wavelengths (collectively called the emission spectrum of an atom/molecule). • The frequencies and wavelengths of light emitted by atoms/molecules depend on the structure of the atom/molecules. • Atoms/molecules absorb light at particular frequencies and wavelengths (collectively called the absorption spectrum of an atom/molecule). PS1.A Structure of matter (includes PS1.C Nuclear processes) • (Grades K-2) Matter exists as different substances that have observable different properties. Different properties are suited to different purposes. Objects can be built up from smaller parts. • Matter can be made of different substances. • Substances have many properties, each with their own uses. • Objects are made of smaller parts. • (Grades 3-5) Because matter exists as particles that are too small to see, matter is always conserved even if it seems to disappear. Measurements of a variety of observable properties can be used to identify particular materials. • Indivisible particles of matter are too small to see. • Measurements of properties characterize substances. • (Grades 6-8) The fact that matter is composed of atoms and molecules can be used to explain the properties of substances, diversity of materials, states of matter, phase changes, and conservation of matter. • Molecules are made of atoms. • Matter is made of atoms and molecules. • Different atoms and molecules explain different substances. • Atoms and molecules behave differently in different states of matter. • Atoms and molecules change their qualitative behavior at phase transitions. • Matter is conserved because atoms are not destroyed in physical and chemical processes. • (Grades 9-12) The sub-atomic structural model and interactions between electric charges at the atomic scale can be used to explain the structure and interactions of matter, including chemical reactions and nuclear processes. Repeating patterns of the periodic table reflect patterns of outer electrons. A stable molecule has less energy than the same set of atoms separated; one must provide at least this energy to take the molecule apart. • An individual atom has structure explained by electromagnetic and nuclear interactions. • The structure of the atom explains: • arrangement of atoms into molecules • chemical reactions • nuclear processes • trends in periodic table • Energy is required to remove electrons from an atom. • Energy is required to break molecular bonds. PS1.B Chemical reactions • (Grades K-2) Heating and cooling substances cause changes that are sometimes reversible and sometimes not. 
• (Grades 3-5) Chemical reactions that occur when substances are mixed can be identified by the emergence of substances with different properties; the total mass remains the same. • Mass is conserved in chemical reactions. • Measurement of properties of substances identifies when chemical reactions have taken place.
• (Grades 6-8) Reacting substances rearrange to form different molecules, but the number of atoms is conserved. Some reactions release energy and others absorb energy. • Chemical reactions result in different molecular arrangements of atoms.
• (Grades 9-12) Chemical processes are understood in terms of collisions of molecules, rearrangement of atoms, and changes in energy as determined by properties of elements involved. • Chemical reactions occur when molecules collide and atoms rearrange. • Changes in energy during a chemical reaction depend on properties of the atoms involved.
Let me know if you think I’ve forgotten anything here!
AAAS Science Assessment
The AAAS has a great website under the auspices of Project 2061 that lists ideas and misconceptions related to Atoms, Molecules, and States of Matter.
Arnold B. Arons, Teaching Introductory Physics
Arons identifies four lines of evidence necessary to build an early quantum model of the atom:
1. Bright-line spectra of gases. This requires understanding of how accelerated charged particles can emit light and how charged particles can absorb light. It should include the Balmer-Rydberg formulae for hydrogen.
2. Radioactivity
3. Size of atoms (electron cloud and nuclear). Evidence from multiple sources.
4. Photoelectric effect and photon concept
How should this knowledge be arranged?
TODO: I’d like to work on a Learning Landscape, Knowledge Packet, or Learning Progression synthesizing these sources, but that will have to be added later.
Models of Atoms
1. BB Model of Atoms and Molecules (hard, indivisible balls) • Needed to explain phases of matter.
2. Dalton Model of Atoms (hard, indivisible balls that can combine) • Needed to explain chemical reactions in integer ratios.
3. Plum Pudding Model / Thomson Model (negatively charged electrons embedded in a positively charged medium) • Needed to explain static electricity.
4. Planetary Model / Rutherford Model • Needed to explain the Geiger-Marsden gold foil experiments.
5. Bohr Model / Rutherford-Bohr Model • Needed to explain why electrons don’t fall into the nucleus after radiating EM waves. • Needed to explain the Rydberg formula.
6. Bohr-Sommerfeld Model • Needed to allow elliptical orbits
7. Schrödinger Model / Electron Cloud Model • Needed to explain more satisfactorily why electrons don’t fall into the nucleus after radiating EM waves • Needed to explain atoms with more than one electron • Needed to explain periodic table trends • Needed to explain spectra of large Z atoms • Needed to explain intensities of spectral lines • Needed to explain the Zeeman effect from magnetic fields • Needed to explain spectral splittings (fine, although this could be done with the Klein-Gordon equation and is really a hack onto the non-relativistic Schrödinger equation, and hyperfine) Note: I need to go back to my QM books on this one.
8. Swirles/Dirac Model • Needed to explain spectra of large Z atoms better • Needed to explain the color of gold and cesium • Needed to explain chemical and physical property differences between the 5th and 6th periods
9. Quantum Field Theory Model • Needed for ???
10. Nuclear Shell Model / Goeppert-Mayer et al. Model
Perhaps the biggest controversy is whether these models need to be taught pseudo-historically. This is leaving out all the really bad ones. However, the terrible picture that society has adopted as the meme for the atom (see below) affects student perceptions of the atom.
Stylised Lithium Atom by Indolences, Rainer Klute on Wikimedia Commons. Note that this is only a model, based loosely on the Bohr model. Also, these 3 electrons couldn’t all occupy the same circular orbit.
It would be nicer if students came into classrooms with the following conception of an atom.
Helium Atom QM by Yzmo from Wikimedia Commons. This is a much better rendition of the electron cloud but might be as bad for the nucleus. However, it is nice that it shows scale.
Physics teachers tend to like the Bohr model in that it can quickly (although magically) explain the Rydberg formula. However, there are many reasons to dislike the Bohr model.
Classroom Experiments
TODO: What classroom experiments or simulations could help students to progress in their knowledge of atoms?
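One candidate for such a simulation (a hypothetical sketch of mine, not something taken from the sources above) is to have students compute the visible hydrogen lines from the Rydberg formula, 1/wavelength = R_H (1/2^2 - 1/n^2), and compare them with the lines they observe from a hydrogen discharge tube:

```python
# Balmer series of hydrogen from the Rydberg formula (wavelengths in nm).
R_H = 1.0967757e7          # Rydberg constant for hydrogen, in 1/m

def balmer_wavelength_nm(n_upper):
    """Wavelength of the transition n_upper -> 2 in nanometers."""
    inv_lambda = R_H * (1 / 2**2 - 1 / n_upper**2)
    return 1e9 / inv_lambda

for n in range(3, 7):
    print(f"n = {n} -> 2 : {balmer_wavelength_nm(n):6.1f} nm")
# Expected: ~656 nm (red), ~486 nm (blue-green), ~434 nm, ~410 nm (violet)
```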
Selfish circuits, Loving Giving Paths, Nikola Tesla, radiant energy, Shopping Bag Free Energy Imagine a day where • oil is used for making plastics but not a drop is needed for fuel • hydrogen is made from water with no difficulty and runs some engines • no pollution is produced from energy production or usage • the environment is cleaned in energy production and usage • people are cleaned, healed and invigorated in energy production and usage • heating and cooking costs nothing but time and appliances • travel is just the same without wearing out roads • cancer-causing radiation is no longer needed • wars over energy are viewed as ancient history • those who actually help people and this earth are honored as heroes Imagine then a day where electric is open path and doesn't close the loop to kill the source charge... Carlos F. Benitez An Ignored Pioneer and Father of Free Energy Technology Benitez, A Father of Free Energy Technology. It is rather remarkable that Carlos F. Benitez gets so little credit for his patents filed 100 years ago when the processes he shared in them have been built on by such people as Ed Gray, Bedini and many others. While thousands know the later names and have attempted to replicate their secondary work, I find no one attempting to replicate Benitez work even though his patents are the easiest to understand. This is a mystery to me. There are fundamental misunderstandings of batteries in the free energy research world, as well as the general alternative energy world, and this misunderstanding has caused people like Patrick Kelly in his Free Energy Info book to completely disregard one or more of the Benetiz Energy Generation Systems. After recommending the Benitez System as a worthy pursuit we find amid some useful interjections several seriously mistaken statements about batteries, starting with the following: Here Kelly recommends frequent battery rotation, which is consistent with his very detailed focus on the popular “Tesla Switch” which rapidly rotates batteries. However, those familiar with batteries and all the types of battery environments, will know that frequently rotating batteries is what ruins them more than any other type of abuse beyond overheating them. This is the problem with batteries used with solar systems. The lead acid batteries in such systems, for example, experience too much stress in this way, and thus the plates become ruined. Batteries need time to rest between being charged and loaded and being loaded and charged. To force them back and forth is the equivalent to work-hardening metals. I have carefully studied this and observed this for many years as my work is in battery rejuvenation technology and I daily converse with people all over the world about their battery systems. These Tesla Switches are actually battery killers worse than solar chargers. While they may produce some apparent benefit for a short demonstration, no one shows one in continual use for the obvious reason that it is a fast battery killer. So it is ironic that Kelly actually gets this completely and fundamentally wrong in thinking it is “odd” for Benitez to opt for a slower battery rotation rate, as well as with the mistake that batteries have a limited and relatively short life: It makes no sense to call this system brilliant if it is a battery killer. But the fact is that it isn't one. Kelly is stuck on the popular myth in free energy circles that batteries need to be discharged at a C20 rate to be safe when there is no support for such a claim. 
It may be true that starter batteries should not be deep-cycled or discharged regularly faster than a C20 rate, but the entire golf-cart industry will testify that their batteries are discharged much faster. In fact, with the Renaissance battery charging technology I discharged my 144V pack of golf-cart batteries at the C1 rate of discharge almost daily with my electric Porsche conversion with a steady gain in capacity over three years. We, and thousands of our customers all over the world, have also brought back countless batteries from a state of uselessness, which can show signs of gaining in capacity after even 10 years after they were formerly discarded. While it is true that with conventional charging practices batteries can become damaged and/or sulfated over many cycles, it is now widely known that with proper battery charging this limit of cycles does not exist. Further, Benitez was not using lead acid batteries so these comments of Kelly's were unwisely interjected in this Benitez patent. The Edison batteries he was using were far better than the lead-acid Kelly is referring to. Yet Kelly is fundamentally wrong about even lead-acid batteries. And what is ironic is that he complains that the Benitez system would kill batteries in 500 hours, when they wouldn't, but suggests rapidly rotating batteries around which actually would kill such batteries in much less time. I know this because I deal with all the experimenters who have so killed their batteries with this battery killer Tesla Switch system. I also know how people jump to conclusions about the benefits of something before they test out the long-term benefits and even the biological dangers such systems can cause to the users. So it is remarkable that one of the best systems that Kelly presents in his book he mistakes, disregards, and dogmatically insists Benitez is wrong about while recommending that which actually ruins batteries. We see this in the following quote from the patent followed by Kelly's comments in brackets: But it actually is so. And Benitez did not mention this for discussion purposes. Nonsense. If a lead-acid battery is damaged when discharged faster than the 20 hour rate then every battery manufacturer would state this. But in fact deep cycle batteries are rated at C5, C8, C20, C100, etc., rates which are daily used for years at C5 and lower rates. It is not the C rate of discharge that damages a battery but the way it is charged that does most of the damage. Again, it is true that if you treat a starter battery as a deep cycle battery then you can so damage it, but this is common knowledge and not recommended by any manufacturer. Furthermore, batteries rated at 40AH can give out 40 amps over 1 hour. We have seen batteries gain in capacity to do more than this. While it is true that Peukert's law does reveal that the faster a battery is discharged the less power you get to use, this does not mean that a 40AH Edison battery will give you 40 amps in “a lot less than one hour.” Benitez showed he was fully justified to make this claim as it would be approximately an hour. Kelly suggests a lot less without knowing about batteries really. Kelly again continues on with these same mistaken ideas which amount to turning off the reader from considering the Benitez system to be practical: Kelly needlessly interjects that Benitez thinks about batteries as 60 volts when nothing in his wording suggests this. What is also ironic in all this is that the Tesla Switch comes from Benitez, but it is not done the right way. 
If it had been beneficial to switch batteries rapidly as he did with capacitors in these same patents, then he would have suggested the idea. But this actually abuses batteries, as all who actually do the experiments find out. Furthermore, lead-acid batteries are not only 50% efficient. Who says this? The battery charger can be very inefficient and a machine using batteries can also be inefficient. While conventionally charging a battery is obviously not 100% efficient, it is well above 50%. Again, another problem is that conventional charging contributes to sulfating such batteries, and over time the batteries that are more and more sulfated take longer and longer to charge. But such comments are greatly mistaken and only serve to distract the reader from considering this remarkable Benitez system as practical.
It may be that Kelly confuses batteries with capacitors. Even Benitez shares the commonly known fact that, unlike batteries, when two equal capacitors, one fully charged and the other fully discharged, are connected in parallel, the total available energy between the two after they have equalized will be half of what it was before they were put in parallel. And yet Benitez's system of doing this with capacitors still manages to restore the primary capacitor's charge every cycle, while his battery systems manage to fully charge, or more than fully charge, the charging battery each cycle, while it is admitted that under normal conditions the losses in doing that (that is, not in the transfer of supposed energy, but rather from the internal discharges of the source battery) would be some 409W typically from a 2000W (40A x 60V) rate of charge.
Benitez Patents Foundational Free Energy Systems
A careful reading of the Benitez patents will reveal that his ideas were foundational to a lot of the free energy research that has been done over the last 100 years. Tesla never gave the world such a clear method for harvesting energy in such ways. Yet he is the primary focus of hundreds of thousands of free energy enthusiasts all over the world. Many people have unknowingly borrowed from these Benitez ideas and he has not been given due credit. His patents are unambiguously claiming overunity. They are all self-running systems. They are practical, inexpensive and require no exotic parts. They produce additional power in the form of charged capacitors, charged batteries, powered lights, resistive loads, AC and DC motors powered, etc. Original transformer arrangements and associated processes were disclosed by Benitez that are at the foundation of so many asymmetrical systems people are experimenting with and even claim as their own ideas today.
Why has Benitez been forgotten? Why are other people, who built upon his foundation, the representatives of his work? I am not one to care much about who came up with ideas, as we all receive good things from above. But I am more than curious why he is forgotten and why no one bothers to go to the source and even try to replicate his work. I hope this begins to set the record straight, and that people can actually find what they are looking for in this as the practical energy solution so many others are promising but not delivering. In one day of reading the Benitez patents you will be far better off in really understanding how to achieve self-running overunity than in spending years in the free energy communities, ever looking but always mystified and beyond reach of anything practical.
Selfish Unthankful Circuits or Loving Giving Paths.
The Road to True Liberty.  Feb 17, 2016. By Rick Friedrich.  This is my final work on the subject of Free Energy and which may be part of a chapter in an upcoming book on the way our present culture enslaves us. This is an attempt to show the world in the simplest way how to have all the power they need in 4 different steps (I made this more of an outline to go with the video and the full details of how our motors run are not mentioned for simplicity). Let this simple analogy of selfishness, opposite to being Loving, be used to transform our world by God's grace!  As you will see I have added two more important stages than previously shown in the context of Selfish Circuits and Loving Paths. My paper:  should make a little more sense now. I wrote it with the intention of including all four diagrams but wanted to wait until after first showing the later two at this last Feb 12-13 Goshen Indiana convention.  The convention was the best one yet. Most everything worked out. I was able to demonstrate 3 motors constantly running with the third diagram for two days. That is two sets of batteries charging, with mechanical load, and thus three outputs for one input. I also showed the AA batteries and bulb demonstration for two days, as you will understand from the video and diagram 2. Later we ran the bigger motors. One large 4 DualPole motor ran the 4 foot fan again, and another larger 4 DualPole was run the same way as the smaller motors according to the third diagram where two sets of batteries are charged rather than one, while powering 4 100W LEDs. We also ran the big industrial motor with a generator. And finally we ran the 10 coiler with the zero point energy box where it was placed in series with the charging battery and ran it's own load while keeping the primary battery charged, and speeding up the motor. The 10 generator coils were powering LEDs. I didn't get time to completely finish that setup so I had it running only on 4 motor coils and circuits and did not get it tuned up as much as last time where I had multiple loads on the trigger coil. After the meeting was over I then pushed the trigger too far too fast and blew out half the circuits as well as the 400W LEDs on the added zero point output load (I have to be careful with that box/system because if I go over the load's capabilities, as in the 400W LEDs in this case, I can blow them out, which is what happened and then too much energy went back into the primary side of the circuit/motor and blew out the transistors. This was after we were melting down the 100W rheostat on the trigger. We all had fun though.  I feel this was an historic event to note as a turning point in history as for the first time these things were shown to the world in the simplest way of understanding how to generate power in multiple ways. This is my attempt to reach the world in the most effective way and without being too technical. People were very happy, and at least several experienced what I was ultimately aiming for in a spiritual conclusion. These shared how they were moved to want to give up their selfishness and find God in all of His fullness. All I hope for in this work is that this analogy may move the world to forsake its selfishness and turn to God and receive all of His goodness towards them. I hope and pray that you too will allow this to bless your life as much as is possible.  
Fully satisfied in the completion of this work, Selfish Circuits and Loving Paths Selfish and Unthankful Circuits or Loving and Giving Paths Keeping for Yourself what was Given to Everyone By Rick Friedrich [This may not seem like it is relevant to free energy until you read it through to see the connections.] Let him that stole steal no more, but rather let him labour, working with his hands that which is good, that he may have to give to him that is in need. (Ephesians 4:28) Why Should I give away this Free Energy? This is not a sermon but rather my reasons for my efforts in revealing to the world free energy in the simplest way of understanding it. I do not claim to have invented this, or to have first discovered these things, but that does not matter because “no one can receive anything unless it is given from heaven.” I am thankful for those who have shared their discoveries with the world and mostly to God for giving this to us. In the first diagram I call the selfish circuit also an unthankful circuit. This is because the extra potential energy produced because of the coil is normally disregarded as useless when in fact we show it can be used for several purposes. This is like a beggar who is starving for food ignoring and throwing away half the food he is given as well as the opportunity for him to make his own food. It seems that since Tesla first tried to show the world how to benefit from these methods that various inventors and businessmen have tried to create unique ways to sell products that produce the same effects. Patents have been filed and granted, and few if any want to show the world in the simplest way how to produce free energy. And why should they? Does the world deserve the knowledge of how to greatly reduce or entirely eliminate their power dependence? Do people have a right to sell products that make them depend on them rather than just showing people how to do it for themselves. It is hard to answer that question when you work in this field and have greatly contributed to this science and have not been paid for your work. In my life I have now given many years to this work and have freely shared what I know. Along the way, a few people who have known more information than what the public has access to, have told me to also 'keep back some things so that you will have something more than everyone else.' Is this right? Why should I? If I freely received it, shall I not freely give it? This is hard to decide. In what sense should we try and make money off of what we know rather than give that knowledge away and no longer make any money off of it because then people do not need you. Are we bound to give the whole world the truth about everything or shall we keep back some things for ourselves? I have decided that it is better to give it away in the simplest of ways for the good of mankind, and so risk becoming unneeded, rather than to keep it for myself and make a little money for keeping the world largely in the dark when it is in my power to do otherwise. My History of Selfishness and how God can Change Us Walk with me now a little as I show my experience of selfishness and make the needed connection to recent years and the circuits we use. My story goes back many years to my youth in Toronto. I was a troubled youth and was turned off from the Christian church because I saw no difference in it than the world. I had family problems and I was selfish to the core. I skipped most classes in grade 9 and thus failed. For three years I was involved in crime. 
Early on I taught myself how to hot wire cars and during the same years was approaching the character that Charlie Sheen played in the 1987 No Man's Land movie. I soon got into a gang and was used by older members to do crazy things in order to fit in and have the security of a gang. I had a claustrophobic insecurity going into high school, when in those days the nifty niners would get seriously picked on by the older grades. It was some comfort to me to eventually have a gang to protect me, which was the toughest around. Those were reckless years and over time the soul gets more and more hardened and careless so that I eventually got caught by the police. I was on the edge and had nothing to live for but the rush of living on the edge. Yet in the midst of my darkest hour I could never deny the existence of God. And as I ran from teachers by day, and police at night, through neighborhoods and in car chases, I could not put away from me the divine presence from my consciousness. I would lose sight of the officer chasing me and it felt like I was running from God. I felt like Jonah running from God and that He wanted me to do something important for Him. But I did not know God at all and really didn't have any interest in being religious. All I knew about religion in my generation did not interest me at all. On many occasions I should have died due to accidents and reckless chases where I jumped over high fences without even knowing what was on the other side. I still have scars from those selfish years. We said we would always stop before we got old enough to serve hard time as adults. But friends continued. One day just before I turned 16 I had taken my last car and swapped out parts to an old car I just bought and was driving around for a few days. I was with my friend driving towards an intersection and my breaks failed and I was hit by a big truck, and again, should have died. We got out fine but what he said troubled me. He said something so unlike him. He said exactly, “Wouldn't it have been neat if we would have died there, as we would have gone to the same place?” I thought surely I we would have ended in miserly. And as I walked home from that accident I knew that nothing would ever be the same and something totally different was ahead. I moped around for days completely dissatisfied with life. Then I decided to go get my things out of my wrecked car. And when I got there they arrested me. Someone had towed the one car I had stripped down to the same yard that my car was towed to and parked them beside each other and figured out what had happened. So at that point I really didn't care about anything anymore. I had my fun and it destroyed me inside and out. Three days from the accident I received another well timed trigger in my life (this was now the third remarkable thing like this). I got a phone call and was invited to a church by a former girlfriend. I really didn't want to go as I was opposed to Christian belief but ended up being convinced to give it a go. I liked some of the social aspects enough to continue going. Two weeks later I got my shots and prints on my sweet 16. Going to church helped me get out of the really bad crowd and kept me from going to jail, but my heart was not changed and my sins were just modified. Not much influence in the church moved me towards giving my heart to God fully. The influences there were mostly counter-productive and only encouraged modifying my selfish heart. 
There was no talk or encouragement towards me making restitution for all the evil I had done. So I continued on for a year until another major incident happened that let me feel the hand of God again in my life (yes a toilet spontaneously blew up at the perfect time—long story). I had the courage because of that definite sense of the divine involvement in my life to let go of many things and begin to seek Him. Yet not fully, as I feared the idea of having to go back and face everyone I had wronged, as that would be a seemingly impossible task. So for almost another year I was earnest in many ways and was definitely awakened. And a few things definitely stirred me to my core. The pastor of this same church invited me to look after his home while he was gone a day or two. So I stayed there and found this book on the table called Answers to Prayer by Charles G. Finney (I now republish all of his works on I decided to look at it and as I read it I could not put it down. I could not believe the experiences this man was relating and the faith he had before God. This was nothing I had seen in my life and I was very interested to know more. This influence, along with spending a lot of time in the Bible itself and examining very thoroughly the evidences for the faith, weighed heavy upon my heart. And these influences, contrary to everyone around, lead me to realize that I must do what I could to repair the damages I had done to so many people. I saw that I was still totally selfish no matter what good works I supposed I was doing now, and however much time I was in Christian activities. I could not even face those I had stolen from. I feared going to jail over these things and had no idea what would be involved in beginning restitution. But in time I let go of all things and gave my heart fully to the Lord, come what may. I had no other object than to follow all truth no matter what problems may come. I realized that my own family would greatly oppose me for attempting to make restitution and the this church would also reject the idea of a Christian actually giving themselves fully to God and living a godly life. I wrestled with God that one night and He changed me and showed me what I could never let myself see before, and yet He had been showing me for my whole life. He showed me how much He loved me in His patience and tripping up my way. I had never really understood how anyone could not be selfish, and the unselfish love of Jesus Christ broke my heart down as I saw what He did for me, and that He really could take away my anger and selfish heart in this world. I could never take Christian claims seriously, as is the same with most skeptics, when their religion just amounted to God pardoning them while living and continuing to have the same heart and life as those not professing. Today I have finished and published a chapter on this: I could not take things seriously no matter how many proofs I saw for Christianity. It just was not existentially relevant to me until this revelation of the love of God and His promise to actually dwell with us and completely change our hearts and lives (and it still has taken me some 25 years to write that paper in the clarity I now have on the subject). I learned that He actually came for this purpose and was named Jesus to save people from their sins and not in their sins. That he came to take away our selfish heart and make it of the spirit of heaven was so important to me and is to every youth still today. 
For I was sick of living for myself, it was so unsatisfying. I had come to see that I was made for an entirely different purpose and my way was just frustrated by trying to get rather than give. I had to let go rather than try and control. I had to seek the good of others to actually find the happiness I was contradictorily trying to posses. You cannot seek happiness directly, you cannot force it or get it with a selfish heart. It comes most unexpectedly when you give up and treat everything justly and in proper balance. When you put God in His place first, and treat others as you know you ought to be treated, then you are surprised to experience happiness because you, for the first time, were not directly seeking it and trying to control everything. You let it go and gave yourself to the good of others. This is what happened and I bless God for His extreme grace in my life. So I did begin the next day, in my 17th year, to make restitution. It took me five years to go back to 3 high schools, one middle school, and walk down countless streets in the big city of Toronto, trying to remember just where I did what to whom. It was a very humbling experience. The first one was the biggest one pertaining to a large store I worked at and I expected to face serious time for what I had done. Yet I was forgiven and everyone in the store was completely awestruck by the story. So I continued on day after day, and year after year I would remember something done here or there. Out of thousands of people I met I only had one bad experience where an old man would not hear me. Another experience that started out bad turned out to also make shockwaves. This man had lost property that never was recovered. So when I called him he was furious. But a week later he drove up to my house and could not believe that I was going to pay him back after 4 or 5 years. So he told everyone about it and was absolutely thrilled. There were so many experiences like this that I was so clearly convinced of my need to do that at that time in my life. I had caused so much mischief and distrust in society that it was only right that I try and reverse that trend and restore some faith lost. My brother ended up doing similar things around the same time and no doubt the city of Toronto may have had a revelation of God's grace towards hopeless sinners. The scripture I took as my own is the one at the top of this paper. And while this first experience was a major focus in my life in those early years, I also gave myself in a similar way in every way I could. I worked as a mechanic and would fix up cars and give them to people in need. I worked in 5 different group homes over three years for people who were mentally and physically challenged. After changing my new ministry's name a few times it is now called Alethea (Truth) In Heart where I republished old classics which I made freely available on the internet and to CDs before the net. I say not this to boast but merely to show that God can change a wretch like me to go from purely reckless selfishness to then be moved by the impulse of heaven. Why do people calling themselves Christians today think it so strange that God would want to dwell in His temple and fill it with love divine towards everyone? This was the message of Jesus, John, Peter and Paul was it not? How can Circuits be Selfish and Open Paths be Loving? So now we move to these circuits and how does all this relate to free energy and selfishness?? 
Well, 12 or more years ago I started researching alternative energy and sorted through a lot of false, ignorant, and credible claims. I had heard that you could get energy from the air somehow. So after learning so much about Tesla and others in his generation, I also found other notable inventors still living who were representatives of similar technology. Some had impossible personalities, but I was willing to find truth wherever it could be found. After finding all that I had originally wanted in about the first 6 months of research and experimentation, I decided to give my time to promoting this important information. For the next three years I created 5 or more forums, some of which had thousands of members and even more readers. I helped to give some direction to somewhat of a movement which had more or less existed since the eighties. I received no money for doing any of this and was a complete volunteer, as I had been in other ministries. During those years many people demanded that circuits, motors and systems be made available to the public so that they could more easily use this technology. So I organized parts to be machined and started winding coils. I just collected enough money for the cost of the parts, shipping, and to have one part for myself. This service grew and I also started my battery charger company, Renaissance Charge, in 2007 to design and sell my chargers. Eventually I merged these together.
So where am I at now? Over the years I learned more and more about improving upon these processes and the picture became clearer and clearer. Tesla is not the easiest to understand and worked mostly in AC, which is not what I really work with in my motors. Others use very advanced math and physics that is beyond graduate-level experience. There are key words often used here and there that are important to understand. But after talking with thousands of people who range from college students, hobbyists, businessmen, professors, and even government people all over the world seeking answers to understanding what is really going on with our motors and systems and free energy in general, I realize now that they really are not getting the very basics of this at all. They are still looking at circuits in the conventional way. Even though the motor and videos are fairly simple, and people can think about them and experiment with them and see what happens, they still misunderstand them and therefore fail in their application.
Tyndale's Covenants and Reciprocity. So it was really only recently, after more than a year of recent conventions in Europe and in the US, that it finally clicked with me in an analogy. I had been reading William Tyndale, who gave us the first English Bible (500 years ago) translated from the original languages; his life, martyrdom and great influence upon our language and culture. I was fascinated with his godly life and pure beliefs that were so contrary to his times and even those leading figures after him. He was and has always been suppressed, and even today few people know much about him. So I found in him a unique view of covenants made between God and man that contradicts so much Christian theology today. He said things like if you do not forgive others God will not forgive you. Simple enough for those who read these words from Jesus, but not really popular today for those who say Christianity is not about God making you better but merely that you are considered righteous in name only.
So here we are again back to the same vexing idea as earlier in this paper. Well this idea of Tyndale's covenants lead me to think of the word reciprocal and reciprocity which is not unfamiliar language with Newton. So I was thinking of the connection between the pulsing of a motor coil and, what I have used in the most simple terms (for the common reader) to be called the duplicate or mirror or echo power resulting in the charging of another battery or powering another load (as we have done for many years in our motors). And while I was considering the spiritual nature of the covenants Tyndale was bringing up in his introduction to the New Covenant (otherwise called Testament), that they were pure and unselfish examples of virtuous actions between God and man, I realized that the same problems exist in morals and spiritual matters which exist between conventional understanding, with its circuits that the whole world uses, and with all free energy circuits. So in morals we find two different kinds of actions and circuits if you will. There are those who are selfish and those who are unselfish. And if you consider carefully the true nature of things you will see that people are usually entirely selfish every moment of every day until that selfishness can be reversed by divine grace, or they are indeed redeemed from this moral depravity and unending nightmare. And this is why I took so long to develop this idea that was so illustrated in my life. I feel the full force of these things having seen my selfishness so vividly in so many ways. And it is my great desire to help anyone be free from the same. Even after I made some of the most advanced discoveries lately in free energy, I felt little desire to pursue this technology in light of the fact that the great need for this world is not in technology but in fact finding the very presence of God in their own lives that can indeed rectify all of our problems. And it was as if everything was perfectly timed again to bring this all together. I have often wondered if there was any important reason for me to be in this work, and if the good Lord would somehow use it for the good of mankind and perhaps his spiritual kingdom. Maybe my time in all this has been in vain? But suddenly I saw it come together in this because of good Tyndale. I saw how the closed loop circuit in conventional everyday circuits are in what could be considered a selfish loop or path and that any truly free energy circuit is in an open path that is analogous to a loving or benevolent giving to another. And so I saw not only the perfect analogy to make it simple and catchy enough for the whole world to finally and readily see, but also a way to glorify God and draw all men to see just how He gives us His grace so freely to all!! Praise Him! Praise Him! What better way than to show it in this simple analogy. When you close the loop with a motor and the flyback diode, you kill the potential given to you, that you can use to power an identical load. That power in the coil itself, as it is for that moment, is not only powering the mechanical load, but there is a duplication (not trying to be technical here) of the potential energy available to be used elsewhere, if it is but connected. This added load, instead of the diode, which I say by analogy is selfishly placed, this added load (as we have shown as a battery) is a receiver of this free grace and love so-to-speak. 
When a motor is pulsed, like in a brushless motor in your computers, the coil has something else going on at the same moment that is always considered a problem to be eliminated rather than channeled in the right direction. This is like the difference between a pessimist and an optimist. The pessimist sees life as partly or mostly meaningless or against him and the optimist has faith that things work out somehow. In this case the pessimist will just try and clamp out this negative or destructive force rather than consider that maybe it could be used for something useful. And still further analogy is in the fact that selfishness blinds the eyes whereas virtue is thorough, honest, unprejudiced, open, seeking good, etc. The schools teach these circuits even though some of the greatest authorities in electrical engineering have shown us there is more (Like Gabriel Kron). The schools teach students from earliest ages to make prejudiced judgments in never teaching them how to think but rather what to believe. They are put in a little box and never allowed out of it even at the PhD level. All you are allowed to do is draw a closed loop and never report or care about what happens when you pulse things. What happens has been shown over 100 years ago now if any will bother to research. Selfishness covets, and controls and manipulates people so that they cannot have the grace of God given to them. It sells things that are given freely. Selfish circuits are the only thing most people have ever heard of. They pay for their energy because their circuits kill the very source of their energy rather than multiplying it. Yet the creation has revealed a different reality. Biology shows us a different kind of energy man cares little to explore. And when we wish to consider the ways God has established from the beginning maybe then we can find what we have been wanting all along. Maybe if people had bothered to look at how a bird's wing was shaped that they then could see how flight was possible with very easy experimentation. But when you are blinded by prejudice then you cannot see what is flying above your heads all day long. You can also show videos and demonstrations but people will not see it because they can't see it because they are committed to not seeing. There is a Will controlling their judgment and research. It is selective research to find only facts that support one belief. That belief is paid for by those who profit the most in this world. One big circle. One big closed loop! One selfish circuit that breeds only selfishness and injustice. While the open loop gives freely to all, if they but open their minds, it is just the same with all of God's grace. Open up not to close upon itself, just as if to eat bread only to use your energy for yourself. But eat so you can give to all with loving arms. Open your arms in fullness of joy! Pulse your motor and rotate your wheel, but now also charge another battery or power a light at the same time. Freely the pulse is given in openness, so freely it gives to another. In the same way that you are causing a force in the motor coil, you do unto the other battery or another coil the same. How is that for another analogy? Both are reciprocal actions in a way. This is real energy. People trivialize it, but it is not even limited to being equal, and can be greater depending on how open your arms are or how big the receiving load is (and that is another story, second chapter so to speak). 
So it all boils down to the simple analogy how many angles you want to look at it: A closed loop circuit is like a selfish person who consumes their energy upon themselves and there the good dies in them. An open loop path, on the contrary, is like a giving person who uses the same energy to do the same work, but experiences the satisfaction of multiplying the loaves and fishes and giving it to others in need just the same. This is natural law and this creation testifies of the same. Now what will you do with this? Will you run off and use it for yourself only? Will you not first seek the God of heaven and thank Him for what He has so freely given you? Will you also seek Him for something so much more important that He also freely gives? Has this truth come to you with out this analogy? Seek Him therefore for His saving grace to take away a selfish heart to be one that continually freely receives and freely gives! Do not believe the false prophets in religion, and the false profits in business and schools, that you can only be a selfish consumer of opposite character to God, or can only have power that runs out. Take off your blinders and see all the facts. Stop selecting facts that people limit you to! Do original research. Test everything, hold fast to the good. Get a hold of the Ultimate Source and thank Him that He really does give us all that we need. Give to others that which matters most. If you give them this truth of loving energy, make sure you show them how it was shown also. For the giving of energy independence is infinitely less important than the giving of spiritual life. I am now satisfied that this work has not been in vain, as this analogy is so perfectly timed and connected with the need for both revelations to man. And Oh that the whole world would open their eyes to these both! What will you do about it my friend? Rick Friedrich February 4, 2016 The answer is found with Gabriel Kron in Electric circuit Models of the schrodinger Equation. We have made all of our motors into self-runners with this information and demonstrated such at our conventions.  "PREFACE. Maxwell’s equations are foundational to electromagnetic theory. They are the cornerstone of a myriad of technologies and are basic to the understanding of innumerable effects. Yet there are a few effects or phenomena that cannot be explained by the conventional Maxwell theory. This book examines those anomalous effects and shows that they can be interpreted by a Maxwell theory that is subsumed under gauge theory. Moreover, in the case of these few anomalous effects, and when Maxwell’s theory finds its place in gauge theory, the conventional Maxwell theory must be extended, or generalized, to a nonAbelian form. The tried-and-tested conventional Maxwell theory is of Abelian form. It is correctly and appropriately applied to, and explains, the great majority of cases in electromagnetism. What, then, distinguishes these cases from the aforementioned anomalous phenomena? It is the thesis of this book that it is the topology of the spatiotemporal situation that distinguishes the two classes of effects or phenomena, and the topology that is the final arbiter of the correct choice of group algebra — Abelian or non-Abelian — to use in describing an effect. Therefore, the most basic explanation of electromagnetic phenomena and their physical models lies not in differential calculus or group theory, useful as they are, but in the topological description of the (spatiotemporal) situation. 
Thus, this book shows that only after the topological description is provided can understanding move to an appropriate and now-justified application of differential calculus or group theory. Terence W. Barrett" Electromagnetic Phenomena Not Explained by Maxwell’s Equations  [Based on Barrett, T.W., “Electromagnetic phenomena not explained by Maxwell’s equations,” in A. Lakhtakia (ed.), Essays on the Formal Aspects of Maxwell Theory (World Scientific, 1993), pp. 8–86.] "The conventional Maxwell theory is a classical linear theory in which the scalar and vector potentials appear to be arbitrary and defined by boundary conditions and choice of gauge. The conventional wisdom in engineering is that potentials have only mathematical, not physical, significance. However, besides the case of quantum theory, in which it is well known that the potentials are physical constructs, there are a number of physical phenomena — both classical and quantum-mechanical — which indicate that the Aµ fields, µ = 0, 1, 2, 3, do possess physical significance as global-to-local operators or gauge fields, in precisely constrained topologies.... Although the term “classical Maxwell theory” has a conventional meaning, this meaning actually refers to the interpretations of Maxwell’s original writings by Heaviside, Fitzgerald, Lodge and Hertz. These later interpretations of Maxwell actually depart in a number of significant ways from Maxwell’s original intention. In Maxwell’s original formulation, Faraday’s electrotonic state, the A field, was central, making this prior-to-interpretation, original Maxwell formulation compatible with Yang–Mills theory, and naturally extendable... This recent extension of soliton theory to linear equations of motion, together with the recent demonstration that the nonlinear Schrödinger equation and the Korteweg–de-Vries equation — equations of motion with soliton solutions — are reductions of the self-dual Yang–Mills equation (SDYM),5 are pivotal in understanding the extension of Maxwell’s U(1) theory to higher order symmetry forms such as SU(2). Instantons are solutions to SDYM equations which have minimum action. The use of Ward’s SDYM twistor correspondence for universal integrable systems means that instantons, twistor forms, magnetic monopole constructs and soliton forms all have a pseudoparticle SU(2) correspondence." The Assumptions people have about Free Energy are not new in the History of Intellectual Prejudice.
Mathematical/Physical Model
realQM is a physical model in terms of classical continuum mechanics of an atom or ion consisting of a pointlike kernel of positive charge Z surrounded by N electrons of negative unit charge, as a free boundary problem for a system of N distributed non-overlapping electron unit charge densities in 3d Euclidean space \Re^3. For a neutral atom N=Z, while N<Z for an ion with positive charge Z-N and N>Z for an ion with negative charge N-Z, as an atom which has lost or gained electrons.
The chemical properties of an atom are largely determined by the energy required to remove an electron (ionisation energy) and the energy released by capturing an electron (electron affinity), which are prime quantities for an atom model to predict. The electron charge distribution of the ground state of an atom/ion is characterised by minimising a total energy as the sum over electrons of kernel potential energy, inter-electron potential energy and so-called kinetic energy as a measure of charge compression. Each electron minimises its contribution to the total energy under a free boundary condition of continuity of charge density.
realQM is a many-electron model as a collection of one-electron models over a partition in space, coupled by electron potentials and satisfying a free boundary condition of charge continuity. As such realQM combines simplicity, generality and physicality.
In mathematical terms realQM starts from a wave function Ansatz
• \psi (x) = \sum_{j=1}^N\psi_j(x)        (1)
as a sum of N real-valued electron wave functions \psi_j(x) depending on a common Euclidean 3d space coordinate x\in \Re^3 and having non-overlapping spatial supports \Omega_j with boundaries \Gamma_j for j=1,...,N, together filling \Re^3. We assume that \psi_j\in H^1(\Omega_j) for j=1,...,N, where H^1(\Omega) is the set of real-valued functions defined on the domain \Omega in \Re^3 which are square integrable along with their first derivatives. We ask the electron wave functions \psi_j to satisfy the normalization condition
• \int_{\Omega_j}\psi_j^2\, dx = 1\quad\mbox{for}\quad j=1,...,N,       (2)
attributing unit charge to each electron, with \psi_j^2(x) representing the charge density of electron j.
We consider \Gamma as the union of the intersections \Gamma_i\cap\Gamma_j for i,j=1,...,N to be a free boundary to be determined along with the wave function \psi. As a free boundary condition we shall ask \psi to be continuous across \Gamma, that is, that \psi_i and \psi_j agree on \Gamma_i\cap\Gamma_j. We express satisfaction of the free boundary condition of continuity by asking that \psi\in H^1(\Re^3).
We start by considering real-valued wave functions \psi (x) depending on the space coordinate x, and later extend to complex-valued wave functions with time dependence.
We seek the neutral ground state of the atom with N=Z as a real-valued function \psi\in H^1(\Re^3) of the form (1) satisfying (2), which with the \psi_j varying freely over H^1(\Omega_j) minimises the total energy
• TE(\psi )\equiv K(\psi )+PK(\psi )+PE(\psi )          (3)
as the sum of kinetic energy (with Planck’s constant h normalised to 1):
• K(\psi )=\frac{1}{2}\int_{\Re^3}\vert\nabla\psi (x)\vert^2\, dx,
attractive kernel potential energy:
• PK(\psi )=\int_{\Re^3}W(x)\psi^2(x)\, dx,
and repulsive electronic potential energy:
• PE(\psi )=\sum_j\int_{\Omega_j}\sum_{k\neq j} V_k(x)\psi_j^2(x)\, dx,
where W(x)=-\frac{Z}{\vert x\vert} is the potential generated by a pointlike positive kernel of charge Z, and V_k(x) is the potential generated by electron k, defined by
• V_k(x)=\int_{\Omega_k}\frac{\psi_k^2(y)}{2\vert x-y\vert}\, dy \quad\mbox{for}\quad x\in\Re^3.
We see that the total energy TE(\psi ) of an electronic configuration defined by the wave function \psi (x) has a negative contribution from Coulombic attractive kernel potential energy PK(\psi ), a positive contribution from Coulombic repulsive electronic potential energy PE(\psi ) without self-repulsion, and a positive contribution from K(\psi ) as a measure of concentration of electron charge. We see that
• PE(\psi )=\sum_{k\neq j}\int_{\Omega_j}\int_{\Omega_k} \frac{\psi_k^2(y)\psi_j^2(x)}{2\vert x-y\vert}\, dx\, dy,
and understand that k\neq j expresses that an electron does not interact with itself, and that the factor 2 in the denominator accounts for each pair of electrons being counted twice in the sum over all k\neq j.
A minimising wave function \Psi =\sum_j\Psi_j\in H^1(\Re^3) satisfies the following system of one-electron Schrödinger equations, expressing vanishing of the gradient of TE(\psi ) with respect to free variation of \psi_j over H^1(\Omega_j) under the side condition \int\psi_j^2\,dx=1:
• H_j\Psi_j\equiv (-\frac{1}{2}\Delta+W+2\sum_{k\neq j}V_k)\Psi_j = E_j\Psi_j\,\mbox{in }\Omega_j,\quad \frac{\partial\Psi_j}{\partial n_j}=0\,\mbox{on }\Gamma_j    (4)
for j=1,...,N, where E_j acts as a Lagrange multiplier for \int\Psi_j^2\,dx=1. We observe the presence of the homogeneous Neumann condition
• \frac{\partial\Psi_j}{\partial n_j} = 0 \,\mbox{on}\,\Gamma_j,
as a variationally imposed condition reflecting free variation of \psi_j in \Omega_j. Further, the factor 2 in 2\sum_{k\neq j}V_k reflects the presence of \Psi_j in the equations for \Psi_k with k\neq j through the potential V_j.
We can thus formulate the effective boundary condition to be satisfied on the free boundary \Gamma as follows:
• \Psi \,\mbox{is continuous and}\, \frac{\partial\Psi}{\partial n} = 0,
with n normal to \Gamma. The free boundary thus carries both a homogeneous Neumann condition and a Dirichlet condition asking continuity, which makes a connection to what is referred to as a Bernoulli free boundary problem for the Laplacian in a domain with a combined Neumann and Dirichlet condition on a part of the boundary.
We observe that the total energy TE(\Psi ) of a minimising \Psi =\sum_j\Psi_j with eigenvalues E_j is not given as \sum_jE_j, because of the factor 2 of the electronic potential in (4).
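To make the formulation concrete in the simplest case, here is a minimal numerical sketch (my own illustration, not code from realQM) for the hydrogen-like case N=1 with Z=1, where PE vanishes and minimising TE = K + PK over normalised spherically symmetric \psi reduces, via the standard substitution u(r) = r\psi(r), to a one-dimensional radial eigenvalue problem; the cutoff radius R and grid size M are arbitrary illustrative choices.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

# Hydrogen-like ground state (N = 1, Z = 1) in the normalised units above (h = 1).
# With u(r) = r*psi(r), minimising TE = K + PK over normalised psi is equivalent
# to the radial eigenvalue problem  -1/2 u'' - (Z/r) u = E u,  u(0) = u(R) = 0.
Z, R, M = 1.0, 30.0, 3000            # kernel charge, radial cutoff, number of grid points
h = R / (M + 1)
r = h * np.arange(1, M + 1)          # interior grid points, excluding r = 0 and r = R

diag = 1.0 / h**2 - Z / r            # main diagonal of the finite-difference operator
off = -0.5 / h**2 * np.ones(M - 1)   # off-diagonal entries

E, u = eigh_tridiagonal(diag, off, select='i', select_range=(0, 0))
print("ground state energy:", E[0])  # close to the exact value -Z**2/2 = -0.5
```

For N > 1 the same minimisation couples the electrons through the potentials V_k and the free boundary, which is where the Bernoulli-type free boundary problem described above enters.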
Welcome to the Brave New Atom World of realQM This site describes a new model of atoms and molecules in terms of classical continuum mechanics in three space dimensions in the form of a free boundary problem for a system of partial differential equations in non-overlapping electron wave function/charge densities satisfying a free boundary condition involving continuity and a homogeneous Neumann condition. The new model is referred to as realQM (real quantum mechanics) as a model with interpretation in physical terms, to be compared with the model presented in books referred to as stdQM (standard quantum mechanics) with non-physical interpretation as particle statistics (according to the Copenhagen Interpretation by Bohr-Born-Heisenberg). realQM is computable while stdQM is uncomputable. realQM has physical meaning while stdQM has a statistical non-physical meaning. realQM combines simplicity, generality and physicality. The question is to what extent realQM describes true physics. One may argue that macroscopic physics may be complex/random as consisting of many interacting microscopic pieces, while microscopic physics consisting of few pieces can only be simple/deterministic. With such a realQM perspective, the ground state of an atom is simple and leaves no door to the randomness of stdQM. To get an overview, browse the header menu. It is also helpful to try the following question: The first live presentation of realQM to the World was given at the conference  50th Anniversary of Journal of Structural Mechanics, August 24-25, 2017, Vaasa, Finland as We shall see that it is natural to apply the homogeneous Neumann condition for electron wave functions on the free boundary separating electrons, and also where the two electrons in the innermost shell (or the one electron for Hydrogen) meet(s) the kernel. This requires the kernel to have positive radius, which then can be used as a model parameter allowing perfect match to observation for two-electron ions, from which atoms with outer shells can be built, see Helium and Two-Electron Ions. An alternative giving a similar effect is to use a Robin boundary condition of the form \frac{\partial\phi}{\partial r}=-Z\phi for a positive radius r. This is the effective condition at zero radius built into the Schrödinger equation with a point source kernel of charge Z. realQM thus opens to inspection of the inner mechanics of an atom, such as the effective radius of the kernel vs electrons, which is hidden to experimental inspection. In particular, even the basic case of the Hydrogen atom gets a new model in realQM with the homogeneous Neumann condition on a kernel with small positive radius staying away from the kernel singularity, as compared to the model of stdQM with vanishing kernel radius keeping the singularity of the kernel potential. We here follow the device that physics without singularities may be more transparent and closer to reality than physics with singularities with hidden physics. Compare with this discussion.
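As a small consistency check on the Robin alternative mentioned above (my own illustration using the textbook hydrogen-like ground state, not realQM code), the exponential ground state \phi(r) proportional to e^{-Zr} satisfies \partial\phi/\partial r = -Z\phi at every radius r > 0, which is why imposing that condition on a kernel of small positive radius reproduces the effective behaviour of the point-source kernel:

```python
import sympy as sp

r, Z = sp.symbols('r Z', positive=True)
phi = sp.sqrt(Z**3 / sp.pi) * sp.exp(-Z * r)   # normalised hydrogen-like ground state
# The Robin condition d(phi)/dr = -Z*phi holds identically for every r > 0:
print(sp.simplify(sp.diff(phi, r) + Z * phi))  # prints 0
```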
Tuesday 31 May 2016
Conversation with Kodcentrum about Matematik-IT
Today I had a constructive and very pleasant conversation with Jessica Berglund and Lisa Söderlund at Kodcentrum about a possible collaboration on spreading the gospel of programming to Swedish pupils and Swedish schools. Kodcentrum has so far focused on Scratch as an introduction to programming and seems to need to be able to offer further training, and perhaps Matematik-IT can be an alternative there. We shall see whether Kodcentrum wants to make use of this possibility during the coming school year. The chance is there...
As for a programming platform, I have used Codea (programming on iPad for iPad), but there are many alternatives, e.g. Corona for PC/Mac, which uses the same language as Codea (Lua). Codea costs a few kronor, while Corona is free. Then there are of course many other possibilities, such as Xcode, Swift, Python, JavaScript, Perl... In the end you have to choose some specific language/platform if you want to say or do something concrete with some meaning... just as with love, which is eternal while only its objects change...
PS There is an attitude, which seems to have many advocates, of meeting the Government's directive to Skolverket (the National Agency for Education) to introduce programming in school not by simply following the directive and doing so, but by replacing concrete programming with considerably less concrete slogans such as "digital competence" and "computational thinking". The idea is thus to walk around the hot porridge without tasting it, and instead consume possibly watered-down derivatives of the good and strengthening porridge. Not to learn to program (which you do not need to, since there are so many programmers), but instead to learn that there is something called programming. Not to actually learn the multiplication table and how to use it, but instead to learn that it is enough to know that it exists and that some people know it (it is not really needed anyway, since there are so many pocket calculators). The idea should instead be that if you eat the programming porridge and digest it, you develop a better ability for both "digital competence" and "computational thinking", if that is the main point, than if you just walk around the porridge. It is better to know the multiplication table and be able to use it than not to know it and not know how to use it, even if calculators exist. Why? Because a human being is a thinking being, and thinking builds on understanding.
Monday 30 May 2016
New Theory of Flight: Time Line
Potential flow around a circular cylinder with zero drag and lift (left). Real flow with non-stationary turbulent 3d rotational slip separation and non-zero drag (right).
The New Theory of Flight, published in J Mathematical Fluid Mechanics, can be put into the following time line:
1750 formulation by Euler of the Euler equations describing incompressible flow with vanishing viscosity, expressing Newton's 2nd law and incompressibility in Euler coordinates of a fixed Euclidean coordinate system.
1752 d'Alembert's Paradox as zero drag and lift of potential flow around a wing, defined as stationary flow which is
1. incompressible
2. irrotational
3. of vanishing viscosity
4. satisfies a slip boundary condition
as an exact solution of the Euler equations.
1904 resolution of d'Alembert's paradox of zero drag by Prandtl, stating that potential flow is unphysical because 4. violates a requirement that real flow must satisfy:
• no slip boundary condition.
1904: resolution of d'Alembert's paradox of zero lift by Kutta-Zhukovsky, stating that potential flow is unphysical because 2. violates the fact that a sharp trailing edge in real flow creates
• rotational flow.

2008: resolution of d'Alembert's paradox of zero drag and lift by Hoffman-Johnson, stating that potential flow is unphysical because
• potential flow is unstable at separation and develops into non-stationary turbulent 3d rotational slip separation, as a viscosity solution of the Euler equations with substantial drag and lift.

Recall that d'Alembert's paradox had to be resolved, in one way or the other, to save theoretical fluid mechanics from complete collapse, when the Wright brothers managed to get their Flyer off the ground into sustained flight in 1903 with a 10 hp engine.

Prandtl, named the Father of Modern Fluid Mechanics, discriminated against the potential solution by an ad hoc postulate that 4. was unphysical (without touching 2.) and obtained drag without lift. Kutta-Zhukovsky, named Fathers of Modern Aerodynamics, discriminated against the potential solution by an ad hoc postulate that 2. was unphysical (without touching 4.) and obtained lift without drag. Hoffman-Johnson showed, without ad hoc postulate, that the potential solution is unstable at separation and develops into non-stationary turbulent 3d rotational slip separation causing drag and lift.

The length of the time line 1750-1752-1904-2008 is remarkable from a scientific point of view. Little happened between 1752 and 1904 and between 1904 and 2008, and what happened in 1904 was not in touch with reality. For detailed information, see The Secret of Flight.

1946: Nobel Laureate Hinshelwood made the following devastating analysis:
• D'Alembert's paradox separated fluid mechanics from its start into theoretical fluid mechanics explaining phenomena which cannot be observed, and practical fluid mechanics or hydraulics observing phenomena which cannot be explained.

The only glimpse in the darkness was offered by the mathematician Garrett Birkhoff in his 1950 book Hydrodynamics, by asking if any potential flow is stable, a glimpse of light that was directly blown out by a devastating critique of the book from the fluid dynamics community, which made Birkhoff remove his question in the 2nd edition of the book and never return to hydrodynamics.

The 2008 resolution of d'Alembert's Paradox leading into the New Theory of Flight by Hoffman-Johnson has been met with complete silence/suppression by the fluid mechanics community, still operating under the paradigm of Hinshelwood's analysis.

Sunday, May 29, 2016

Restart of Quantum Mechanics: From Observable/Measurable to Computable

Schrödinger and Heisenberg receiving the Nobel Prize in Physics in 1933/32.

If modern physics were to start today, instead of as it did 100 years ago with the development of quantum mechanics as atomistic mechanics by Bohr-Heisenberg and Schrödinger, what would be the difference?

Bohr-Heisenberg were obsessed with the question:
• What can be observed?
motivated by Bohr's Law:
• We are allowed to speak only about what can be observed.

Today, with the computer in the service of atomic physics, a better question may be:
• What can be computed?
possibly based on the idea that
• It may be meaningful to speak about what can be computed.
Schrödinger, as the inventor of the Schrödinger equation as the basic mathematical model of quantum mechanics, never accepted the Bohr-Heisenberg Copenhagen Interpretation of quantum mechanics, with the Schrödinger wave function as solution of the Schrödinger equation interpreted as a probability of particle configuration, with collapse of the wave function into an actual particle configuration under observation/measurement. Schrödinger sought an interpretation of the wave function as a physical wave in a classical continuum mechanical sense, but had to give in to Bohr-Heisenberg, because the multi-dimensionality of the Schrödinger equation did not allow a direct physical interpretation, only a probabilistic particle interpretation. Thus the Schrödinger equation became to Schrödinger a monster out of control, as he expressed in a famous quote.

And Schrödinger's equation is a monster also from a computational point of view, because the solution work scales exponentially with the number of electrons $N$ and thus is beyond reach already for small $N$. But the Schrödinger equation is an ad hoc model with only a weak formal non-physical rationale, including the basic ingredients of (i) linearity and (ii) multi-dimensionality.

Copenhagen quantum mechanics is thus based on a Schrödinger equation which is an ad hoc model and which cannot be solved with any assessment of accuracy because of its multi-dimensionality, and thus cannot really deliver predictions which can be tested against observations, except in very simple cases. The Copenhagen dogma is then that predictions of the standard Schrödinger equation are always in perfect agreement with observation, but it is a dogma which cannot be challenged, because predictions cannot be computed ab initio.

In this situation it is natural to ask, in the spirit of Schrödinger, for a new Schrödinger equation which has a direct physical meaning and whose solutions can be computed ab initio, and this is what I have been exploring in many blog posts and in the book (draft) Many-Minds Quantum Mechanics. The basic idea is to replace the linear multi-d standard Schrödinger equation with a computable non-linear system in 3d as the basis of a new form of physical quantum mechanics. I will return with more evidence of the functionality of this approach, which is very promising...

Note that a wonderful thing with computation is that it can be viewed as a form of non-destructive testing, where the evolution of a physical system can be followed in full minute detail without any form of interference from an observer, thus making Bohr's Law a meaningless limitation of scientific thinking and work, from a pre-computer era, preventing progress today.

PS It is maybe wise to be a little skeptical of assessments of agreement between theory and experiment to extremely high precision. It may be that things are arranged or rigged so as to give exact agreement, by changing computation/theory or experiment.

Saturday, May 28, 2016

Aristotle's Logical Fallacy of Affirming the Consequent in Physics

One can find many examples in physics, both classical and modern, of Aristotle's logical fallacy of Affirming the Consequent (confirming an assumption by observing a consequence of the assumption):

1. Assume the Earth rests on 4 turtles, which keeps the Earth from "falling down". Observe that the Earth does not "fall down". Conclude that the Earth rests on 4 turtles.

2.
Observe a photoelectric effect in accordance with a simple (in Einstein's terminology "heuristic") argument assuming light can be thought of as a stream of particles named "photons". Conclude that light is a stream of particles named photons.

3. Assume light is affected by gravitation according to the general theory of relativity as described by Einstein's equations. Observe an apparent slight bending of light as it passes near the Sun, in accordance with an extremely simplified use of Einstein's equations. Conclude universal validity of Einstein's equations.

4. Observe lift of a wing profile in accordance with a prediction from potential flow modified by large-scale circulation around the wing. Conclude that there is large-scale circulation around the wing.

5. Assume that predictions from solving Schrödinger's equation are always in perfect agreement with observation. Observe good agreement in some special cases for which the Schrödinger equation happens to be solvable, as in the case of Hydrogen with one electron. Conclude universal validity of Schrödinger's equation, in particular for atoms with many electrons for which solutions cannot be computed with assessment of accuracy.

6. Assume there was a Big Bang and observe a distribution of galaxy positions/velocities which is very, very roughly in accordance with the assumption of a Big Bang. Conclude that there was a Big Bang.

7. Assume that doubled CO2 in the atmosphere from burning of fossil fuel will cause catastrophic global warming of 2.5 - 6 C. Observe global warming of 1 C since 1870. Conclude that doubled CO2 in the atmosphere from burning of fossil fuel will cause catastrophic global warming of 4 - 8 C.

8. Assume that two massive black holes merged about 1.3 billion years ago and thereby sent a shudder through the universe as ripples in the fabric of space and time called gravitational waves, which five months ago washed past Earth and stretched space, making the entire Earth expand and contract by 1/100,000 of a nanometer, about the width of an atomic nucleus. Observe a wiggle of an atom in an instrument and conclude that two massive black holes merged about 1.3 billion years ago, which sent a shudder through the universe as ripples in the fabric of space and time called gravitational waves...

9. Observe experimental agreement of the anomalous magnetic dipole moment of the electron to 10 decimal places with a prediction by Quantum Electrodynamics (QED). Conclude that QED is universally valid for any number of electrons as the most accurate theory of physics. Note that the extremely high accuracy for the specific case of the anomalous magnetic dipole moment of the electron compensates for the impossibility of testing in more general cases, because the equations of QED are even less possible to solve with assessment of accuracy than Schrödinger's equation.

This logical fallacy is so widely practiced that for many it may be difficult to see the arguments as fallacies. Test yourself!

PS1. Observe that if a theoretical prediction agrees with observation to very high precision, as is the case concerning the Equivalence Principle stating equality of inertial and gravitational (heavy) mass, then it is possible that what you are testing experimentally is in fact the validity of a definition, like testing experimentally whether there are 100 centimeters in a meter (which would be absurd).
PS2 Books on quantum mechanics usually claim that there is no experiment showing any discrepancy whatsoever with solutions of the Schrödinger equation (in the proper setting), which is taken as strong evidence that the Schrödinger equation gives an exact description of all of atomic physics (in a proper setting). The credibility of this argument is weakened by the fact that solutions can be computed only in very simple cases.

Friday, May 27, 2016

Emergence by Smart Integration of Physical Law as Differential Equation

Perfect Harmony of the European Parliament: Level curves of political potential generated by an empty spot in the middle.

This is a continuation of previous posts on a new view of Newton's law of gravitation. We here connect to the Fundamental Theorem of Calculus of the previous post, which allows one to bypass computing an integral by tedious, laborious summation, by instead using a primitive function of the integrand:
• $\int_0^t f(s)ds = F(t) - F(0)$  if  $\frac{dF}{dt} = f$.

This magical trick of Calculus, computing an integral (by construction a sum) without doing the summation, is commonly viewed to have triggered the scientific revolution shaping the modern world. The magic here lies in computing an integral $\int_0^t f(s)ds$ in a smart way, rather than computing a derivative $\frac{dF}{dt}$ in a standard way.

The need to compute integrals comes from the fact that physical laws are usually expressed in terms of derivatives, for example as an initial value problem: Given a function $f(t)$, determine a function $F(t)$ such that
• $DF(t) = f(t)$ for $t\ge 0$ and $F(0) = 0$,
where $DF =\frac{dF}{dt}$ is the derivative of $F$. In other words, given a function $f(t)$, determine a primitive function $F(t)$ of $f(t)$ with $F(0)=0$, that is, determine/compute the integral by the formula
• $\int_0^t f(s)ds = F(t)$ for $t\ge 0$.

Using the Fundamental Theorem to compute the integral would then correspond to solving the initial value problem by simply picking a primitive function $F(t)$ satisfying $DF = f$ and $F(0)=0$ from a catalog of primitive functions, allowing one to jump in one leap from $t=0$ to any later time $t$. Not very magical perhaps, but certainly smart!

The basic initial value problem of mechanics is expressed in Newton's 2nd Law $f=ma$, where $f$ is force, $m$ mass and $a(t)=\frac{dv}{dt}=\frac{d^2x}{dt^2}$ is acceleration, $v(t)=\frac{dx}{dt}$ velocity and $x(t)$ position, that is,
• $f(t) = m \frac{d^2x}{dt^2}$.           (1)

Note that in this formulation of the 2nd Law, it is natural to view position $x(t)$ with acceleration $\frac{d^2x}{dt^2}$ as given, from which force $f(t)$ is derived by (1). Why? Because position $x(t)$ and acceleration $\frac{d^2x}{dt^2}$ can be observed, from which the presence of force $f(t)$ can be inferred or derived or concluded, while direct observation of force may not really be possible. In this setting the 2nd Law acts simply to define force in terms of mass and acceleration, rather than to make a connection with some other definition of force.

Writing Newton's 2nd law in the form $f=ma$, thus defining force in terms of mass and acceleration, is the same as writing Newton's Law of Gravitation:
• $\rho = \Delta\phi$,                          (2)
thereby defining mass density $\rho (x)$ in terms of gravitational potential $\phi (x)$ by a differential equation.
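As a minimal illustration of the "derived by differentiation" direction of (1), here is a sketch with made-up observation data (nothing specific to any experiment is assumed): force is read off locally from observed positions by finite differencing, in the same way as $\rho=\Delta\phi$ would be read off from a recorded potential by the Laplacian:

```python
import numpy as np

# Assumed illustration data: positions x(t) of a body of mass m observed at times t;
# the force is inferred locally from (1), f = m * d^2x/dt^2, by centered differencing.
m = 2.0                                   # assumed mass
t = np.linspace(0.0, 10.0, 1001)          # observation times
dt = t[1] - t[0]
x = 0.5 * 9.8 * t**2                      # "observed" positions (free fall as an example)

a = np.gradient(np.gradient(x, dt), dt)   # acceleration d^2x/dt^2 by differencing
f = m * a                                 # force derived from observed motion
print(f[1:-1].mean())                     # ~ 19.6 = m*g, away from the end points
```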
With this view, both of Newton's laws (1) and (2) would have the same form as differential equations, and the solutions $x(t)$ and $\phi (x)$ would result from solving differential equations by integration or summation as a form of emergence.

In particular, this reasoning supports the idea of viewing the physics of Newton's Law of Gravitation as expressing that mass density somehow is "produced from" gravitational potential by the differential equation $\rho =\Delta\phi$.

To solve the differential equation $\Delta\phi =\rho$ by direct integration or summation in the form
• $\phi (x) = \frac{1}{4\pi}\int\frac{\rho (y)}{\vert x-y\vert}dy$,
would then in physical terms require instant action at distance, which is difficult to explain.

On the other hand, if there were a "smart" way of doing the integration, using some form of Fundamental Theorem of Calculus as above, for example by having a catalog of potentials from which to choose a potential satisfying $\Delta\phi =\rho$ for any given $\rho$, then maybe the requirement of instant action at distance could be avoided.

A smart way of solving $\Delta\phi =\rho$ would be to use the knowledge of the solution $\phi (x)$ in the case of a unit point mass at $x=0$,
• $\phi (x)=\frac{1}{4\pi}\frac{1}{\vert x\vert}$,
which gives Newton's inverse square law for the force $\nabla\phi$, and which is smart in case $\rho$ is a sum of not too many point masses. But the physics would still seem to involve instant action at distance.

In any case, from the analogy with the 2nd Law we have gathered an argument supporting the idea of viewing the physics of gravitation as being expressed by the differential equation $\rho =\Delta\phi$, with mass density $\rho$ derived from gravitational potential $\phi$, rather than the opposite standard view with the potential $\phi$ resulting from mass density $\rho$ by integration or summation, corresponding to instant action at distance.

The differential equation $\Delta\phi =\rho$ would thus be valid by an interplay "in perfect harmony" in the spirit of Leibniz, where on the one hand "gravitational potential tells matter where to be and how to move" and on the other "matter tells gravitational potential what to be". This would be like a Perfect Parliamentary System where the "Parliament tells the People where to be and what to do" and the "People tell the Parliament what to be".

PS There is a fundamental difference between (1) and (2): (1) is an initial value problem in time, while (2) is formally a static problem in space. It is natural to solve an initial value problem by time stepping, which represents integration by summation. A static problem like (2) can be solved iteratively by some form of (pseudo) time stepping towards a stationary solution, which in physical terms could correspond to successive propagation of effects with finite speed of propagation.

Thursday, May 26, 2016

Fatal Attraction of the Fundamental Theorem of Calculus?

Calculus books proudly present the Fundamental Theorem of Calculus as the trick of computing an integral
• $I=\int_a^b f(x)dx$,
not by tedious summation of little pieces as a Riemann sum
• $\sum_i f(x_i)h_i$
on a partition $\{x_i\}$ of the interval $(a,b)$ with step size $h_i = x_{i+1} - x_i$, but by the formula
• $I = F(b) - F(a)$,
where $F(x)$ is a primitive function of $f(x)$ satisfying $\frac{dF}{dx} = f$. The trick is thus to compute an integral, which by construction is a sum of very many terms, not by doing the summation following the construction, but instead taking just one big leap using a primitive function.
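As a minimal illustration of the two routes (an integrand is assumed purely for the example), compare the tedious Riemann summation with the one-leap formula $I=F(b)-F(a)$:

```python
import numpy as np

# Assumed example: f(x) = cos(x) on (0, pi/2), with primitive function F(x) = sin(x).
a, b, n = 0.0, np.pi/2, 100000
x = np.linspace(a, b, n + 1)
h = x[1] - x[0]

riemann = np.sum(np.cos(x[:-1]) * h)       # summation of very many little pieces
one_leap = np.sin(b) - np.sin(a)           # I = F(b) - F(a), one big leap
print(riemann, one_leap)                   # both close to 1
```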
On the other hand, to compute a derivative no trick is needed according to the book; you just compute the derivative using simple rules and a catalog of already computed derivatives. In a world of analytical mathematics, computing integrals is thus valued higher than computing derivatives, and this is therefore what fills Calculus books.

In a world of computational mathematics, the roles are switched. To compute an integral as a sum can be viewed as computationally trivial, while computing a derivative $\frac{dF}{dx}$ is a bit more tricky, because it involves dividing increments $dF$ by small increments $dx$.

This connects to Poisson's equation $\Delta\phi =\rho$ of Newton's theory of gravitation discussed in recent posts. What is here to be viewed as given, and what is derived? The standard view is that the mass density $\rho$ is given and the gravitational potential $\phi$ is derived from $\rho$ as an integral, seemingly by instant action at distance. In alternative Newtonian gravitation, as discussed in recent posts, we instead view $\phi$ as primordial and $\rho =\Delta\phi$ as being derived by differentiation, with the advantage of requiring only local action.

We thus have two opposing views:
• putting together = integration, requiring (instant) action at distance with a dull tool
• splitting apart = differentiation, involving local action with a sharp tool.
It is not clear which to prefer.

Connection between Neo-Newtonian and Einsteinian Gravitational Theory

Hen laying eggs by local action.

• space-time curvature tells matter to move along geodesics
• gravitational potential tells matter to move according to Newton's 2nd Law
• space-time curvature is connected to matter by Einstein's equation
• gravitational potential is connected to matter by Poisson's/Newton's equation,
where the "telling" goes both ways.

Wednesday, May 25, 2016

Newton's Genius and New View on Gravitation

Newton computed the gravitational attraction of a planet, as a spherically symmetric distribution of matter, to be equal, away from the planet, to that of a point mass of the same total mass at the center of the planet. This made it possible for Newton to model gravitational interaction of planets as gravitational interaction of point masses, a much simpler problem from a computational point of view. Newton thus could simplify the computationally impossible problem of instant gravitational interaction at distance of all the individual atoms of one planet with all the individual atoms of another planet, to the interaction between two point masses of the same total mass. Ingenious, and absolutely necessary to make the theory useful and thereby credible.

But let us reflect a bit on the physics of instant individual interaction at distance of each atom of one planet with each atom of another planet, which we agreed is computationally impossible. We now ask: is it physically possible? Is it thinkable that each atom can instantly, at distance, exchange details about position and mass with all other atoms by using some form of world wide web? Think about it!

The answer can only be NO. Atoms cannot have access to such technology. It is unthinkable.
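As an aside, Newton's point-mass reduction invoked above can be checked by direct summation; here is a minimal sketch with assumed illustration values (unit total mass and $G=1$), comparing the summed attraction of many small pieces of a uniform sphere with that of a single point mass at its center:

```python
import numpy as np

# Assumed illustration (G = 1): attraction of a uniform ball of radius 1 and total
# mass 1 on a point at distance 3, computed by summing over many small pieces.
rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, size=(200000, 3))
pts = pts[np.sum(pts**2, axis=1) <= 1.0]          # keep samples inside the unit ball
m = 1.0 / len(pts)                                # equal pieces summing to total mass 1

d = np.array([3.0, 0.0, 0.0])                     # field point outside the ball
r = pts - d                                       # vectors from field point to pieces
dist = np.linalg.norm(r, axis=1)
force = np.sum(m * r / dist[:, None]**3, axis=0)  # sum of piecewise attractions

print(force[0], -1.0 / 3.0**2)                    # both ~ -0.111: a point mass at the center
```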
The result is that we have to view gravitation in a different way, not as individual instant attraction between small pieces of matter at distance, so why not the other way around, as suggested in recent posts: What is primordial is then a gravitational potential $\phi$ with associated gravitational force $\nabla\phi$, to which matter density $\rho$ is connected by $\rho =\Delta\phi$ through the local operation in space of the Laplacian $\Delta$.

With this view there is no instant action at distance between atoms to explain, but instead local production of matter without any demand of atomistic resolution into pieces, which at least is thinkable. It would be interesting to listen to Newton's reaction to this idea.

PS Business Insider reports that:
• Earth's core is 2.5 years younger than its crust due to some eerie physics,
the eerie physics being Einstein's general theory of relativity claiming that clocks slow down with increasing gravitation. Yes, maybe your feet are a bit younger than your head...

The Value of Compulsory (Climate) Science?

Sunset over a fossil-free state without CO2-polluting people and welfare.

Tim Ball asks for Compulsory Courses for Any Curriculum; The Science Dilemma:
• Science is pervasive directly and indirectly in every phase of modern life.
• This knowledge must be a fundamental part of any school curriculum.

Tim suggests that compulsory science could save people from a meaningless ban on fossil fuel:
• Climate skeptics struggle with getting the majority of people to understand the problems with the UN Intergovernmental Panel on Climate Change's (IPCC) anthropogenic global warming (AGW) story. The problem is much wider because it relates to the lack of scientific abilities among a majority of the population.

Tim is not happy with the present situation:
• I was involved in many curricula fights, few of them ever resolved much.
• Every subject area and discipline considered theirs essential to an education. They failed in achieving curricula useful to the student and society.
• This was because they were controlled by people ensuring what interested them or what ensured their job, rather than what the student needed to become an effective informed citizen.
• Students are not given the tools to avoid being exploited. Indeed, sometimes I think the system keeps them ignorant so it can exploit them as adults.
• Peoples of the Rainforest teach their children what they need to survive in the real and dangerous world in which they live.
• We don't do this at any level. For most North American university or college students the experience is a socially acceptable and ridiculously expensive form of unemployment. Most of them learn more about life and themselves in part-time and summer jobs.

I agree with Tim about the incentives behind curricula, but I am not sure compulsory science would be beneficial. The trouble with anthropogenic global warming is that it is massively backed by scientists and academic institutions, and compulsory science could just mean more backing of fake science. The real Science Dilemma is maybe rather to distinguish real science from fake science.

PS Our new social democratic Minister of Climate and Vice Prime Minister Isabella Lövin today proudly announces that...

Tuesday, May 24, 2016

Wallström: A Benefit (without Connection to Office) Is No Bribe

Chief prosecutor Alf Johansson at the national anti-corruption unit informs TT:
• No bribery offence was committed when Foreign Minister Margot Wallström was given an apartment by Kommunal.
The prosecutor closes the preliminary investigation.
• It is also evident that Wallström has received a benefit, Johansson holds, but it cannot be proven that she received it specifically in her capacity as minister.
• In summary, I have not been able to find that there is a so-called connection to office, that is, a benefit in the form of a rental contract granted on account of the minister's position.

Lawline made an analysis when the apartment affair was revealed in January, under the title MARGOT WALLSTRÖM FULFILS THE CRITERIA FOR TAKING A BRIBE, with among other things the following motivation:
• From the contract between the parties it is clear that the tenancy applies as long as Margot Wallström holds her present position.

The chief prosecutor thus holds that Wallström has received a benefit, but also holds, in apparent conflict with the contract, that this benefit was not connected with Wallström's position as minister. Or perhaps her office did have some connection, but compared with the pure friendship connection that Johansson knows must have existed (well, what else could it have been?) between Wallström and Kommunal (though without having heard Wallström), the connection to office must surely be regarded as negligible? Of course Wallström says "I love Kommunal", and of course the love is returned!

In any case, Sweden is the country in the world with the least corruption, not counting friendship corruption of course, which is not at all the same as the real solid corruption flourishing in all other countries... The reasonable thing now is surely that Wallström moves back into Kommunal's apartment, since, as Wallström says, "no wrong has been committed". It is after all a rather nice apartment: large, furnished, central, low rent... See also an earlier post. Compare also with SVT: Wallström's apartment directly linked to her political office.

The Stupid Demand of Absolute Simultaneity which Destroyed Rational Physics

Absolute simultaneity of Tea Time.

• 2005 marked the centenary of one of the most remarkable publications in the history of science, Albert Einstein's ''On the Electrodynamics of Moving Bodies,'' in which he presented a theory that later came to be known as the Special Theory of Relativity (STR).
• This 1905 paper is widely regarded as having destroyed the classical conceptions of absolute time and space, along with absolute simultaneity and absolute length, which had reigned in physics from the times of Galileo and Newton to the dawn of the twentieth century.

Einstein is thus commonly viewed as having destroyed classical Newtonian physics, and to judge whether this is something to applaud or not, it is necessary to take a look at Einstein's reason for the destruction as presented in the 1905 article. And that is a perceived impossibility of synchronising clocks with different positions and velocities, a perceived impossibility of fulfilling an absolute need of absolute simultaneity which Einstein attributed to classical Newtonian mechanics.

But what says that the world of classical mechanics requires absolute simultaneity in order to go around? Yes, it is needed for navigation by the Sun or GPS by humans, but birds navigate without synchronised clocks. And wasn't the world going around pretty well before Poincaré or Einstein started worrying about clock synchronisation and absolute simultaneity?

So is there no need of absolute simultaneity in classical Newtonian mechanics?
Yes, the standard idea is that the gravitation from the Sun is pulling the Earth around by instant action at distance, and that seems to require (i) synchronisation of Sun time and Earth time and (ii) a mechanism for instant action at distance.

Since no progress has been made concerning (i) and (ii) over the centuries since Newton, I have in recent posts tested a way to circumvent these difficulties or impossibilities, and that is to view the gravitational potential $\phi$ as primordial, from which matter density $\rho =\Delta\phi$ is derived by the local action in space of the Laplacian $\Delta$. With this view, which is Newtonian mechanics with just a little twist on what comes first, matter or gravitational potential, there is no need for absolute simultaneity and thus no longer any need to destroy a most beautiful and functional Newtonian mechanics.

Einstein thus attributes an unreasonable requirement of absolute simultaneity to Newtonian mechanics, and then proceeds to kill Newton. Of course this can be seen as an example of the all too well known tactic of attributing some evil quality (true or false) to your enemy, and then killing him. And the book quoted above also ventilates such criticism:
• Unfortunately for Einstein's Special Theory, however, its epistemological and ontological assumptions are now seen to be questionable, unjustified, false, perhaps even illogical.
• The precise philosophical arguments for the illogicality, falsity, or unjustifiability of the epistemological, semantic, and ontological presuppositions of the Special Theory remain, with a few exceptions, unknown among physicists.

Pretty tough words, but how to cope with lack of knowledge and ignorance?

Monday, May 23, 2016

Neo-Newtonian Cosmology: Progress!

We consider a Neo-Newtonian cosmological model in the form of Euler's equations for a compressible gas subject to Newtonian gravitation: Find $(\phi ,m, e,p)$ depending on a Euclidean space coordinate $x$ and time $t$, such that for all $(x,t)$:
• $\Delta\dot\phi + \nabla\cdot m =0$                                                           (1)
• $\dot m +\nabla\cdot (mu) +\nabla p + \rho\nabla\phi =0$                              (2)
• $\dot e +\nabla\cdot (eu) +p\nabla\cdot u +\rho\nabla\cdot m=0$,                       (3)
where $\phi$ is gravitational potential, $\rho =\Delta\phi$ is mass density, $m$ is momentum, $u=\frac{m}{\rho}$ is matter velocity, $p$ is pressure, $e$ is internal energy as the sum of heat energy $\rho T$ with $T$ temperature and gravitational energy $\rho\phi$, and the dot indicates time differentiation, see Many-Minds Relativity 20.3 and Computational Thermodynamics Chap 32. Here $x$ is a space coordinate in a fixed Euclidean coordinate system, and $t$ is a local time coordinate which is not globally synchronised.

The primary variables in this model are the gravitational potential $\phi$ and the momentum $m$, connected through (2) expressing conservation of momentum or Newton's 2nd law. We view matter density $\rho =\Delta\phi$ as being derived by local action of the differential operator $\Delta$. The model is complemented by a constitutive equation for the pressure.

The essential components of this model are:
1. Newton's law of gravitation $\rho =\Delta\phi$ connecting mass to gravitational potential
2. $\nabla\phi$ as gravitational force
3. Newton's 2nd law (2) connecting motion to force
4.
(1) expressing conservation of mass and (3) conservation of energy,
with the following features:
• no action at distance, with $\phi$ primordial and $\rho =\Delta\phi$ a derived quantity
• global clock synchronisation not needed, because all action is local
• equivalence of inertial and gravitational mass by (2)
• $\Delta\phi$ of variable sign opens for positive and negative matter
• no limit on matter speed
• no electro-magnetics or nuclear physics so far included in the model.

It may well be that a model of this form is sufficient to describe the mechanics of the universe we can observe, a universe resulting from an interplay of gravitational force and motion of matter. You can test the model in the app Dark Energy on the App Store. Try it!

Some form of starting values is needed for simulations using the model, but as in weather prediction, initial values at a given global time are not known and have to be constructed from observations over time, possibly involving synchronisation of nearby clocks.

The primordial quantities in this Newtonian model are the gravitational potential and the gravitational force. This is the opposite of Einstein's model, where gravitational force is eliminated and replaced by "space-time" curvature. It is no wonder that Einstein exclaimed "Forgive me Newton!!" when taking this big extreme step.

A fundamental problem with modern physics is the incompatibility of Einstein's theory of gravitation in "curved space-time" and quantum mechanics in Euclidean space. This big obstacle would disappear if Einstein's gravitation was given up, and Newton's gravitation was resurrected in suitable form. What is the reason not to take this step and open the way for progress? Recall that nobody understands what "curved space-time" is, while everybody can understand what a Euclidean coordinate system is and how to measure local time. If we follow Einstein's maxim of always seeking to "make things as simple as possible, but not simpler", then Newton would have to be preferred over Einstein, or what do you think?

The basic force of cosmology is gravitation, and thus it may appear irrational, from a rationality point of view, to seek to eliminate gravitational force from the discussion altogether, which is what Einstein did and which, perhaps paradoxically, gave him fame bigger than that of Newton.

PS1 What drove Einstein into his extremism? Well, the special theory of relativity, which Einstein presented in a short sketchy note in 1905, did not draw any attention in the first years, and when it did, the reaction was negative. The only thing left for Einstein, to avoid being kicked out of academia, was to raise the bet by generalising the special theory, which did not cover gravitation, into a general theory of relativity including gravitation. The only thing Einstein had in his scientific toolbox was the Lorentz transformation between non-accelerating inertial systems, and the only way to bring that into contact with gravitation was to introduce coordinate systems in free fall, which in the presence of gravitation required strange transformations of space and time coordinates. Einstein's "happiest thought" was when he realised that sitting in a freely falling elevator cannot be distinguished from sitting in an elevator at rest assuming no gravitation...
...until the freely falling elevator hits the ground. It was this idea of free fall seemingly without gravitation which allowed him to keep the Lorentz transformation, with all its wonderful effects of the special theory without gravitation, when generalising to include gravitation... but the price was high... and the free fall is still going on... Compare with Einstein's Pathway to General Relativity.

PS2 Another fact not to suppress is that the special theory of relativity was focused on propagation of light with the same speed in all inertial coordinate systems if connected by the Lorentz transformation, which gave strange effects for the mechanics of matter (without gravitation), including dilation in time and contraction in space. But the Lorentz transformation was shaped for light propagation and not for mechanics of matter, and so it was no wonder that strange effects came out. Since the Lorentz transformation also underlies the general theory of relativity, it is even less of a wonder that strange effects come out when adding gravitation to the picture. The lack of scientific logic is clear: if you apply a theory designed to describe a certain phenomenon (light propagation) to a different type of phenomenon (mechanics of matter), then you must be prepared to get into trouble, even if your name is Einstein...

Sunday, May 22, 2016

Equivalence Principle as Definition: Experiment

Here is a little experiment you can do yourself on the kitchen table, supporting the idea that inertial mass is made equal to gravitational mass by definition, as a result of a definition of force in terms of gravitational force: Take two identical pieces of material, put one of the pieces on a horizontal table with a frictionless surface and connect it to the other piece, hanging over the edge, by a weightless rope as indicated in the picture, and let go from rest. Record the acceleration of the system. Observe that it is half of the gravitational acceleration of one of the pieces in free fall. Conclude that inertial mass = gravitational mass and that force ultimately is defined in terms of gravitational force, as expressed by the green arrows. Understand that what you test with the experiment is whether mass = force/acceleration is independent of the orientation of the Euclidean coordinate system.

Friday, May 20, 2016

Gravitational Mass = Inertial Mass by Definition: Hard Thinking

Typical illustration of the equivalence principle. Get the point?

In Newtonian mechanics, as already observed and understood by Galileo, inertial and gravitational (heavy) mass are equal, because there is only one form of mass and that is inertial mass, as a measure of acceleration vs force per unit of volume. Since Newtonian gravitation is a force per unit of volume, gravitational mass is equal to inertial mass, by definition, as expressed by the fact that the dimension of gravitation is $m/s^2$. See also Chap 18 of Many-Minds Relativity.

Let us compare this insight with what modern physics says, as told by Nigel Calder in Magic Universe:
• A succession of experiments to check the equivalence principle—the crucial proposition that everything falls at the same rate—began with Lorand Eötvös in Budapest in 1889. After a century of further effort, physicists had improved on his accuracy by a factor of 10,000. The advent of spaceflight held out the possibility of a further improvement by a factor of a million.
• If another theory of gravity is to replace Einstein's, the equivalence principle cannot be exactly correct.
Even though it’s casually implicit for every high-school student in Newton’s mathematics, Einstein himself thought the equivalence principle deeply mysterious. ‘Mass,’ he wrote, ‘is defined by the resistance that a body opposes to its acceleration (inert mass). It is also measured by the weight of the body (heavy mass). That these two radically different definitions lead to the same value for the mass of a body is, in itself, an astonishing fact.’ • Francis Everitt of Stanford put it more forcibly. ‘In truth, the equivalence principle is the weirdest apparent fact in all of physics,’ he said. ‘Have you noticed that when a physicist calls something a principle, he means something he believes with total conviction but doesn’t in the slightest degree understand.’ • Together with Paul Worden of Stanford and Tim Sumner of Imperial College London, Everitt spent decades prodding space agencies to do something about it. Eventually they got the go-ahead for a satellite called STEP to go into orbit around the Earth in 2007. As a joint US–European project, the Satellite Test of the Equivalence Principle (to unpack the acronym) creates, in effect, a tower of Pisa as big as the Earth. Supersensitive equipment will look for very slight differences in the behaviour of eight test masses made of different materials— niobium, platinum-iridium and beryllium—as they repeatedly fall from one side of the Earth to the other, aboard the satellite. • ‘The intriguing thing,’ Everitt said, ‘is that this advance brings us into new theoretical territory where there are solid reasons for expecting a breakdown of equivalence. A violation would mean the discovery of a new force of Nature. Alternatively, if equivalence still holds at a part in a billion billion, the theorists who are trying to get beyond Einstein will have some more hard thinking to do.’  So Einstein thought to be deeply mysterious, what every high school student directly understands, and was able to imprint his idea into the brains of all modern physicists, who now have some hard thinking to do... Einstein skillfully jumped between definition as a tautology true by construction and physical principle/law, which may be valid/true or not, thereby creating a total confusion. Another aspect is the constancy of the speed of light, which today is used as definition with the meter defined by distance traveled by light in certain time, yet physicists go around and believe that this works because the speed of light is constant. If you cannot distinguish between a definition without content and statement with content, then you may find yourself in trouble and mislead others... PS This previous post may be consulted: The Principal Difference between Principles and Laws in Physics. Note in particular the distinction that a law is typically expressed as a formula, while a principle is expressed in words e.g. as equality of inertial and gravitational mass. New Theory of Flight Presented at KTH The New Theory of Flight developed together with Johan Hoffmann and Johan Jansson will be presented at a KTHX seminar on May 26, see Note that the starting point is the incompressible Euler equation formulated by Euler around 1750 and  presented as follows: • Everything that the theory of fluids contains is embodied in the two equations I have formulated. It is not the laws of mechanics that we lack in order to pursue this research, only the analysis which has not been sufficiently developed for this purpose. 
What we do is to compute turbulent solutions of the Euler equations, after having realised for the first time the development Euler is asking for, and we then discover, as predicted by Euler, "everything that the theory of fluids contains", or at least lots of it. Sometimes it takes a long time for a correct idea to bear fruit.

Thursday, May 19, 2016

Unsolvable Incompatible Equations of Modern Physics = Complete Success!

All books on modern physics start out with praise of Schrödinger's equation of quantum mechanics and Einstein's equation of general relativity as the highest achievements of science. Here is what Stephen Hawking says in The Grand Design (while signaling that something is weird):
• The quantum model of nature encompasses principles that contradict not only our everyday experience but our intuitive concept of reality. Those who find those principles weird or difficult to believe are in good company, the company of great physicists such as Einstein and even Feynman, whose description of quantum theory we will soon present. In fact, Feynman once wrote, "I think I can safely say that nobody understands quantum mechanics." But quantum physics agrees with observation. It has never failed a test, and it has been tested more than any other theory in science.

Then comes a little caveat saying that unfortunately the equations are incompatible, and so one of them must be wrong, but in any case both equations are certainly valid/true, and since both are written in stone, none of them can be wrong after all. Wikipedia's summary of the situation and the prospects points to string theory as the only hope, but the string hype seems to be fading, and so the prospects seem pretty dim.

But there is another, even more cumbersome problem with these equations: they cannot be solved!

Schrödinger's equation for an atom with $N$ electrons is formulated in terms of a wave function which depends on $3N$ space coordinates, which makes numerical solution impossible for $N > 10$ according to Nobel Laureate Walter Kohn, and already the case $N=2$ of Helium is full of difficulties, not to speak of the case of Oxygen with $N=8$. An analytical solution is known only in the case of Hydrogen with $N=1$.

Einstein's equation is even more difficult to solve, and only a few analytical solutions of extreme simplicity are known (e.g. the vacuum solution of a spherically symmetric gravitational field for a static mass), and numerical solution is not really an issue, because data such as initial and boundary values and forcing are completely up in the air and the choice of coordinate system is unclear. We quote from Baumgarte and Shapiro, Numerical Relativity: Solving Einstein's Equations on the Computer:
• Chapters 12 and 13 focus on the inspiral and coalescence of binary black holes, one of the most important applications of numerical relativity and a promising source of detectable gravitational radiation. These chapters treat the two-body problem in classical general relativity theory, and its solution represents one of the major triumphs of numerical relativity.

We understand that modern physics is based on two equations which cannot be solved, except in very simplistic cases. Yet the equations are claimed to be true in the sense that solutions of the equations always agree with observations, in all cases including complex cases. But wait, how can you know that solutions of the equations always agree with observation, if you cannot solve the equations and produce solutions to compare with observations?
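As an aside, the claim that numerical solution is impossible already for modest $N$ can be made concrete by counting degrees of freedom; the grid resolution below is an assumed illustration value:

```python
# Degrees of freedom for a wave function sampled on a grid with M points per
# coordinate direction (M = 100 assumed for illustration) in 3N dimensions.
M = 100
for N in (1, 2, 8, 10):                # Hydrogen, Helium, Oxygen, ...
    dofs = M ** (3 * N)
    print(f"N = {N:2d} electrons: {dofs:.1e} grid values")
# N = 10 already gives 1e60 values, far beyond any conceivable computer.
```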
Of course you cannot know that. But this troublesome fact for modern physicists is twisted into: Since solutions cannot be computed, it is impossible to find any discrepancy between predictions according to the equations and observations! There are no predictions from solving the equations, and thus there is no discrepancy!

More precisely, it works this way: Suppose you have computed what you view as an approximate solution of Schrödinger's equation for some atom with many electrons, by an ingenious choice of "atomic orbitals" combined with some optimisation to choose a best combination of orbitals, and that the predicted energy is in perfect agreement with observation. Then you congratulate yourself and say that you have produced yet another piece of evidence that solutions of Schrödinger's equation always agree exactly with observations. On the other hand, if your approximate solution does not agree exactly with observation, then you blame the approximation and take that as evidence that without the approximation the agreement would certainly be complete, and then you try some other orbitals... until complete agreement... possibly by twisting the observation, under the firm conviction that solutions to Schrödinger's equation give an exact description of atomic physics.

The net result is that the unsolvable equations of modern physics, which unfortunately are incompatible, must anyway both be valid to an unprecedented precision, since there are no examples of the slightest discrepancy between prediction based on solving the equations and observation. In other words, the equations serve like oracles who know exact answers to important questions, but are not willing to reveal the full truth. Not very helpful.

Do you buy this? Or is there something fishy about solutions to unsolvable mathematical equations, which always give results in perfect agreement with observations? Doesn't perfect agreement sound a little bit too good?

Some quotes, among many similar:
• When thinking about the new relativity and quantum theories I have felt a homesickness for the paths of physical science where there are more or less discernible handrails to keep us from the worst morasses of foolishness. (Sir Arthur Stanley Eddington)
• Einstein, my upset stomach hates your theory [of General Relativity]—it almost hates you yourself! How am I to provide for my students? What am I to answer to the philosophers?!! (Paul Ehrenfest)
• I count Maxwell and Einstein, Eddington and Dirac, among "real" mathematicians. The great modern achievements of applied mathematics have been in relativity and quantum mechanics, and these subjects are, at present at any rate, almost as "useless" as the theory of numbers. (G. H. Hardy)
• Quantum field theory, which was born just fifty years ago from the marriage of quantum mechanics with relativity, is a beautiful but not very robust child. (Steven Weinberg)
• Niels Bohr brainwashed a whole generation of theorists into thinking that the job of interpreting quantum theory was done 50 years ago. (1969 Nobel Laureate Murray Gell-Mann)
• One might very well be left with the impression that the theory (of general relativity) itself is rather hollow: What are the postulates of the theory? What are the demonstrations that all else follows from these postulates? Where is the theory proven? On what grounds, if any, should one believe the theory? ... One's mental picture of the theory is this nebulous mass taken as a whole... One makes no attempt to derive the rest of the theory from the postulates.
(What, indeed, could it mean to "derive" something about the physical world?) One makes no attempt to "prove" the theory, or any part of it. (Robert Geroch in General Relativity from A to B)

Spiral Galaxy Formation in Extended Newtonian Gravitation

1. Cosmological Model

This is a continuation of previous posts on dark matter and The Universe as Weakly Compressible Gas subject to Pressure and Gravitational Forces, which post we here recall: We consider a cosmological model in the form of Euler's equations for a compressible gas subject to Newtonian gravitation: Find $(\rho ,m, e ,\phi ,p)$ depending on a Euclidean space coordinate $x$ and time $t$, such that for all $(x,t)$:
• $\dot\rho + \nabla\cdot (\rho u ) =0$       (or $\frac{D\rho}{Dt} = -\rho\nabla\cdot u$),
where $\rho$ is mass density, $u=\frac{m}{\rho}$ is matter velocity, $p$ is pressure, $\phi$ is gravitational potential, and $e$ is internal energy as the sum of heat energy $\rho T$ with $T$ temperature and gravitational energy $\rho\phi$, the dot indicates time differentiation, and
• $\frac{D\rho}{Dt}=\dot\rho +u\cdot\nabla\rho$
is the convective time derivative of $\rho$, see Many-Minds Relativity 20.3 and Computational Thermodynamics Chap 32. These equations express conservation of mass $\rho$, conservation of momentum $m$ with $\nabla p$ pressure force and $-\nabla\phi$ gravitational force, and conservation of internal energy $e$.

These laws of conservation are complemented by constitutive laws connecting $p$ and $\phi$ to density, of the following form:

A1: Weakly compressible gas ($\delta$ a small positive constant):
• $\Delta p =\frac{\nabla\cdot u}{\delta}= - \frac{1}{\delta\rho}\frac{D\rho}{Dt}$

A2: Compressible perfect gas ($0 < \gamma < 1 $):
• $p=\gamma \rho T$.

B: Newton's law of gravitation:
• $\Delta\phi =\rho$ with $\phi =0$ at infinity.

We observe
1. Similarity of $\nabla p$ and $\nabla\phi$ in the momentum equation.
2. Similarity between A1 and B, connecting $\Delta p$ to $-\frac{D\rho}{Dt}$ (or $-\rho$) and $\Delta\phi$ to $\rho$.
3. $p \ge 0$ and $\phi \le 0$.

Here 1. can be seen as the Equivalence Principle (equality of heavy and inertial mass), expressing that there is no difference between gravitational and other forces (pressure) in Newton's 2nd law expressing conservation of momentum. Further, 2. expresses that the constitutive laws A1 and B both can be viewed as action at distance if $\rho$ is viewed as the cause, but represent local action of differentiation if $\rho$ is viewed as the effect.

For a weakly compressible gas described by A1, there is no need per se to identify a cause-effect relation between $p$ and $\rho$; it is enough to say that $p$ and $\rho$ are connected in a certain way, expressing a form of "perfect harmony". In the same way, there is no need per se to identify a cause-effect relation between $\phi$ and $\rho$; it is enough to say that $\phi$ and $\rho$ are connected in a certain way, expressing a form of "perfect harmony" in the spirit of Leibniz.

The relation $\Delta\phi =\rho$ is explored in Newtonian Matter and Antimatter, with $\Delta\phi > 0$ identifying matter and $\Delta\phi < 0$ antimatter, with dark matter where $\Delta\phi$ is smooth and visible matter where $\Delta\phi$ is singular, typically as a sum of multiples of delta functions representing matter in point form. We refer to such a model as Extended Newtonian Gravitation.

2.
Galaxy Formation

We start from a spherical distribution of low-density dark matter (a halo), with $\Delta\phi$ a smooth function, which we assume to be in static equilibrium with the gravitational force balanced by a weak pressure force with $\nabla p = - \rho\nabla\phi$. Starting from this halo of low-density dark matter, we assume that some visible matter (stars) is formed by concentration of dark matter by gravitational attraction into point masses, with $\rho$ becoming large locally, with the result that the gravitational force $\rho\nabla\phi$ can no longer be balanced by a weak pressure force $-\nabla p$. This is an effect of the different action of pressure and gravitational force, with pressure scaling with surface and gravitational force with volume.

The combined effect of the presence of a halo of dark matter and gravitational collapse of visible matter as a system of point masses may then create a spiral galaxy of visible matter surrounded by a halo of dark matter, which is the standard view of the nature of a spiral galaxy, with in particular a characteristic distribution of velocity of visible matter roughly independent of the distance to the galaxy center as an effect of the dark matter halo.

It thus appears that an extended Newtonian model with $\Delta\phi$ of variable sign and concentration may be sufficient to explain essential aspects of galaxy formation, for which Einstein's equation is useless.

Tuesday, May 17, 2016

Einstein's "Scientific Method": Magic Physics from Definition

Einstein's "scientific method", which brought him immense fame as the greatest physicist of all times, consists of:
• Start from a definition, convention or stipulation/law without physical content, and then draw far-reaching consequences about the physics of the world.

It is not hard to understand that such a "method" cannot work: You cannot draw meaningful conclusions about the world simply from a definition empty of physical content. You cannot develop a meaningful scientific theory from the definition that there are 100 centimeters in a meter.

Einstein cleverly covered this up by naming his definitions or conventions or stipulations/laws "principles":
1. Equivalence Principle: Gravitational mass is equal to inertial mass.
2. Relativity Principle: Observations in inertial coordinate systems moving with constant velocity with respect to each other are to be connected by the Lorentz transformation.
3. Covariance Principle: Physical laws are to have the same form independent of the choice of coordinate system.

Here 1. is an empty definition, because there is only one mass, and that is inertial mass, which measures acceleration vs force, and gravitational force is a force. Gravitational mass is equal to inertial mass by definition. Attempts to "prove/verify" this experimentally, which are constantly being made with ever increasing precision and always with the same result of equality, are as meaningful as experiments attempting to verify that there are 100 centimeters in a meter, which could very well be the next grand challenge for the LHC, in the spirit of Einstein.

2. stipulates that different physical phenomena are to be viewed as the same. This is because the Lorentz transformation is not invariant with respect to initial conditions, and thus Einstein stipulates that two waves satisfying the same form of wave equation, but having different initial conditions, shall be viewed as the same.
No wonder that with this play with identities, all sorts of strange effects of time dilation and space contraction can be drawn out of a magician's hat.

It is clear that physical laws in general take different forms in different coordinate systems, and thus 3. is an absurd stipulation. Alternatively, it is trivial and just says that a physical law will have to transform when expressed in different coordinates, so that the law has the same physical content. So 3. is either absurd or trivial, in both cases devoid of physics.

It is depressing that none of this can be understood by leading modern physicists. Nada. Even more depressing is that the discussion has been closed for 100 years.

Monday, May 16, 2016

The Blind Space Traveler with Gravitational Potential Meter

Hawking inside a space ship without windows, with a Gravitational Potential Meter.

Imagine you are a space traveler locked into a space ship without windows, or traveling through a region of invisible dark matter. Imagine that in this difficult situation you have access to an instrument capable of recording the gravitational potential around the space ship, from near to far away, an instrument or sense which we may call a Gravitational Potential Meter. Below I discuss how such an instrument might be designed.

Would that allow you to create a normal picture of the distribution of celestial objects/matter around you, including your own position, which would be the picture you could see if there were windows or dark matter somehow was made visible, a standard picture/map making it possible to navigate?

Yes, it would, because the mass distribution $\rho (x)$, depending on a Euclidean space coordinate $x$ at any instant of time, is related to the gravitational potential $\phi (x)$ by Poisson's equation (in normalised form):
• $\rho = \Delta\phi$,          (*)
where $\Delta$ is the Laplacian with respect to $x$.

In this setting you would naturally view the gravitational potential $\phi (x)$ as primordial, because this is what you can record/sense, and you would view the mass distribution $\rho (x)$ as a derived quantity, because this is what you can compute, knowing $\phi (x)$, by applying the Laplace operator, which is a differential operator acting locally in space. In this new setting you would not, as in the classical setting of viewing $\rho (x)$ as primordial and $\phi = \Delta^{-1}\rho$ as derived by the inverse of the Laplacian as a non-local operator, have to explain instant action at distance, only the local action of (*), and you would thus have eliminated the question of the physics of instant action at distance, which does not seem to have an answer and as such may be the wrong question.

We conclude that depending on what we can see through instruments or senses, we are led to questions which may or may not have answers. It is natural to think that questions which have answers are better questions than questions which do not.

As to the design of a Gravitational Potential Meter or Gravitational Force Meter, imagine a system of little satellites in free fall distributed over the space of interest and connected to a GPS system allowing tracing of the satellites, thus giving information about the Gravitational Force and from that the Gravitational Potential. It is not unthinkable that such a system could cover any space accessible to space travel, and beyond.
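A minimal sketch of what the on-board computer would do with the recorded potential (a 2d grid and a made-up smooth potential are assumed purely for illustration): apply the discrete Laplacian locally to the sampled $\phi$ and read off $\rho=\Delta\phi$ according to (*), with no action at distance involved:

```python
import numpy as np

# Assumed illustration: a smooth "recorded" potential phi on a 2d grid; the mass
# density is read off locally as rho = Laplacian(phi), as in (*).
n, L = 201, 10.0
x = np.linspace(-L, L, n)
X, Y = np.meshgrid(x, x, indexing="ij")
h = x[1] - x[0]

phi = -np.exp(-(X**2 + Y**2) / 4.0)          # made-up smooth potential well (a "halo")

rho = np.zeros_like(phi)                     # five-point discrete Laplacian (interior)
rho[1:-1, 1:-1] = (phi[2:, 1:-1] + phi[:-2, 1:-1] + phi[1:-1, 2:] + phi[1:-1, :-2]
                   - 4.0 * phi[1:-1, 1:-1]) / h**2

print(rho[n//2, n//2])                       # local value of Delta(phi) at the center
```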
Simultaneity as Non-Physical Convention along with Special Relativity

The book Concepts of Simultaneity: From Antiquity to Einstein and Beyond is presented as follows:
• Max Jammer's Concepts of Simultaneity presents a comprehensive, accessible account of the historical development of an important and controversial concept—which played a critical role in initiating modern theoretical physics—from the days of Egyptian hieroglyphs through to Einstein's work in 1905, and beyond.
• Beginning with the use of the concept of simultaneity in ancient Egypt and in the Bible, the study discusses its role in Greek and medieval philosophy as well as its significance in Newtonian physics and in the ideas of Leibniz, Kant, and other classical philosophers.
• The central theme of Jammer's presentation is a critical analysis of the use of this concept by philosophers of science, like Poincaré, and its significant role in inaugurating modern theoretical physics in Einstein's special theory of relativity.
• Particular attention is paid to the philosophical problem of whether the notion of distant simultaneity presents a factual reality or only a hypothetical convention. The study concludes with an analysis of simultaneity's importance in general relativity and quantum mechanics.
In earlier posts I have argued that simultaneity in time at distant points in space is a man-made convention, which is useful to humanity in many ways including GPS, but which as a convention has no role in describing the physics of material bodies without GPS receivers. Jammer presents much evidence supporting this view without closing the door to simultaneity as some form of factual reality.
Einstein's special relativity came out of a simple thought experiment showing that agreement on distant simultaneity, defined by a certain conventional form of clock synchronization set up by Einstein, cannot be established for different observers moving with speeds comparable to the speed of light with respect to each other. Einstein thus started from a certain ad hoc man-made convention and, from the impossibility of making the convention work for moving observers, jumped to the conclusion that our concepts of the physics of space and time would have to be fundamentally changed. And the world jumped along.
But is it possible to change physics by man-made convention? Can we change physics by changing our man-made conventions to measure time and space, by changing from yard to meter? I think not. Why believe that special relativity is real physics, when special relativity is based on the impossibility of making a certain man-made convention work?
I have stressed that the notion of distant simultaneity is present in the standard form of Newton's law of gravitation as Poisson's equation $\Delta\phi =\rho$, seemingly creating a gravitational potential $\phi (x)$, depending on a Euclidean space coordinate $x$, from instant action at distance by a primordial matter distribution $\rho (y)$ with $y$ different from $x$, represented as $\phi =\Delta^{-1}\rho$ with the inverse $\Delta^{-1}$ a non-local (integral) operator. On the other hand, viewing the gravitational potential $\phi$ as primordial and $\rho =\Delta\phi$ as derived by local differentiation, there is no need to explain the physics of instant action at distance, which Newton left open under the criticism of Leibniz and which has resisted all attempts at explanation after Newton.
We conventionally view matter $\rho$ as primordial, since we can see matter at distance if it is sending out light, while we cannot see the gravitational potential $\phi$, only feel that it is there.  But with a different eyes we may be able to see the gravitational potential $\phi$, but not $\rho$, and we would then naturally view $\phi$ to be primordial. With such eyes we might be able to see a gravitational potential of dark matter and dark energy, which we now cannot see, only feel that it is there.    söndag 15 maj 2016 The Quest for the Ultimate Theory of Time: Physical Stability or Empty Probability? The question of the direction of time, or the arrow of time, is still haunting physicists with the physicist and cosmologist Sean Carrol expressing state of art in e.g. the book From Eternity to Here: The Quest for the Ultimate Theory of Time, which is basically to say following old Boltzmann: There is a quantity named entropy, which cannot decrease with time and when strictly increasing sets a direction of time motivated by Carroll as follows in an introduction: • The reason why entropy wants to increase is deceptively simple: • There are more ways to be disorderly than orderly, so an orderly arrangement will naturally tend toward increasing disorder. But Carroll is not very happy with this his explanation: • If everything in the universe evolves toward increasing disorder, it must have started out in an exquisitely ordered arrangeement...a state of very low entropy. • Why were conditions in the early universe set up in a very particular way? That is the question this book sets out to address. • Unfortunately, no one yet knows the right answer. And then follows the rest of the book, without answer. The only attempt to give reason to the tendency of entropy to increase, is to argue following Boltzmann, that things naturally evolve from less probable/low entropy states to more probable/higher entropy states. But of course this is circular: To say that more probable is more probable than less probable is a tautology without actual content. In the book The Clock and the Arrow: A Brief Theory of Time I argue that there is another way of explaining the arrow of time and that is with reference to the physics of stability instead of the non-physics of probability of Boltzmann. The key point is: • A system cannot remain in an unstable state because the inevitable effect of small fluctuations will have a major effect and thus transform the system to either a more stable state of more or less rest or to another unstable state of non-rest.  • The transition from unstable to stable rest is irreversible since the reverse process from stable rest to unstable is impossible without major exterior forcing.  • The transition from unstable is sensitive to small perturbations along with the formally reversed process, and thus cannot be reversed under any form of finite precision physics.     Here is a summary of my view and that of Boltzmann/Carroll: 1. An arrow of time is given by physical stability properties of certain systems making them irreversible, without asking any specific order of an early universe. 2. An arrow of time is motivated by an empty tautology stating that systems evolve from less probable to more probable states, asking for a highly improbable highly ordered early universe.  You may decide yourself between 1. and 2. Which is more probable? 
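The stability argument in 1. can be illustrated numerically. The following sketch (Python with NumPy; the heat equation stands in here for a generic dissipative system, and all parameter values are illustration choices, not taken from the book) runs a diffusion process forward, adds a perturbation at the level of round-off, and then attempts to run the formally reversed process, which blows up exactly as a finite precision argument would predict:

import numpy as np

# Forward diffusion (stable: approach to rest) versus the formally reversed
# process (unstable): irreversibility from stability under finite precision.
n, dt, steps = 200, 0.2, 400        # dt/h^2 = 0.2 < 0.5, so the forward step is stable
u = np.zeros(n)
u[n // 3 : n // 2] = 1.0            # an "ordered" initial blob of heat

def step(v, sign):
    # one explicit step of u_t = sign * u_xx on a periodic grid with h = 1
    return v + sign * dt * (np.roll(v, 1) - 2.0 * v + np.roll(v, -1))

for _ in range(steps):              # forward in time: the blob smooths out
    u = step(u, +1)

u = u + 1e-12 * np.random.randn(n)  # "finite precision": a tiny perturbation

for _ in range(steps):              # the formally reversed process
    u = step(u, -1)

print(np.abs(u).max())              # enormous: the attempted reversal has blown up

The forward step damps fine-scale variations and brings the system toward rest, while the reversed step amplifies the smallest perturbation at every step, so the reversal cannot be carried out under any form of finite precision computation; whether this toy model captures the book's full argument is of course a separate question.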
Instant Action at Distance and Simultaneity not Needed in New Theory of Gravitation including Dark Energy

Einstein won the game. But what was the game about? Simultaneity?

Einstein's theory of relativity grew out of a question of simultaneity in time of events at different locations in space, which Einstein could not answer in an unambiguous way, and from this he jumped to the conclusion that a fundamental revision of our concepts of space and time was necessary. Einstein thus took on the responsibility, in the service of science and humanity, to make the revision and thereby open the door to a modern physics of "curved space-time" with all its wondrous new effects of time dilation and space contraction, albeit too small to be detected.
It is clear that simultaneity plays an important role in our society, to set schedules and allow people to meet at the same place, and for these purposes we all have clocks synchronized to a reference clock. And to decide which scientist first submitted an article reporting a certain new scientific breakthrough, and to navigate...
But what role does simultaneity play in physics? In what sense do distant physical objects care about simultaneity? Do they all have synchronised clocks? Of course not. What they do is react to local forces acting locally in time, and no simultaneity with the action of distant objects is involved. Or is it?
What about gravitation, isn't it supposed to act instantly over distance and thus require a form of exact simultaneity? Yes, so it seems, because in Newtonian gravitation the Earth is instantly acted upon by a gravitational force from the Sun directed towards the present position of the Sun, and not towards the position where we see the Sun because of the 8 minute time delay of the light from the Sun.
The standard view on gravitation is thus that the presence of matter instantly generates a gravitational potential/force (Newton) or "curvature of space" (Einstein) at distance. This view comes with the following questions:
1. What is the physics of the instant action at distance? Gravitons?
2. What is the physics of the simultaneity associated with instant action?
Since no progress towards any form of answer has been made over all the centuries since Newton, it is natural to shift perspective and instead view the gravitational potential $\phi$ as primordial, from which the matter density $\rho$ is obtained by a differential equation acting locally in space and time:
• $\Delta\phi =\rho$.    (*)
With this view there is no instant action at distance to explain and no associated simultaneity, since the action of the Laplacian $\Delta$ as a differential operator is local in space and time. It may thus be that the questions 1. and 2. are not the right questions, and then also that Einstein's relativity, originating from a question about simultaneity, is not the right answer to the right question. More precisely, simultaneity does not appear to be a matter of the physics of the world, since atoms are not equipped with a man-made system of synchronised clocks, and so it is not reasonable to make a complete revision of Newtonian mechanics starting from an ad hoc idea of probably little significance.
The equation (*) further suggests that with $\phi$ primordial there is no reason to insist that $\rho$ as a derived quantity must be non-negative; thus (*) opens the door to the possible existence of matter density $\rho$ of both signs, that is, to both positive and negative matter.
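A small numerical sketch of this point (Python with NumPy; the one-dimensional Gaussian fluctuation and the grid are illustration choices, not part of the original post): any smooth localized fluctuation of a primordial $\phi$ comes with $\rho =\Delta\phi$ of both signs, with the positive and negative parts nearly cancelling:

import numpy as np

# A smooth local fluctuation of a primordial potential phi carries
# rho = Laplacian(phi) of both signs (here in one space dimension).
x = np.linspace(-10.0, 10.0, 2001)
h = x[1] - x[0]
phi = np.exp(-x**2 / 2.0)                                   # a Gaussian bump in the potential

rho = np.zeros_like(phi)
rho[1:-1] = (phi[2:] - 2.0 * phi[1:-1] + phi[:-2]) / h**2   # discrete second derivative

print(rho.min(), rho.max())   # about -1.0 and +0.45: a negative core with positive tails
print(rho.sum() * h)          # about 0: the positive and negative "matter" cancel

This only shows the sign structure of $\rho$ for one particular fluctuation; the dynamics that follows from such a distribution is a separate question.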
This idea is explored in the app Dark Energy on the App Store, with in particular a simulation of a universe resulting from a fluctuation of the gravitational potential with associated positive and negative matter, with the negative matter forcing a positive matter world into accelerating expansion, which may be the missing dark energy you are looking for. Try it!

Wednesday, May 11, 2016

Bergson with History vs Einstein without History: Tragedy of Modern Physics

The clash between Bergson and Einstein in 1922 about the physics of special relativity can be described as the clash between the physics of Herakleitos as change and Parmenides as no change. Let us recall Einstein's position of no change, with motionless space-time trajectories without beginning and end, or "world lines", frozen into a block of space-time, expressed with the typical Einsteinian ambiguity.
Einstein's special theory of relativity is defined by the following linear transformation between two space-time coordinate systems $(x,y,z,t)$ and $(x^\prime ,y^\prime ,z^\prime ,t^\prime )$, denoted by $S$ and $S^\prime$, named the Lorentz transformation:
• $x^\prime =\gamma (x - vt)$,
• $y^\prime =y$,
• $z^\prime =z$,
• $t^\prime =\gamma (t - vx)$,
where $\gamma = \frac{1}{\sqrt{1-v^2}}$, assuming the speed of light is 1 and $0 < v < 1$. Here $(x,y,z)$ and $(x^\prime ,y^\prime ,z^\prime)$ are supposed to represent orthogonal space coordinates, and the origin $x^\prime = 0$ in $S^\prime$ can be seen to move with velocity $(v,0,0)$ in $S$.
Einstein's stroke of genius is to claim that the Lorentz transformation represents the coordinate transformation between two orthogonal coordinate systems "moving with velocity $(v,0,0)$ with respect to each other", both describing the same physics of light propagation at speed = 1 according to one and the same wave equation taking the same form (being invariant) in both systems.
In the physics of change of Bergson, the wave equation in $S$ is combined with an initial condition in the form of position $u(x)$ and velocity $\dot u(x)$ of a wave with extension at a given time instant, say $t=0$, which forms the history for the subsequent evolution for $t > 0$ of the wave as described in $S$. And the same for a wave described in $S^\prime$.
But initial conditions are not invariant under the Lorentz transformation, because $t=0$ translates to $x^\prime = \gamma x$ and $t^\prime =-\gamma vx$, and not to $t^\prime =0$ as in a Galilean coordinate transformation. Two waves connected by the Lorentz transformation satisfying the same wave equation will satisfy different initial conditions and therefore represent different physical phenomena. No wonder that different waves can exhibit what is referred to as time dilation and space contraction if the different waves are identified!
Bergson's physics of change describes phenomena with different histories/initial values as different phenomena even if they happen to satisfy the same wave equation in subsequent time, which is completely rational. In Einstein's physics of no change there are no initial conditions for extended waves, which allows Einstein to claim that there is no way to tell that representations connected by the Lorentz transformation do not describe the same physical phenomenon. This is used by Einstein as negative evidence that indeed the phenomena are the same, which leads to all the strange effects of special relativity in the form of time dilation and space contraction.
By covering up history Einstein thus can insist that two different waves with different histories are the same wave, and from this violation of logic strike the world with wonder. But of course Einstein's insistence to cover up initial values, is fully irrational. Einstein circumvents the question of initial value/history by only speaking about space-time events without extension in space recorded by space-time point coordinates $(x,y,z,t)$. By focussing on points in space-time without extension in space, Einstein can cover up the crucial role of initial value/history for a phenomenon with extension in space. But physical objects have extension in space and so Einstein's physics of points is not real physics. Einstein's physics is about "events" as isolated points in space-time, but real physics is not about such "events" but about the position in space and time of physical objects with extension both in space and time. What has existence for Einstein as extended objects are "world lines" as trajectories extended in time of spatial points without extension frozen into a block of space-time, not objects extended in space changing over time. This is so weird and irrational that rational arguments fall short and the tragic result is modern physics without rationality, where only what is weird has a place. In other words, a picture consisting of just one dot carries no history, just presence. A picture with many dots can carry history. It is not rational to identify two different persons arguing that they are the same person because they were born at the same place at the same time and live under the same conditions, while forgetting that they have different ancestors and histories. Or the other way around, if you identify such people, then you obtain a strange new form of parapsychology of shifting  personalities and if you believe this is science then you are fooling yourself. Einstein's special theory of relativity is about measurement of "space-time events" using "measuring rods" and "clocks", without ever telling what instruments these are and without caring about the underlying physics. It is thus a like an ad hoc tax system imposed by the government without caring about the underlying economy. It is now up to you to decide if you think that the point physics of no change/without history of Einstein, is more useful for humanity than the real physics of change/with history of Bergson, or the other way around. Maybe you will then come to the conclusion that it is a tragedy that modern physicists have been seduced by Einstein to believe in point physics without change and history, and even more tragical that no discussion of this tragedy has been allowed after 1922, by a dictate of leading physicists. You can read more about the contradictions of special relativity in Many-Minds Relativity, with the non-invariance of initial conditions under Lorentz transformation observed in section 5.9.
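That non-invariance is easy to check numerically. A minimal sketch (Python with NumPy; the velocity v = 0.6 and the sample points are illustration choices) applies the Lorentz transformation stated above to a set of events that are simultaneous in $S$ and compares with a Galilean change of coordinates:

import numpy as np

# Units with the speed of light = 1, as in the post.
v = 0.6
gamma = 1.0 / np.sqrt(1.0 - v**2)            # = 1.25

def lorentz(x, t):
    return gamma * (x - v * t), gamma * (t - v * x)

def galilean(x, t):
    return x - v * t, t

x0 = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])   # points carrying initial data u(x), u_t(x)
t0 = np.zeros_like(x0)                       # all given at the same instant t = 0 in S

xL, tL = lorentz(x0, t0)
print(tL)   # [ 1.5   0.75  0.   -0.75 -1.5 ]: the events are not simultaneous in S'

xG, tG = galilean(x0, t0)
print(tG)   # [ 0.  0.  0.  0.  0. ]: under a Galilean change they remain simultaneous

So data prescribed on the surface $t=0$ is spread out over different times $t^\prime$ in $S^\prime$, which is the observation referred to above.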
United Front of Revolutionary Leftists

quantum mechanics intro
Fri Nov 14, 2008 2:14 pm

"The 'path' comes into existence only when we observe it"

the theory of quantum mechanics states that with every possibility for an event in nature to take place, there is a quantity, called amplitude, associated with each alternative. furthermore, the amplitude associated with the overall event is obtained by adding the amplitudes of each of the alternatives. the probability that the event will happen is equal to the square of the absolute value of the overall amplitude. thus, if f1 and f2 are the amplitudes of the two possibilities for a particular event to take place, the amplitude for the total event is
f = f1 + f2
and the probability for the event to occur is given by
P = |f1 + f2|^2
in the macroscopic world the total probability for an event to take place is given by
P = P1 + P2,
the sum of the probabilities of each alternative. in quantum mechanics
P = |f1|^2 + |f2|^2 + f1 f2* + f1* f2 = P1 + P2 + f1 f2* + f1* f2,
showing that the law of computing probabilities is not that of classical physics. the two additional terms are due to the interference of alternatives. if the event is interrupted before its conclusion, for example by determining if the event takes place through alternative 1, the amplitudes of all other alternatives can no longer be added to the total amplitude. the fact that the total probability follows from the knowledge of the amplitudes of all interfering alternatives forms the basis of what is called the Heisenberg uncertainty principle. the uncertainty principle asserts that there is a natural limit to the accuracy of any measurement. for instance the momentum of a particle cannot be precisely specified without losing all information about its position, and vice versa. the uncertainty principle demonstrates that there are fundamental limitations to the use of concepts based on every-day experience. however, the uncertainty principle can be approached through 1. the Schrödinger equation and 2. a simulation method. i will explore these methods in a continuation to this little intro.
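A minimal numerical sketch of the amplitude rule above (Python with NumPy; the two amplitude values are arbitrary illustration choices, not tied to any particular experiment):

import numpy as np

# two complex amplitudes for the two alternatives of one event
# (magnitudes and phases are arbitrary illustration values)
f1 = 0.6 * np.exp(1j * 0.0)
f2 = 0.6 * np.exp(1j * 2.0)

P1, P2 = abs(f1)**2, abs(f2)**2

P_quantum   = abs(f1 + f2)**2     # quantum rule: add amplitudes, then square
P_classical = P1 + P2             # macroscopic rule: add probabilities

# the difference is exactly the interference term f1 f2* + f1* f2
interference = (f1 * np.conj(f2) + np.conj(f1) * f2).real

print(P_quantum, P_classical, P_classical + interference)
# P_quantum equals P_classical + interference; determining which alternative
# actually occurred removes the interference term and restores the classical sum

Varying the relative phase of f1 and f2 sweeps P_quantum between (|f1| - |f2|)^2 and (|f1| + |f2|)^2, which is the interference of alternatives described above.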
Haunted by His Brother, He Revolutionized Physics
To John Archibald Wheeler, the race to explain time was personal.
By Amanda Gefter

The postcard contained only two words: “Hurry up.”

John Archibald Wheeler, a 33-year-old physicist, was in Hanford, Wash., working on the nuclear reactor that was feeding plutonium to Los Alamos, when he received the postcard from his younger brother, Joe. It was late summer, 1944. Joe was fighting on the front lines of World War II in Italy. He had a good idea what his older brother was up to. He knew that five years earlier, Wheeler had sat down with Danish scientist Niels Bohr and worked out the physics of nuclear fission, showing that unstable isotopes of elements like uranium or soon-to-be-discovered plutonium would, when bombarded with neutrons, split down the seams, releasing unimaginable stores of atomic energy. Enough to flatten a city. Enough to end a war. After the postcard’s arrival, Wheeler worked as quickly as he could, and the Manhattan Project completed its construction of the atomic bomb the following summer. Over the Jornada del Muerto Desert in New Mexico, physicists detonated the first nuclear explosion in human history, turning 1,000 feet of desert sand to glass. J. Robert Oppenheimer, the project’s director, watched from the safety of a base camp 10 miles away and silently quoted Hindu scripture from the Bhagavad Gita: “Now I am become Death, the destroyer of worlds.” In Hanford, Wheeler was thinking something different: I hope I’m not too late. He didn’t know that on a hillside near Florence, lying in a foxhole, Joe was already dead. When Wheeler learned the news, he was devastated. He blamed himself. “One cannot escape the conclusion that an atomic bomb program started a year earlier and concluded a year sooner would have spared 15 million lives, my brother Joe’s among them,” he wrote in his memoir. “I could—probably—have influenced the decision makers if I had tried.” Time. As a physicist, Wheeler had always been curious to untangle the nature of that mysterious dimension. But now, in the wake of Joe’s death, it was personal. Wheeler would spend the rest of his life struggling against time. His journals, which he always kept at hand (and which today are stashed, unpublished, in the archives of the American Philosophical Society Library in Philadelphia), reveal a stunning portrait of an obsessed thinker, ever-aware of his looming mortality, caught in a race against time to answer not a question, but the question: “How come existence?” “Of all obstacles to a thoroughly penetrating account of existence, none looms up more dismayingly than ‘time,’” Wheeler wrote. “Explain time? Not without explaining existence. Explain existence? Not without explaining time.” As the years raced on, Wheeler’s journal entries about time grew more frequent and urgent, their lines shakier. In one entry, he quoted the Danish scientist and poet Piet Hein: “I’d like to know what this whole show is all about before it’s out.” Before his curtain came down, Wheeler changed our understanding of time more radically than any thinker before him or since—a change driven by the memory of his brother, a revolution fueled by regret. In 1905, six years before Wheeler was born, Einstein formulated his theory of special relativity.
He discovered that time does not flow at a steady pace everywhere for everyone; instead, it’s relative to the motion of an observer. The faster you go, the slower time goes. If you could go as fast as light, you’d see time come to a halt and disappear. But in the years following Einstein’s discovery, the formulation of quantum mechanics led physicists to the opposite conclusion about time. Quantum systems are described by mathematical waves called wavefunctions, which encode the probabilities for finding the system in any given state upon measurement. But the wavefunction isn’t static. It changes. It evolves in time. Time, in other words, is defined outside the quantum system, an external clock that ticks away second after absolute second, in direct defiance of Einstein. That’s where things stood—the two theories in a stalemate, the nature of time up in the air—when Wheeler first came onto the physics scene in the 1930s. As he settled into an academic career at Princeton University, Wheeler was soft-spoken and impossibly polite, donning neatly pressed suits and ties. But behind his conservative demeanor lay a fearlessly radical mind. Raised by a family of librarians, Wheeler was a voracious reader. As he struggled with thorny problems in general relativity and quantum mechanics, he consulted not only Einstein and Bohr but the novels of Henry James and the poetry of Spanish writer Antonio Machado. He lugged a thesaurus in his suitcase when he travelled. Wheeler’s first inkling that time wasn’t quite what it seemed came one night in the spring of 1940 at Princeton. He was thinking about positrons. Positrons are the antiparticle alter egos of electrons: same mass, same spin, opposite charge. But why should such alter egos exist at all? When the idea struck, Wheeler called his student Richard Feynman and announced, “They are all the same particle!” Imagine there’s only one lone electron in the whole universe, Wheeler said, winding its way through space and time, tracing paths so convoluted that this single particle takes on the illusion of countless particles, including positrons. A positron, Wheeler declared, is just an electron moving backwards in time. (A good-natured Feynman, in his acceptance speech for the 1965 Nobel Prize in Physics, said he stole that idea from Wheeler.) The puzzle of existence: “I am not ‘I’ unless I continue to hammer at that nut,” wrote John Archibald Wheeler.Corbis Images After working on the Manhattan Project in the 1940s, Wheeler was eager to get back to Princeton and theoretical physics. Yet his return was delayed. In 1950, still haunted by his failure to act quickly enough to save his brother, he joined physicist Edward Teller in Los Alamos to build a weapon even deadlier than the atomic bomb—the hydrogen bomb. On November 1, 1952, Wheeler was on board the S.S. Curtis, about 35 miles from the island of Elugelab in the Pacific. He watched the U.S. detonate an H-bomb with 700 times the energy of the bomb that destroyed Hiroshima. When the test was over, so was the island of Elugelab. With his work at Los Alamos complete, Wheeler “fell in love with general relativity and gravitation.” Back at Princeton, just down the street from Einstein’s home, he stood at a chalkboard and gave the first course ever taught on the subject. General relativity described how mass could warp spacetime into strange geometries that we call gravity. Wheeler wanted to know just how strange those geometries could get. 
As he pushed the theory to its limits, he became fascinated by an object that seemed to turn time on its head. It was called an Einstein-Rosen bridge, and it was a kind of tunnel that carves out a cosmic shortcut, connecting distant points in spacetime so that by entering one end and emerging from the other, one could travel faster than light or backward in time. Wheeler, who loved language, knew that one could breathe life into obscure convolutions of mathematics by giving them names; in 1957, he gave this warped bit of reality a name: wormhole. As he pushed further through spacetime, he came upon another gravitational anomaly, a place where mass is so densely packed that gravity grows infinitely strong and spacetime infinitely mangled. This, too, he gave a name: black hole. It was a place where “time” lost all meaning, as if it never existed in the first place. “Every black hole brings an end to time,” Wheeler wrote.

In the 1960s, as the Vietnam War tore the fabric of American culture, Wheeler struggled to mend a rift in physics between general relativity and quantum mechanics—a rift called time. One day in 1965, while waiting out a layover in North Carolina, Wheeler asked his colleague Bryce DeWitt to keep him company for a few hours at the airport. In the terminal, Wheeler and DeWitt wrote down an equation for a wavefunction, which Wheeler called the Einstein-Schrödinger equation, and which everyone else later called the Wheeler-DeWitt equation. (DeWitt eventually called it “that damned equation.”) Instead of a wavefunction describing some system of particles moving around in a lab, Wheeler and DeWitt’s wavefunction described the whole universe. The only problem was where to put the clock. They couldn’t put it outside the universe, because the universe, by definition, has no outside. So while their equation successfully combined the best of both relativity and quantum theory, it also described a universe that couldn’t evolve—a frozen universe, stuck in a single, eternal instant. Wheeler’s work on wormholes had already shown him that, like electrons and positrons, we too might be capable of bending and breaking time’s arrow. Meanwhile his work on the physics of black holes had led him to suspect that time, deep down, does not exist. Now, at the Raleigh International Airport, that damned equation left Wheeler with a nagging hunch that time couldn’t be a fundamental ingredient of reality. It had to be, as Einstein said, a stubbornly persistent illusion, a result of the fact that we are stuck inside a universe that only has an inside. Wheeler was convinced the central clue to the puzzle of existence—and in turn of time—was quantum measurement. He saw that the profound strangeness of quantum theory lies in the fact that when an observer makes a measurement, he doesn’t measure something that already exists in the world. Instead, his measurement somehow brings that very thing into existence—a bizarre fact that no one in his right mind would have bought, except that it had been proven again and again with a mind-melting experiment known as the double-slit. It was an experiment that Wheeler could not get out of his head. In the experiment, single photons are shot from a laser at a screen with two tiny parallel slits, then land on a photographic plate on the other side, where they leave a dot of light.
Each photon has a 50/50 chance of passing through either slit, so after many rounds of this, you’d expect to see two big blobs of light on the plate, one showing the pile of photons that passed through slit A and the other showing the pile that passed through slit B. You don’t. Instead you see a series of black and white stripes—an interference pattern. “Watching this actual experiment in progress makes vivid the quantum behavior,” Wheeler wrote. “Simple though it is in concept, it strikingly brings out the mind-bending strangeness of quantum theory.” As impossible as it sounds, the interference pattern can only mean one thing: each photon went through both slits simultaneously. As the photon heads toward the screen, it is described by a quantum wavefunction. At the screen, the wavefunction splits in two. The two versions of the same photon travel through each slit, and when they emerge on the other side, their wavefunctions recombine—only now they are partially out of phase. Where the waves align, the light is amplified, producing stripes of bright light on the plate. Where they are out of sync, the light cancels itself out, leaving stripes of darkness. Things get even stranger, however, when you try to catch the photons passing through the slits. Place a detector at each slit and run the experiment again, photon after photon. Dot by dot, a pattern begins to emerge. It’s not the stripes. There are two big blobs on the plate, one opposite each slit. Each photon took only one path at a time. As if it knows it’s being watched. Photons, of course, don’t know anything. But by choosing which property of a system to measure, we determine the state of the system. If we don’t ask which path the photon takes, it takes both. Our asking creates the path. Could the same idea be scaled up, Wheeler wondered. Could our asking about the origin of existence, about the Big Bang and 13.8 billion years of cosmic history, could that create the universe? “Quantum principle as tiny tip of giant iceberg, as umbilicus of the world,” Wheeler scrawled in his journal on June 27, 1974. “Past present and future tied more intimately than one realizes.” In his journal, Wheeler drew a picture of a capital-U for “universe,” with a giant eye perched atop the left-hand peak, staring across the letter’s abyss to the tip of the right-hand side: the origin of time. As you follow the swoop of the U from right to left, time marches forward and the universe grows. Stars form and then die, spewing their carbon ashes into the emptiness of space. In a corner of the sky, some carbon lands on a rocky planet, merges into some primordial goo, grows, evolves until … an eye! The universe has created an observer and now, in an act of quantum measurement, the observer looks back and creates the universe. Wheeler scribbled a caption beneath the drawing: “The universe as a self-excited system.” The problem with the picture, Wheeler knew, was that it conflicted with our most basic understanding of time. It was one thing for electrons to zip backward through time, or for wormholes to skirt time’s arrow. It was something else entirely to talk about creation and causation. The past flows to the present and then the present turns around and causes the past? Have to come through to a resolution of these issues, whatever the cost,” Wheeler wrote in his journal. 
“Nowhere more than here can I try to live up to my responsibilities to mankind living and dead, to [his wife] Janette and my children and grandchildren; to the child that might have been but was not; to Joe…” He glued into the journal a newspaper clipping from The Daily Telegraph. The headline read: “Days are Getting Shorter.” In 1979, Wheeler gave a lecture at the University of Maryland in which he proposed a bold new thought experiment, one that would become the most dramatic application of his ideas about time: the delayed choice. Wheeler had realized that it would be possible to arrange the usual double slit experiment in such a way that the observer can decide whether he wants to see stripes or blobs—that is, he can create a bit of reality—after the photon has already passed through the screen. At the last possible second, he can choose to remove the photographic plate, revealing two small telescopes: one pointed at the left slit, the other at the right. The telescopes can tell which slit the photon has passed through. But if the observer leaves the plate in place, the interference pattern forms. The observer’s delayed choice determines whether the photon has taken one path or two after it has presumably already done one or the other. For Wheeler, this wasn’t a mere curiosity. This was a clue to the universe’s existence. It was the mechanism he needed to get his U-drawing to work, a bending of the rules of time that might allow the universe—one that was born in a Big Bang 13.8 billion years ago—to be created right now. By us. To see the point, Wheeler said, just take the delayed choice experiment and scale it up. Imagine light traveling toward Earth from a quasar a billion light years away. A massive galaxy sits between the quasar and the Earth, diverting the light’s path with its gravitational field like a lens. The light bends around the galaxy, skirting either left or right with equal probability and, for the sake of the thought experiment, arrives on Earth a single photon at a time. Again we are faced with a similar choice: We can center a photographic plate at the light’s arrival spot, where an interference pattern will gradually emerge, or we can point our telescope to the left or right of the galaxy to see which path the light took. Our choice determines which of two mutually exclusive histories the photon lived. We determine its route (or routes) start to finish, right now—despite the fact that it began its journey a billion years ago. Listening intently in the audience was a physicist named Carroll Alley. Alley had known Wheeler in Princeton, where he had studied under the physicist Robert Henry Dicke, whose research group had come up with the idea of putting mirrors on the moon. Dicke and his team were interested in studying general relativity by looking at subtle gravitational interactions between the moon and the Earth, which would require exquisitely accurate measurements of the distance to the moon as it swept along its orbit. They realized if they could put mirrors on the lunar surface, they could bounce lasers off of them and time how long it took the light to return. Alley became the principle investigator of the NASA project and got three mirrors on the moon; the first one was set down in 1969 by Neil Armstrong. Now, as Alley listened to Wheeler speak, it dawned on him that he might be able to use the same techniques he had used for measuring laser light bouncing off the moon to realize Wheeler’s vision in the lab. 
The light signals returning from the mirrors on the moon had been so weak that Alley and his team had developed sophisticated ways to measure single photons, which was exactly what Wheeler’s delayed choice setup required. In 1984, Alley—along with Oleg Jakubowicz and William Wickes, both of whom had also been in the audience that day—finally got the experiment to run. It worked just as Wheeler had imagined: measurements made in the present can create the past. Time as we once knew it does not exist; past does not come indelibly before future. History, Wheeler discovered—the kind that brews guilt, the kind that lies dormant in foxholes—is never set in stone. Later that year, he wrote, “How come existence? How come the quantum? Is death the penalty for raising such a question?” Still, some fundamental insight eluded Wheeler. He knew that quantum measurement allowed observers in the present to create the past, the universe hoisting itself into existence by its bootstraps. But how did quantum measurement do it? And if time was not a primordial category, why was it so relentless? Wheeler’s journals became a postcard of their own, written again and again to himself. Hurry up. The puzzle of existence taunted him. “I am not ‘I’ unless I continue to hammer at that nut,” he wrote. “Stop and I become a shrunken old man. Continue and I have a gleam in my eye.” In 1988, Wheeler’s health was wavering; he had already undergone cardiac surgery two years before. Now, his doctors gave him an expiration date. They told him he could expect to live for another three to five years. Under the threat of his own mortality, Wheeler grew despondent, worried that he would not solve the mystery of existence in time to even the score for what he saw his personal failure to save his brother. Under the heading “Apology,” he wrote in his journal, “It will take years of work to develop these ideas. I—76—don’t have them.” Luckily, like scientists before them, the doctors had gotten the nature of time all wrong. The gleam in Wheeler’s eye continued to shine, and he hammered away at the mystery of quantum mechanics and the strange loops of time. “Behind the glory of the quantum—shame,” he wrote on June 11, 1999. “Why shame? Because we still don’t understand how come the quantum. Quantum as signal of self-created universe?” Later that year, he wrote, “How come existence? How come the quantum? Is death the penalty for raising such a question—” Although Wheeler’s journals reveal a driven man on a lonely quest, his influence was widespread. In his last years, Stephen Hawking, along with his collaborator Thomas Hertog of the Institute for Theoretical Physics at the KU Leuven in Belgium, developed an approach known as top-down cosmology, a direct descendant of Wheeler’s delayed choice. Just as photons from a distant quasar take multiple paths simultaneously when no one’s looking, the universe, Hawking and Hertog argued, has multiple histories. And just as observers can make measurements that determine a photon’s history stretching back billions of years, the history of the universe only becomes reality when an observer makes a measurement. By applying the laws of quantum mechanics to the universe as a whole, Hawking carried the torch that Wheeler lit that day back at the North Carolina airport, and challenges every intuition we have about time in the process. 
The top-down approach “leads to a profoundly different view of cosmology,” Hawking wrote, “and the relation between cause and effect.” It’s exactly what Wheeler had been driving at when he drew the eye atop his self-creating universe. In 2003, Wheeler was still chasing the meaning of existence. “I am as far as can be imagined from being able to speak so reasonably about ‘How come existence’!” he wrote in his journal. “Not much time left to find out!” On April 13, 2008, in Hightstown, N.J., at the age of 96, John Archibald Wheeler finally lost his race against time. That stubbornly persistent illusion.
Music and The Nature of Existence In music, notes start out as whole. A whole note is fundamental to music. From there the only possible, probable and necessary disposition is another whole note or half notes, quarter notes etc. Ohm’s Law. The structure is the same but the mass, volume and measure changes. Music, Pascal’s Triangle and Nature are fundamentally about introspection and growth by these means. You learn by jam session. Nature is learning by jam session. Existence is a jam session.  Music, math and nature are singing as they progress in mosaic repertory of emphatic explicate of itself; the quarter note in rhythm signature with the Sun and moon: Time. You are whole necessarily because there exists something called a, “whole note.” This is tacit proof of our own whole existence. A whole note is position positive with variations of itself artistically displayed – Pi. The nature of Music is the nature of existence. Your Face Your face is a convex of plasma. This plasma has always existed. Meaning, it occupies space with no duration. Your specific convection is the result of duration. We live outside of timelessness phasing into a convex of procedure (space) that’s intuitively fusing Duration into cadence with harmony (time) powered by excitement. How thick is cadence? It now has mass because it occupies space through a harmony convector (you). We are explicating volume in mass and measure in cadence with signature: Time. Happening now through the convex of space-time. Duration is the space between cadent ripples produced by frequency that Vexes (spins) with Procedure (space) to be emphasis; to see itself. Or frequency needs space to repeat and experience the pattern. Frequency repeats and plasma convexs accordingly. Or evolution via expatiation. Enter the need for Empiricism and faces. Those faces need shapes. That’s us. That’s why we have space time and evolution in progress. How to read minds With love, all levels of excitement are possible. The process of joy Grist for the Mill The blue represents Plasma not death. That’s the crucial mistake that’s been made. Yellow is Nuclear Radioaction or the entire spectrum of excitement. It’s 100% electrons. The highest Octave E that can be fathomed to the infinite power. So, it’s a little hot. Now for Blue. Plasma is positively charged. It is pure unfiltered Positivity. There is no spectrum. There is just Positivity. Plasma is positivity floating…. Aimlessly…. That’s not excited. That’s weird because we are only used to after it has seen eye-to-eye with its electricity harmonious equal. This symbol is a good way to explain the convection of the floor of Positivity with its equal in the spectrum of excitement. It’s Grist for the mill Positive Excitement 100% of the time. Unless the blue represents death or something like that then you’re bound to the low end of the spectrum of excitement, fear and sorrow, with only reflections of this symbol’s by product (nature). Which is just a fleeting harmony of equilibrium. Then it fades as the dark side of this symbol says death to you. Enter: the western disposition. The dark side is PLASMA. Cheers to you. The reason we exist There are only 2 forms that exist: electricity and plasma. That’s it. Everything is those things. All of it is sentient harmony in all directions. We are this endless possibilities of Harmony at all times forever. As that, we have already experienced the most beautiful euphoria anyone could imagine ever. We have already experienced every emotion that could be had. 
Maximum euphoria is known and harmonious in all directions. The same goes as well for the lower spectrum of excitement, fear and sorrow. That’s known, too. Likewise, we know all the procedures, notes, the steps, the measure that connects it all together immaculately. That’s known! What we DON’T KNOW is the experience of being BOTH simultaneously. We are expatiating the cadence of Excitement that’s effortlessly ringing out; that’s humming along in the key of E. That Love expatiation, our existence, is a by product of visceral Coherent Resplendent Limitless Joy to the infinite power, that’s Orchestrating coherent excited Pathways that poetically and artistically weave the entire spectrum of electricity (excitement) into a score of notes (emotional tones of excitement) made of Plasma. Yeah,…  🍻. So… enjoy yourself! You are Love expatiating in cadence with Joy. And everything you do and say is informing the environment (plasma) what note that you want backup instrumentation on. Or, how you want it behave in relation to your emphatic state, e. g. a note in the key of E/Truth. The Process of Joy We Think in Musical Color We have an Electricity guitar in our physical brain. The 5 strings of electricity in our brains are identical to each string on the guitar from (E) Delta (.5 to 4 Hz), (A) Theta (4 to 8 Hz), (D) Alpha (8 to 12 hz), (G) Beta (12 to 40 hz), (B) Gamma (40 to 100 hz). That little factoid makes the picture above amazing. This is a new way to understand everything we thought we knew. We think in Musical Colors. We literally have musical rainbows coming out of our heads continuously. This is happening right now.  Wrap your mind around that! Conductors of Emphasis in the key of Truth You are emphasis in the key of Truth. Parallel is the G chord is emphasis in the Key of E among the polyphony of open chord harmony. Everything is in the key of Einstein’s E, which his E of energy – which is Truth. Just like notes inside E can’t leave the Key of E, we are notes that cannot leave the Truth. Your body is the screen that’s the extraction method of distinguishing the G chord from the polyphony of flowing sounds; Schrödinger equation is the math of harmonious sounds that are always on and playing. The Heisenberg uncertainty principle, these notes suspended in potential courses of action. The Heisenberg uncertainty principle is Harmony before procedural execution; before you play the song. In order to emphasize the most exciting chord you can imagine, there needs to be a convection of sound. The G chord needs to be isolated with definition of being. So it can expand in the same space isolated. That’s emphasis. So we need a cogent mask to filter out all the expatiating harmony of E11#9. We need to single out one of the strains of sound so we can enjoy it by itself. So all harmony is altruistically giving of it’s potential and going with your designs to isolate a specific chord. Existence is changing its course of action because of you. Why would we want to isolate the G harmony if all harmony is endlessly resplendently expatiating all over the place? Why do you like some songs and not others? Because we have 5 electric strings in our head and are purely just harmonizing with various songs (people) and reflexively choosing the ones that have the best sound. We are guided by our Intuition of Truth. We know exactly what key we’re in and we are excited to phase in to people’s songs and jam with them. We do this with everyone through Empathy with Emphasis of likeness. 
Emphasis being “Emotional Phasing.” more specifically, we create an Emotional Pathway with them and then Emotionally Phase in with them. It’s Fusion. It could be called Emfusis. Or Emphasis. I love language. We Emotionally Phase in with the emotional stasis of a person whose chord progression that we like. Phasing in, is fusion of music between two or more points. If we don’t like the song being expanded by a person and can’t leave we will refer to the space as hellish. That’s being forced to listen to terrible out of tune music in a closed space for extended duration. Sometimes this is a “job.” In order to phase in from Schrödinger’s “Phase State” of  polyphonic totality, we need a body to convect strains of notes into a song that is exciting to our personal emphasized chord progression. We need a procedural partition that gives us the ability to access some existing harmony while ordering the totality around it. That’s what your fingers are doing on the fret board. That’s why what your body and mind is doing with the totality of life harmonics. You are always in the key of Truth. You’ll know truth when you see it because you will immediately harmonize with it. You are an Altruist because you volunteer to harmonize with everything, even if the harmony is weak. Even if the song is weak you solider on tenaciously in spite of bad music. You are a unconditional love outlet with tacit knowledge of universal harmony. The suggestion I make is to figure out when the music goes bad or changes key and the having the strength to modulate to your preferred key or just stop playing bad music with them. The goal is always play music that you like. If you get stuck playing music you don’t like with someone and can’t leave, because of your persistent altruism, then stop playing with them and find a better rock band. We are the conductor of emphasis in design harmonics. Embrace the Musical We are Harmony. We have experienced the harmony and excitement of our unlimited imagination and the structure of how it unfolds as a procedure (space), but we have not experienced both simultaneously – Space with duration (time). Enter the convex of Space-Time, ladies and gentlemen. This little number is a mere 14.5 billion years old little fusion apparatus. So precious and youthful. Space-time, where you go if you want to simultaneously experience both the procedure and the harmony, a myriad of harmonious excitations, with all your friends. Yay. Come to space-time and play your life song! Book now and get the gold package of multi harmonious connections and musical fantasia that’s completely full immersion. That’s right, folks, with the new and improved gold package of full immersion, you will be your own stream of music and you won’t even know it. That’s right, you’ll experience your own amazing tune and remain totally convinced it’s not you. We’re proud of this feature. Aaahhh… Space-time. Where all the cool kids jam and riff.  This moment, right now, took 14.5 billion years to grow into what it is. Embrace it. Procedural Mask Your body is a procedural mask(it occupies space). Your mission should you choose to accept it is; harmonize your electric soul with as many contributions from your fellow electrical outlets as you dreamed plausible. Your mask is your sponge. Drink, light sockets, drink! We are Harmony
S03E11: The Maternal Congruence

Tonight we learned that Leonard’s mother, Beverly Hofstadter (played by Christine Baranski), and Sheldon have been collaborating on Quantum Brain Dynamics theory. This theory attempts to explain the origin of consciousness. If Quantum Brain Dynamics theory is correct, our brains are not mere calculating machines, just complex enough to hear, see, taste and feel. Rather they would rely on the non-deterministic nature of quantum mechanics to generate human consciousness. If this is truly required for our brains to be conscious, the theory goes, then no conventional computer would ever emulate our human insights and experience. Will computers someday have human consciousness? Such a theory of the brain can be attractive for a couple of reasons. First, suppose we think of our brains as just a fancy computer with a slightly better operating system than Windows. (In my case, Windows-67, which fortunately still works better than Vista.) It raises a disturbing question. Will our laptops soon become sophisticated enough to become conscious? And if so, will our own human consciousness start rolling off assembly lines? Second, in the standard textbook treatment of quantum mechanics, observers play a special role. Schrödinger’s cat may be simultaneously alive and dead until an observer takes a look and “collapses” the cat’s status into either 100% alive or 100% dead. In quantum mechanics, the probabilities to find the cat alive or dead are precisely calculable, but on a case-by-case basis which kind of cat you will find is impossible to predict. But what is an observation? If an atom bumps into another particle, it does not seem to make sense to say the atom “observes” the particle; it makes more sense to just say the atom and particle are parts of a now larger system. But when do interactions become complex enough to cause the “collapse” into a definite condition: dead or alive? The Quantum Brain Dynamicists claim that the consciousness of the observer plays the key role in measurement and that consciousness itself is a quantum mechanical process. So Quantum Brain Dynamicists have even gone forward to propose that a few quantum mechanical processes might be occurring in a live human’s brain. In modern laboratories, if extreme care is taken and samples are placed at very low temperatures, you may be able to see quantum effects. Careful laboratory techniques can coax atoms into a new state of matter called a “Bose-Einstein condensate”, where many atoms lie in exactly the same quantum state and exhibit quantum behavior on a large scale. It took 70 years between the time such a state was predicted and when it was finally produced in a laboratory, and it took temperatures less than one-millionth of a degree above absolute zero to accomplish. Many tried and failed. Finally the eventual success was recognized by the Nobel Committee as such a great feat that the few who accomplished it were awarded the 2001 Nobel Prize in physics. Quantum Brain Dynamicists entertain the idea that the same kind of condensate might exist in a living human brain, at normal body temperature. Does that sound pretty unlikely? It did to me. So I poked around a bit. The amount of published material in refereed scientific journals turns out to be small. Most of what I found about it was published on webpages and by small publishers, which is a red flag. But not so fast.
Roger Penrose, a highly respected mathematical physicist, the inventor of quasi-crystals and other important ideas, is an advocate of the theory.  Penrose suggested in his book The Emperor’s New Mind that the “collapse” due to observations is not based on any algorithm and therefore distinct from what any mechanical computer could ever perform.   Because no step-by-step method describes the “collapse” fundamental mathematical difficulties conveniently disappear.  There are a few papers  on these ideas published by Springer, a serious publisher of scientific work.   Usually ideas about how the world works  separate nicely into mainstream (even if speculative) versus crackpot.  Here we find the distinction is not so clear. The writers had put Quantum Brain Dynamics into the script, which made me nervous.   Would millions of viewers balk?   Would they send millions of emails complaining that the show had confused pseudoscience with science?  Would they boycott the sponsors?  But as we’ve seen, the idea, while extreme, could not be fairly rejected out of hand.   The writers figured a way out.  Listen carefully to tonight’s dialogue.  The show’s writers don’t have Sheldon and Beverly merely working together on Quantum Brain Dynamics theory, but disproving Quantum Brain Dynamics theory.  Problem solved. I don’t watch  first-hand  the writers at work, but they sometimes talk to me during their process.   One of the things I’ve learned is that a good part of comedy writing appears to be problem solving.  For example, how do you get two people who are fighting the last time they saw each other to be talking again so you can finish the story?   Likewise, physicists too are often led through their work by a big idea, inevitably finding obstacles to telling a consistent story.  Finding clever solutions seems to be a common part of the work of theoreticians and comedy writers alike.  In an example from physics, one of the biggest problems in theoretical particle physics today is that many models predict that protons decay in less than a second—thereby the Sun, Earth and Human Beings would never exist. Something had to be done. The particle theorists finally solved the problem by inventing (i.e., “making up”) something called “R-parity” that could not change, in order to put the brakes on proton decay.  The quantity now appears in many, if not most, theoretical models in particle physics.   And much like the solutions of comedy writers, “R-parity” may well turn out to be a joke. 29 Responses to “S03E11: The Maternal Congruence” 1. Chris Says: I assumed I’d be reading about Leibnitz vs. Newton – this was much more interesting. Thanks! 2. DJCinSB Says: “Tonight we learned that Sheldon and Leonard’s mother…” I didn’t think that they were related; isn’t Beverly Leonard’s mother, and part of the storyline is about how close she is to Sheldon, who grew up in a fundamentalist household? • Rob Says: I think that parentheses are in order. “Tonight we learned that (Sheldon) and (Leonard’s Mother)…” rather than “Tonight we learned that (Sheldon and Leonard)’s Mother…” • Joshua Says: I think your parsing would be rendered as “Sheldon’s and Leonard’s mother”. That shows that there is one mother for the both of them. If we were talking about their respective mothers it would be “Sheldon’s mother and Leonard’s mother”. You might try to be shorter for the second and write “Sheldon’s and Leonard’s mothers”, but that could be ambiguous. They are brothers and sons of a polygamist? 
Anyway, I think it was correctly written and incorrectly parsed.

3. General Omar Windbottom Says:
Well, you can hide the fundamental inner workings of the brain behind the opaque and slightly mystical quantum curtain. Or you can take Douglas Hofstadter's approach and look at an actual brain and notice that it has (1) a hierarchical network organization, and (2) many linear and nonlinear feedback loops. This second model of brain functioning does not involve tooth fairies and can actually be subjected to scientific testing. It also deals with a real brain, rather than abstract models of a brain. But apparently, most people prefer the excitement of pseudoscience to the clarity of reason. Such is the human condition as we approach 2010.
Omar Windbottom

4. Uncle Al Says:
Physics demands the universe and its mirror image are fundamentally indistinguishable. The abundance of matter over antimatter is then mysterious. But… increasingly weak interactions are not parity-symmetric. Strong interactions are exceptions, not the rule. Big Bang inflation was powered by chiral pseudoscalar background dilution that chose matter and the weak interaction, and a remnant persisted. That is crazy talk! A chiral vacuum background only active in the massed sector renders gravitation measurably divergent given opposite geometric parity atomic mass distributions. Somebody should look. Space groups P3(1) and P3(2) glycine gamma-polymorph also qualify.

5. General Omar Windbottom Says:
Somebody did look. Vacuum symmetry violations = null result. Try again, sunshine.

6. ES Says:
I suggest you look at Conway & Kochen 2006 (yes, THAT Conway) for an interesting take on the consequences of quantum consciousness.

7. Uncle Al Says:
Dear Windbottom: All vacuum isotropy and Lorentz invariance experiments are electromagnetic, arXiv:0706.2031, arXiv:0801.0287, Physics Today 57(7) 40 (2004). Optical rotation must integrate to zero over the EM spectrum (f-sum rule, Thomas-Reiche-Kuhn sum rule). EM does not observe mass distribution. Physics cannot quantitate enantiomorphic mass distributions.
Stereograms and optical rotations, J. Math. Phys. 40(9) 4587 (1999); quantitative geometric parity divergence.
Physics drips parity exceptions: right hand rules, precession, Yang and Lee, teleparallelism… and the R-parity mentioned above. Exquisite-composition Eötvös experiments continuously run and fail; gravitation theories ignore composition, being geometries not stockrooms. Parity divergence is likely the basis, not the exception. Somebody should look. Biology is homochiral, all L-configuration chiral protein amino acids and all D-configuration chiral sugars. If the vacuum is massed-sector trace chiral, there is your fundamental local-to-global connection outside classical anatomic reductionism.
The Big Bang Theory is funny in part for its Profoundly Gifted theorist who cannot see beyond himself and its Severely Gifted experimentalist who cannot see within himself. Incongruence is amusing. Real world science is funny strange, not funny ha ha. The way to seize a blue rose is to seek it where it is not.

8. Erick Von Schweber Says:
I love this show; that's why I was greatly disappointed to hear the character of Sheldon taking such a conservative position with respect to quantum brain dynamics and quantum mind theories.
At this point in time (Fall 2009) even the conservatively-minded Caltech professor Christof Koch (protégé of Francis Crick) and through-and-through materialist Dan Dennett have gone on record as admitting that it is possible that brain events may exhibit quantum statistics (and behavior) and that these quantum brain events may not necessarily average out, but be biologically relevant and significant. (Koch went on record at the Toward a Science of Consciousness conference in Tucson in the Spring of 2008; Dennett at a three-night seminar series at Harvard in the Spring of 2009.) Each argument against, say, Hameroff-Penrose Orch OR (the leading quantum brain theory) has been refuted in detail. Sheldon's issue, for example, that the conditions needed to prevent decoherence of a quantum superposition are not possible in the wet (and hot) environment of a living brain, has been addressed through methods including ordered water, actin gelation, quantum topological error correcting codes, and others. Beyond this, Douglas Hofstadter's own student, philosopher of mind Dave Chalmers, is an advocate of panprotopsychism, which sits well in the company of quantum mind theories. More and more evidence is emerging that biological systems have, through evolution, developed the capability to exploit quantum behavior, e.g., in the biochemistry of photosynthesis. Also take a good look at Johnjoe McFadden's Quantum Evolution. Personally, I have what I believe to be the only defensible approach to the mind-body problem that does not end in either idealism or epiphenomenalism, called Aspect Oriented Quantum Monism. So, Chuck Lorre – if you are listening – WHEN DOING YOUR HOMEWORK FOR EACH SHOW PLEASE DO PERFORM YOUR RESEARCH FROM A MORE EXTENSIVE COMPENDIUM OF EXPERTS! Sheldon in particular is not someone I'd expect to toe the line.

9. Andy Says:
Excuse me, could you explain "the assembly lines in China"? I know nothing about it or about science, but I'm now interested in it. Could you, please?

10. Procyan Says:
In science, the conservative position is the responsible position. There, you've made a statement and it sounds like you know something, and yet 'tis naught but the hiss of gas. Conservative is yin to my yang! Stir it up.
Next, expecting a digital computer to become self-aware is akin to hoping to extract nutrition from a really good picture of a hamburger. Unless pixels are pixies, which they are not.
Drawing a link between BE condensates and consciousness is attractive to some because both seem to involve a state of transcendence from the ordinary to the sublime. That may be true, but it is not a sufficient condition to allow one to pose equality or any other relationship.
And finally, let's get past this quaint notion that quantum phenomena are somehow optional features of this or that bit of reality. Regardless of the example, be it a beautiful mind or a chocolate mousse, quanta exist and the same rules apply whether we choose to observe them or not. I recommend Susan Blackwood. Her argument is built on experimental data. Somehow she comes to the wrong, classically reductionist, conclusion, but you can learn a lot from her synthesis if you know when to jump. As, obviously, I do.
Great show, great blog. Thanks!

11. thomas Says:
Theory predicts that protons would decay in less than a second? You seem to be pretty convinced by Supersymmetry 😉 Let's see what the LHC will show, now that it's running. And I'm looking forward to seeing some LHC references in bbt.

• David Saltzberg Says:
Read carefully.
The entry says "many models," not "theory." Whether I or anyone else believes these models or not is irrelevant. Experiments will tell us which models are right, if any.

12. General Omar Windbottom Says:
I'll eat every SUSY partner they find.

13. General Omar Windbottom Says:
If you want to get a feeling for how pseudoscientific SUSY really is, read the comments of a theoretical physicist who understands SUSY at a high level, but has not been indoctrinated: T. Doriga. A real eye-opener! Well worth the time spent reading it.

14. tbbtfans Says:
Hi David Saltzberg, I want to make sure whether the Chinese newspaper called The Beijing News really got an interview with you, because I think maybe they copied from http://the-big-bang-theory.com/saltzberg.interview/ The website address is http://blog.sina.com.cn/s/blog_4b2b7de20100gd8s.html

• David Saltzberg Says:
I did an interview with Xinhua, which is in Beijing. Using Google Translate, this looks like that interview. There are plenty of fresh things in here from that interview. I also told them they could use an old UCLA Today interview (similar to this one) for background.

15. feldfrei Says:
Considering the role of consciousness in the context of the "measurement problem" in quantum physics (or, more generally, the role of the "observer"), one usually has to distinguish a quantum object from its environment. However, this distinction is to some extent arbitrary. An extreme case would be treating the entire universe as one single very large quantum system without any observer. Interestingly, such a universe lacks what we call "time" and, thus, we come to the question of how time shows up in quantum physics (where we have no "time operator"). There is an interesting paper on this topic which aims to derive the time-dependent Schrödinger equation from the time-independent one by separating a small subsystem from a large environment:
Following this ansatz, time emerges from the interaction of the subsystem with its environment. I like some analogies of this concept, like the phenomenon that we somehow "lose time" when we "separate" ourselves from our environment, e.g. working on a problem with great concentration.

16. Big Blog Theory: Learning science from a sitcom | Give the 'Net credit Says:
[…] but his posts have gotten me thinking a lot about science. I especially liked his post about quantum mechanics and the brain. We didn't go into the same level of detail, but I participated in a similar discussion in an […]

17. Ross McKenzie Says:
"Quantum dynamic brain theory" and Penrose's ideas have been discredited in serious scientific journals. See for example:

18. Translation: "S03E11: The Maternal Congruence (A Congruência Materna)" « The Big Blog Theory (em Português!) Says:
[…] made from text extracted from The Big Blog Theory, by David Saltzberg, originally published on 14 December […]

19. TV Fact-Checker: Dropping Science on The Big Bang Theory « News Hub Today Says:
[…] amusing one was when Sheldon and Leonard's mother were working on a scientific problem called quantum brain dynamics theory…. This theory is about how quantum mechanics is important for consciousness in the brain. […]

20. TV Fact-Checker: Dropping Science on The Big Bang Theory | 13 News Says:

Comments are closed.
Two books are freely available to view and read on the Quantum Mind website: Quantum Physics In Consciousness Studies and Consciousness, Biology And Fundamental Physics. The text is always free on the Quantum Mind website.

In writing something of this kind, it is difficult to know what level to pitch it at and what degree of detail to bring in. On the one hand, experts in particular fields may ridicule the superficial nature of the description and arguments here, while at the other extreme some would-be readers may find even the opening sentences baffling. I have two recommendations for dealing with these problems. Firstly, I would advocate a pick-and-mix approach to the offerings here. For instance, those not particularly inclined to wade through user-unfriendly material on physics, biology and neuroscience might prefer to go straight to the final section, rather arrogantly entitled 'a theory of consciousness'. This gives the main conclusions as to how consciousness arises and what its function is. If this looks at all interesting, it is then possible to go back and see how I have attempted to substantiate the proposals made in this section. The same general approach can be applied to the other chapters, skipping over things that are either too difficult or too well known to need revisiting. There is perhaps a word of caution relative to this approach. The section on physics emphasises the problem areas in quantum physics, which may be played down in more mainstream discussions. The sections on both quantum biology and neuroscience emphasise research work of very recent years that can be argued to have reversed some assumptions that are still common in science and in consciousness studies.

The main inspiration for this attempt at a consciousness theory is the ideas of Roger Penrose (1. & 2.). Unfortunately, I have over more than twenty years come to form the opinion that the vast majority of modern consciousness studies is profoundly misguided, and that in time Penrose may come to be seen as having been almost alone as a deep thinker on the subject in our rather benighted period. This book attempts an amendment and simplification of the Orch OR scheme, and also to some extent an updating in line with very recent developments in biology. It is tentatively suggested that a less complex approach to the function of consciousness than that provided by the Gödel theorem can be attempted, and similarly that, in the brain, quantum consciousness might be based on shorter-lived quantum coherence in individual neurons, rather than the longer-lived and spatially distributed proposal put forward by Hameroff.
The possible need to amend the original concepts is the reason for moving from merely commenting on quantum consciousness topics to outlining a version of the theory.

Definition: "Consciousness is defined here as our subjective experience of the external world, our physical bodies, our thinking and our emotions." Consciousness is also defined in terms of it 'being like something' to have experiences. It is like something to be alive, like something to have a body and like something to experience the colour red. In contrast, it is assumed to be not like something to be a table or a chair. Further to being like something, consciousness also gives us the experience of choice. In philosophy this opens up the controversial topic of freewill, but at a more mundane level there is something it is like to choose between types of beer, or between a small amount of benefit now and a more substantial benefit in the future. A special characteristic of subjective consciousness is privacy, in the sense that we have no way of knowing that our experience of the colour red is the same as someone else's, and no way of conveying the exact nature of our experience of redness. These subjective experiences are referred to as qualia. The problem of qualia or phenomenal consciousness is here viewed as the sole problem of consciousness and the whole of the problem of consciousness.

The problem we have to address here is how consciousness, subjective experience or qualia arise in the physical matter of the brain. Even this simple question raises some queries as to whether consciousness does in fact arise from the brain, although the arguments in favour of this position look strong. The classic argument is that things done to the brain, such as particular injuries or the application of anaesthetics, can remove consciousness. The main challenge to the 'brain produces consciousness' hypothesis is dualism, the idea that there is a separation between a spirit stuff and a physical stuff that together make up the universe, with consciousness being part of the spirit stuff, but inhabiting a physical brain and body. This had probably been the most popular idea since ancient times, but it was formalised by Descartes in the seventeenth century. The idea has a certain beguiling simplicity, since at a stroke it gets rid of the need to worry about how the physical brain can produce consciousness, or all the difficulties this gives rise to in terms of biology and physics. Unfortunately, the problems of dualism appear to be of the serious kind. The principal one is the question of how the physical stuff and the spirit stuff can interact. If the spirit stuff is to interact with the physical stuff, it would appear to need some physical qualities, in which case it would not be true spirit stuff. The same applies in the opposite direction, in that the physical stuff would seem to need some spirit qualities to interact with the spirit stuff, and would therefore not conform to conventional physics. We are thus left with the problem of how the physical stuff as described by science can produce consciousness. The philosopher, David Chalmers (3.), labelled this the 'hard problem'. The problem here is really a problem of specialness.
The brains of humans and possibly animals are the only places in the universe where consciousness has been observed, so the question really is 'what is special about the brain?', and the answer tends to be that there is nothing special about the brain, because it is made of exactly the same type of stuff and obeys the same physical laws as the rest of the universe. The brain comprises the same carbon, oxygen, hydrogen and other atoms that are found in the stars and planets and the objects of the everyday world around us. At first sight this might not seem too much of a problem. The brain is considered to be the most complex thing in the universe, and surely something in such a system can manage to produce consciousness. Unfortunately, this does not appear to be the case. In a conventional neuroscience textbook, which will emphasise the fluctuation of electrical potential in the neurons (brain cells) and the resulting movement of neurotransmitters between neurons, we are presented with a causally closed information system, which does not require consciousness in order to function, and offers no physical mechanism by which consciousness could be produced.

Since consciousness ceased to be a taboo subject for academic research twenty-or-so years ago, several theories that seek to explain consciousness arising within the confines of classical/macroscopic physics have been advanced. It would take many hundreds of pages to discuss these adequately, so I will here summarise the main ideas, and where they appear to fail. For those who find any of them plausible, there is a huge and expanding literature out there working to reinforce these theories.

Possibly the most plausible attempt to explain how the brain could produce consciousness within the concepts of classical and macroscopic physics is the idea that consciousness is an emergent property of the brain's physical matter. Emergent properties are an established concept in physics. The classic example is that liquidity is an emergent property of water. The individual hydrogen and oxygen atoms or their sub-atomic components do not have the property of liquidity. However, when the atoms are combined into molecules and a sufficient number are brought together, within a particular temperature range, the property of liquidity emerges. The problem with the emergent property idea, when it comes to dealing with consciousness, is that where emergent properties do arise in nature, physics can trace them back to the component particles and the forces that bind them. Thus the liquidity of water can be explained by the electromagnetic force acting on hydrogen and oxygen atoms. But in the many years in which the emergent property idea has been promoted by parts of the conventional consciousness studies community, no one has been able to propose a micro-scale emergent mechanism in the brain comparable to the explanation of how liquidity emerges in water.

In much of the late twentieth century, consciousness studies was dominated by functionalism. This theory proposes that consciousness is a function of the brain's information processing system, and that the biological matter of the brain is irrelevant to consciousness. This means that any system that processes information in the same way as the brain will be conscious, regardless of what it is made of. Therefore a silicon computer of sufficient complexity would flip into consciousness at some point, and future systems using still other materials would do likewise.
This is because the system, rather than the stuff from which it is made, is seen as being the thing that produces consciousness. The underlying weakness of functionalism is that it does not actually explain the mechanism by which consciousness arises in the brain's systems in the first place, nor how it might physically arise in silicon computers or other machines. This is a crucial problem regardless of whether the brain or system in question is made of biological tissue, silicon or anything else. It is generally agreed that the computer on the desk is not conscious, but that brains are conscious. The question we are left with is what changes between the computer on the desk and the brain, and similarly between the computer on the desk and any future super computer that might actually become conscious. There may be a vague assumption that more and more of the same initial complexity does it. But the physical world doesn't work like that. The problem of butter not cutting steel is not resolved by adding lots more butter, but by finding something with different properties from butter.

Identity theory is similar in tone to functionalism. It says that consciousness is identical to the brain, or at least to parts of it. The problem with an identity theory is that it needs to specify a particular object, or more plausibly a particular process in the brain, that is physically identical to consciousness. It is not enough to show that the axons of neurons spike, or that there is a gamma oscillation between the cortex and the thalamus, when conscious processing occurs. These things are correlated with consciousness, but that is a different thing from saying they are identical to consciousness. The distinction between identity and correlation is crucial here. Thunder and lightning are correlated, but they are not the same physical process, even though they have the same ultimate cause. In contrast, the morning star and the evening star are identical, because they are both names given to the planet Venus, a single physical object. Astronomy has conclusively demonstrated this identity, because the behaviour of a point of light in the morning and evening sky can be completely explained by the behaviour of the planet Venus. However, neuroscience has not demonstrated any particular physical process in the brain that is identical to, or can completely explain the behaviour of, consciousness, as opposed to being merely correlated with it. In addition to this, more recent neuroscience has at least qualified identity theory. Expositions of identity theory tended to be rather simplistic in applying to the whole brain, while recent neuroscience has demonstrated consciousness as correlated with both particular neuronal assemblies and single neurons, albeit on a temporary basis, with activity correlated to consciousness shifting from place to place in the brain.

Another family of proposals has it that a level, or perhaps levels, of the brain observe another level or levels, and that the interaction of the two somehow generates consciousness. We are asked to believe that because one system monitors another it will become conscious. This suggestion bears little relation to the technological world, where it is commonplace for one non-conscious automated system to monitor another and have some automatic response to changes in it, without any requirement for or evidence of consciousness.

In the present century, the concept of conscious embodiment has come to the fore.
It is suggested that a brain or a computer by itself cannot be conscious, but that the brain, and possibly the computer, can become conscious when attached to a body or some comparable extension. The recognition that brain and body are interactive was in itself an advance on twentieth century notions of the brain as an isolated computer and the body as a mere automaton. That said, there appear to be two problems with this approach as an explanation of consciousness. Firstly, it carries the rather implausible notion that the body has some consciousness-generating process that does not exist in the brain. There is a complete absence of explanation as to what this might actually be. Admittedly, most touch and pain are transmitted from the body to the brain, and visual and auditory inputs to the brain are fed forward to the viscera, but this does not explain why signals going through the body should generate something different from incoming signals through the brain.

This theory also looks difficult to square with what has now become known about the organisation of brain processing. While bodily touch and pain can certainly be seen to play a role, it is hard to see why all visual and auditory inputs, and the results of cognitive processing, should have to wait on the laborious responses of the viscera, especially as it is the reward-assessment areas of the brain that signal the viscera in the first place. If bodily generated emotion were the whole story, the emotional evaluation regions of the orbitofrontal cortex and amygdala would seem to be in a state of suspended activity between sending a signal to the autonomic system and getting signals back from the viscera. In the specific case of rapid phobic reactions in the amygdala, the idea fails completely. Recent expositions of the theory indicate an over-emphasis on the body's movement and relations to the external world, perhaps because these are more compatible with the theory, at the expense of the other senses and more especially at the expense of thinking and emotion-related evaluation.

A further objection to this theory is that bodily arousal does not provide a sufficient range of responses to match the range of human emotional responses. Emotional research, which often means animal research, has tended to focus on the easy target of fear, which produces very definite bodily responses, whereas cognitive processing or visual and auditory sensations not related to immediate danger can produce a much less marked bodily response, and a wider and subtler range of emotional responses. The more plausible view is that visceral responses are one aspect among many responses that are integrated in the orbitofrontal cortex and other evaluation processes. Further to this, evolution seems to have altered the response system to visceral inputs when it came to primates. The visceral inputs no longer go via the pons structure in the brain stem, and this is argued to suggest a less automatic response to visceral inputs in humans and other primates. It seems more likely that, in line with most brain processes, there is a complex feed-forward and feedback between all parts of the system, including the viscera and the orbitofrontal cortex. The body-only theory looks to depend on a simple feed-forward mechanism, which is alien to how brain processing functions.

Attempts to classify consciousness as a form of information can be seen as another attempt to explain consciousness in classical terms.
This idea also looks to encounter insuperable problems. There are innumerable examples of information processing and communication that do not involve consciousness, especially when we look at modern technologies. Further to this, we lack a description of a physical process that would distinguish conscious information from non-conscious information. There is a core difference between information and reality, in that information involves only what we happen to know about something, while a knowledge of reality requires a full description of its make-up and a full explanation of its behaviour. The only information available to a hunter-gatherer in ancient Africa glancing up at the sun is the intensity of glare and heat and the changing position of the light in the sky. It required the complexities of modern science to unravel everything that is involved in the sun producing light, the light getting to our eyes, and the brain states this produces.

Epiphenomenalism proposes that consciousness is a by-product of neural processing, which has no function or significance. There are three main problems here. In the first place, like some other modern consciousness theories, it is actually a non-explanation. Even if consciousness has no function, we still need to know how it is physically instantiated, and this is never attempted when this theory is proposed. The suspicion is that the proponents of this theory are unconsciously closet Cartesians, with an underlying assumption that consciousness is 'non-physical' or 'immaterial'. If it can be categorised in this surprising manner, it can be dismissed as non-functional, and relegated to the smallest possible footnote in any scientific study. This is contradictory, in that the proponents are invariably non-dualists, who believe that there is no such thing as the non-physical. The second problem is that consciousness has to be linked to the rest of the brain and the physical universe, because the very fact of conscious experience indicates that we are dealing with the reception of some form of incoming signal, and anything receiving incoming signals is likely to be able to emit them in some form of response, which will have physical consequences. Some writers have suggested an escape route here, which allows consciousness a trivial influence. This is feasible up to a point, but it hints at problems in defining what is trivial, and would erode the position of the modern orthodoxy that argues for complete determinism and no freewill at all in behaviour. A further problem for epiphenomenalism is that it conflicts with evolutionary theory. If this by-product consciousness is physical, as the scientific paradigm demands, it needs energy to produce and maintain it, and given that the brain is very energy-intensive, this could involve quite a large amount of energy. It would be maladaptive for evolution to select for something that ties up energy with no benefit to the organism. It might be argued that neural processing was such that some by-product was essential, but this would require a demonstration that neural processing produces this something else. However, in the physical description of the matter and energy involved in the brain, as described by standard neuroscience, there is no sign of such a process.

New mysterians, or sometimes just mysterians, take the view that just as dogs cannot understand calculus, humans will never be able to understand consciousness.
This may in the end turn out to be true, but to accept this view as final at this stage in the proceedings seems unduly defeatist. The human mind has proved capable of understanding the mechanisms of the physical universe so far, and it is reasonable to hope that the rather narrow scope of thinking in conventional consciousness studies may not have exhausted all possible explanatory routes. Where the mysterian approach is advocated, there is usually a 'no nonsense' implication that, having established this point, consciousness is no longer a threat to a view of the mind that is dominated by classical physics and slightly old-fashioned textbook neuroscience. On further reflection, however, the exact opposite is true. Humans have been able to understand the physical law. If they cannot understand consciousness, then consciousness lies outside the physical law or any logical extension to it. This, if anything ever does, opens the sluice gates to the dark tide of the occult, necessitating that consciousness is something akin to a spirit stuff lying outside of, and able to act outside of, the physical law.

Much of the scientific, philosophical and psychological community never internalised the revolution in physics that produced quantum theory early in the last century. There seems to be an assurance that this was an abstruse special case that need not bother day-to-day thinking. The theory was more or less censored out of general education and even basic scientific education. In mainstream consciousness studies, there is an apparent determination not to move beyond nineteenth century macroscopic physics, which proposes a billiard-ball world, where everything is explained in terms of objects bumping into one another. This is despite the fact that it has been known for a century that this is a convenient approximation for studying the human-scale world, but is not how the underlying physics works.

Neuroscience's approach to consciousness is even more mired in nineteenth century concepts. The discovery of individual neurons and their connections at the end of that century allowed the idea of the neuron as a simple switch with no further complications to become entrenched. Not long after this discovery, what is sometimes called 'the long night of behaviourism' descended on consciousness studies, decreeing that consciousness was irrelevant to behaviour and not a proper subject of study. Although behaviourism as such dropped out of favour in the latter decades of the twentieth century, subsequent theories have sought to justify the same general conclusion by marginalising consciousness. Behaviourism is dead. Long live behaviourism. In fact, one curious consequence of the functionalist and identity approaches is that much of consciousness studies has paid remarkably little attention to the brain or to advances in neuroscience in recent decades. The assumption has been that all that was needed was a particular system that could run on any material, and there was no need to inquire any further into the detailed biology of the brain. Information about binding and the gamma synchrony, or about consciousness in individual neurons and the distinction between conscious and non-conscious neural areas, is treated as a footnote, while the functional role of subjectivity in orbitofrontal valuation is never mentioned, or perhaps not even known about.

1.10:  Why 21st century consciousness studies will fail

Consciousness studies has gone off in a different direction from neuroscience.
Much of it is dominated by philosophers or psychologists who deal more in abstractions than in what is going on in the physical brain. In addition, they have tended to see themselves as under-labourers supporting a nineteenth century Newtonian world view, while at the same time discussing consciousness in very abstract terms that take limited account of advances in neuroscience research. Neuroscientists, meanwhile, seem to have been persuaded to treat consciousness as not really part of their remit, and to defer to philosophers whenever they felt it necessary to mention consciousness, even when the views of the philosophers appeared to conflict with the neuroscientists' recent findings. For this reason, it seems possible to predict that consciousness studies will come to the end of the 21st century without having achieved consensus on a theory that has any useful explanatory value.

The above discussions might seem to bring us to an impasse, where we think that consciousness can derive neither from a separate spirit stuff nor from the material that comprises the brain, the body and the universe. Luckily, there is an escape route from this. Physics does not explain everything. The arrow of explanation heads ever downwards, but it does at last strike bottom. There is a level beyond which there is no further reduction or explanation. The quanta or sub-atomic particles have properties of mass, charge and spin, and are bound by the particular strengths of the forces of nature. These are fundamentals, primitives or given properties of the universe that simply have to be accepted. If we ask what the charge on the electron is, not what it does but what it is, the answer is a resounding silence, because it is a given property of the universe, and comes without explanation. If we had a scientific culture that did not accept that quanta could be electrically charged, and that other quanta could mediate the electromagnetic force, this might develop into another hard problem like the one we have with consciousness. We would go round and round trying to pin electrical charge onto other, probably macroscopic, physical features, or we might even decide, as happens sometimes in consciousness studies, that charge did not really exist, that it was a product of something else or an illusion. No doubt experimental psychologists could devise cunning tests that showed how subjects confabulated the idea of electrical charge. If we accept that fundamental properties do exist, and that they cannot be explained by other means, and also that it is impossible to explain consciousness in terms of classical physics, then it would seem reasonable to suggest that consciousness is one of this small group of fundamentals. Thinkers such as David Bohm (4.) and Roger Penrose have made such proposals, but the response has been generally hostile, although the reasons for this may be cultural or even metaphysical rather than scientific.

Just having a concept of consciousness as a fundamental of physics is not by any means enough. Fundamental physics may be a possible gate to consciousness, but to substantiate this we need some concept of how consciousness might be integrated into what is known about fundamental physics. In the first place, it might help to have at least a very simplified idea of quantum theory and some recent ideas about spacetime. Quantum theory is the fundamental theory of energy and matter as it exists behind the appearances of the classical or macroscopic world.
Suppose one were to ask for a scientific description of your hand. Biology could describe it in terms of skin, bone, muscles, nerves, blood etc., and this might seem a completely satisfactory description. However, if you were just a bit more curious, you might ask what the muscle and blood etc. were made of. Here you would descend to a chemical explanation in terms of molecules of protein, water etc. and the reactions and relations between these. Pressing still further down, the molecules resolve into atoms, and the atoms into the fundamental particles or quanta.

The fundamental particles are bound together by the four forces of nature, which are electromagnetism, the strong and the weak nuclear forces, and gravity. The quanta can be divided into two main classes: the fermions, which possess mass, and the bosons, which convey energy or the forces of nature. In contrast to the nuclear forces, gravity and electromagnetism are conceived of as extending over infinite distance, but with their strength diminishing according to the inverse square law. That is, if you double your distance from an object, its gravitational attraction will be four times as weak. The strong nuclear force binds together the particles in the nucleus of the atom, and acts only over a very short range. Gravity is a long-range force that mediates the mutual attraction of all objects possessing mass. The electromagnetic force, also a long-range force, is perhaps the force most apparent in everyday life. We are familiar with it in the form of light, radio, microwaves and X-rays. It holds the atom together through the attraction of the opposite electrical charges of the electron and the proton. It also governs the interactions between molecules. Van der Waals forces, a weak manifestation of the electromagnetic force, are vital to the conformation of proteins and thus to the process of life itself.

2.2:  Quantum waves, superpositions and a problem of the serious kind

The quantum particles or quanta are unlike any particles or objects that are encountered in the large-scale world. When isolated from their environment, they are conceived of as having the properties of waves, but when they are brought into contact with the environment, there is a process referred to as decoherence or wave function collapse, in which the wave collapses into a particle located in a specific position. The wave form of the quanta is different from waves of matter in the large-scale world, such as the familiar waves in the sea. Those involve energy passing through matter. By contrast, the quantum wave can be viewed as a wave of the probability of finding a particle in a specific position. This probability wave also applies to other states of the quanta, such as momentum. While the quanta remain in wave form, they are described as being in a superposition of all the possible positions that the particle could occupy. At the peak of the wave, where the amplitude is greatest, there is the highest probability of finding the particle (strictly, the probability is proportional to the square of the wave's amplitude). However, the choice of position for each individual particle is completely random, representing an effect without a cause. This acausal result comprises the first serious conceptual problem in quantum theory.

The physicist, Richard Feynman, said that the two-slit experiment contained all the problems of quantum theory. In the early nineteenth century, an experiment by Thomas Young showed that when a light source shone through two slits in a screen, and then onto a further screen, a pattern of light and dark bands appeared on the further screen, indicating that the light was in some places intensified and in others reduced or eliminated.
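The light and dark bands can be reproduced numerically by adding two waves whose relative phase changes across the screen and squaring the result. The following is a minimal sketch in Python; the wavelength, slit separation and screen distance are arbitrary illustrative values, not the parameters of any actual experiment.

```python
import numpy as np

# Arbitrary illustrative values, not taken from any particular experiment.
wavelength = 500e-9       # metres
slit_separation = 50e-6   # metres
screen_distance = 1.0     # metres

# Positions across the detection screen.
x = np.linspace(-0.02, 0.02, 9)   # metres

# Path difference between the two slits for each screen position
# (small-angle approximation) and the resulting phase difference.
path_difference = slit_separation * x / screen_distance
phase = 2 * np.pi * path_difference / wavelength

# Add the two unit-amplitude waves and square to get the intensity.
intensity = np.abs(1 + np.exp(1j * phase)) ** 2   # 0 = dark band, 4 = bright band

for xi, I in zip(x, intensity):
    print(f"x = {xi:+.3f} m   relative intensity = {I:.2f}")
```

The alternation of the computed intensity between zero and its maximum is the signature of two waves reinforcing and cancelling, which is exactly the behaviour described next.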
Where two waves of ordinary matter, for instance waves in water, come into contact, an interference pattern forms, in which the waves are either doubled in size or cancelled out. The appearance of this phenomenon in Young's experiment demonstrated that light had the characteristics of a wave.

2.4:  The experiment refined

It could seem that the best way to understand what was happening here would be to place photon counters at the two slits in order to monitor what the photons were up to. However, as soon as a photon is registered by a counter, it collapses from being a wave into being a particle, and the wave-related interference pattern is lost from the further screen. The most plausible way to look at it may be to say that the wave of the photon passes through both slits, or possibly that it tries out both routes.

2.5:  There was worse to come

The wave/particle duality was shocking enough, but there was worse to come. Technology advanced to the point where photons could be emitted one at a time, and therefore impacted the screen one at a time. What is remarkable is that with two slits open, but the photons emitted one at a time, the pattern on the screen still formed itself into the light and dark bands of an interference pattern. The question arose as to how the photons emitted later in time 'knew' how to arrange themselves relative to the earlier photons in such a way that there was a pattern of light and dark bands. The ability of quanta to arrange themselves in this non-random way over time, despite initially choosing random positions, could be considered to be the second big problem of quantum theory.

Einstein disliked the inherent randomness involved in the collapse of the wave function. This was despite his having contributed to the foundation of quantum theory. He sought repeatedly to show that quantum theory was flawed, and in 1935 he seemed to have delivered a masterstroke in the form of the EPR (Einstein, Podolsky, Rosen) experiment. At the time this was only a 'thought experiment', a mental simulation of how a real experiment might proceed, but in recent decades it has become possible to perform it as a real experiment. Two quanta that have been closely connected can be in a state where they will always have a particular relationship to one another. This is known as being entangled. For instance, electrons have a property of spin, and can have a state of spin-up or spin-down. Two entangled electrons can be in a state where their spins will always be opposite. This applies however distant they become from one another. However, while the electrons (or other quanta) are in the form of the wave, both electrons are superpositions of spin-up and spin-down, so entanglement only really manifests itself when there is decoherence or wave function collapse. The EPR experiment proposed that two quanta, which have remained sufficiently isolated from their environments to be conceived of as waves or superpositions, are moved apart from one another. This could be a few metres along a laboratory bench or to the other side of the universe. The relevant consideration is that the two locations should be out of range of a signal travelling at the speed of light within the timescale of any readings that are taken. Both particles are a superposition of two possible states, but if an observation is made on one of the particles, its wave function collapses, and it acquires a defined spin, let's say spin-up in this case.
Now when an observation is made on the other particle, it will always be found to have the opposite spin. This defies the normal expectation of classical physics that a random choice of spins would produce approximately 50% the same spin and 50% different. There is therefore seen to be some non-local connection between the two particles, although it is not possible to describe or detect this in terms of a physical transfer of energy or matter. In fact, the entanglement influence is shown to be instantaneous, whereas energy and matter are thought to be constrained by the speed of light. This quantum relationship between particles is called entanglement, and can be regarded as the third big problem in quantum theory.

Recent debate suggests that the different interpretations of quantum theory are becoming more distinct and more entrenched, rather than showing any sign of moving towards any kind of consensus (5.). In particular, six types of approach are distinguished: [1.] Everett many-worlds theories; [2.] post-Copenhagen theories based only on our information about quantum states; [3.] theories in which coherence remains, with hidden superpositions within macroscopic objects; [4.] Bohmian-type pilot-wave theories; [5.] wave function collapse theories; and [6.] the suggestion that none of these are satisfactory, and that quantum theory will only be explained in terms of a deeper level of physics.

The interpretation of quantum theory has an unhappy history. In the 1920s there was for a short time a unity of purpose in trying both to understand and to apply quantum theory. Thereafter a premature notion that the interpretative debates had been settled took hold, and in the period after World War II academic institutions discouraged foundational research. The physicist, Antony Valentini, argues that quantum theory got off to this bad start because it was philosophically influenced by Kant's idea that physics reflected the structure of human thought rather than the structure of the world. The introduction of the observer into physics allowed a drift away from the idea of finding out about what exists and how what exists behaves. It was not until the 1970s and 1980s that new interpretations of quantum theory started to become academically acceptable.

The philosopher, Tim Maudlin, contrasts two intellectual attitudes in the approach to quantum theory. Einstein, Schrödinger and Bell wanted to understand what existed and how it worked, while many who came after them were more incurious, and happy with a calculational system that worked, giving the so-called 'shut up and calculate' approach. Maudlin suggests that what is traditionally referred to as the 'measurement problem' in quantum theory is really the problem of what reality is. He sees the aim of physics as being to tell us what exists and the laws governing the behaviour of what exists. Maudlin argues that quantum theory describes the movement of existing objects in spacetime, while the wave function plays a role in determining how objects behave locally. He suggests that there are many problems for theories that deny the existence of real objects or the reality of the wave function.

Lee Smolin, a physicist at the Perimeter Institute, remarks that bundles of ideas in quantum theory and related areas tend to go together. Believers in Everett many-worlds tend also to support strong artificial intelligence, allowing classical computers to become conscious, and to support the anthropic principle. Disagreement with these three ideas also seems to go together.
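Before turning to the individual interpretations, the one-photon-at-a-time behaviour described in section 2.5 can be made concrete with a small numerical sketch. This is only an illustration of the statistics under an assumed, arbitrary banded intensity, not a simulation of the underlying physics: each impact is individually random, but positions drawn in proportion to the squared amplitude still build up the interference bands.

```python
import numpy as np

rng = np.random.default_rng(0)

# An arbitrary two-slit-style intensity across the screen (illustrative only).
x = np.linspace(-1.0, 1.0, 400)
intensity = (1 + np.cos(4 * np.pi * x)) / 2        # alternating bright and dark bands
probability = intensity / intensity.sum()          # probability proportional to squared amplitude

# Each "photon" lands at a random position drawn from this distribution;
# no single impact is predictable, yet the bands emerge in the totals.
hits = rng.choice(x, size=5000, p=probability)

counts, edges = np.histogram(hits, bins=20, range=(-1.0, 1.0))
for left, count in zip(edges[:-1], counts):
    print(f"{left:+.2f}  " + "#" * (count // 20))
```

Running the sketch shows tall bars where the assumed intensity is high and almost empty bins where it is low, even though no individual hit could have been predicted in advance.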
2.8:  Everett many-worlds

The philosopher, Arthur Fine, puts the fashionable 'many worlds' theory, originally proposed by Everett, at the bottom of 'anyone's list of what is sensible'. He criticises proponents of the theory for concentrating on narrow technical issues rather than thinking about what it means for universes to split. I think the difficulty of many worlds is even greater than Fine suggests. The splitting of worlds demands that a huge number of new universes come into existence all the time, which apparently suggests that the energy of entire new universes is being created the whole time. Explanations never seem to go beyond asserting that this is, for some reason, not a problem. Christopher Fuchs, another Perimeter Institute physicist, criticises philosophers who support the Everett theory for not looking for some physical explanation. The theory did not receive much support when it was originally propounded in the 1960s. Its current popularity may look like an attempt to preserve classical assumptions, even at the cost of asserting a fantastical sci-fi idea.

Sheldon Goldstein, who spans maths, physics and philosophy, criticises information-based theories for their failure to deal with the two-slit experiment. It is asked how the different paths of the wave function in the two-slit experiment could lead to a wave interference pattern if nothing physical were involved. The wave function is considered to be objective and physical, and to be neither some form of subjective experience nor something that is simply the information that we happen to have. He sees the notion of information more in terms of a brain state connected to human needs and desires, rather than as an objective aspect of the external world. Goldstein discusses a refined version of the double-slit experiment, in which the quanta are sent into the system one by one, and an interference pattern gradually emerges. The emerging pattern is seen as an entirely objective phenomenon, not resulting from any limitation of our knowledge of the system. Tim Maudlin appears to agree with this, arguing that in the two-slit experiment the sensitivity to whether one or two slits are open indicates the response of something physical, rather than the experimenter's ignorance about the location of the particle. Maudlin points to the holistic nature of the two-slit experiment, and suggests that the same thing is apparent in non-locality. The philosopher, David Wallace, also takes the view that states in physics are facts about the physical world, and not just our knowledge of the physical world. He rejects the view that sees the quantum state as a mixture of our information and our ignorance, because in practice physicists measure particular physical processes.

The physicist, Antony Valentini, poses the question as to how the definite states of classical physics arise from the indefinite states of quantum physics. He argues that it is impossible to have a continuous transition or emergent process moving from one to the other. The problem of measurement, or of reality at the quantum level, is therefore argued to be a real problem, and requires some physical theory such as pilot waves or collapse theories to explain it. The physicist, Ghirardi, a member of the trio of physicists responsible for the GRW collapse theory, views information theory as having played a negative role in terms of evading the need to deal with foundational problems in quantum theory.
He sees it as a backward step to go from being concerned about what exists to merely considering our limited information. John Bell, whose inequalities theorem sparked off the modern interest in entanglement, asked in response to this approach what the information was about. Proponents of information theory denounce this as a metaphysical question, which seems illogical for physicists who are thus apparently withdrawing from the attempt to produce a physical description of nature. As in some reaches of consciousness studies, we seem to be seeing the modern mind retreating into a mysterian view, possibly as a last-ditch way of defending classical physics, or perhaps we should say metaphysics. Tim Maudlin similarly finds the notion of information theory puzzling. In his view, the physical reality exists before we start to get information about it, and it is not meaningful to reverse the process.

Decoherence theory:  Tim Maudlin uses reductio ad absurdum to argue against decoherence theory. Buckyballs (molecules of 60 carbon atoms) have been put into superposition, and from there it has been suggested that larger and larger superpositions are possible without limit, so that decoherence never actually occurs and superpositions can be hidden within macroscopic objects. From this, Maudlin argues that solid macroscopic objects such as bowling balls would then be capable of being put through a two-slit experiment and producing an interference pattern. Similarly, Ghirardi says that he would be willing to give up his collapse interpretation of quantum theory if macroscopic superpositions could be demonstrated. Some philosophers object to the residual approximation and lack of explanation for superposition in the decoherence approach. Ghirardi is also critical of the theoretical basis of decoherence theory, because the claimed superpositions of macroscopic objects cannot be detected by existing technology.

Collapse models:  Wave function collapse models developed by Ghirardi and others are yet another interpretation of quantum theory. These theories require a modification of the Schrödinger equation, so that the evolution of the wave function it describes can collapse to an outcome in the form of a particle with a particular position and other properties. In Ghirardi's theory of wave function collapse, the wave function can be viewed as the quantity that determines the nature of our physical world and the spatial arrangements of objects. The wave function governs the localisation in space and time of individual particles. He prefers collapse theories that assume a process of random localisation of particles alongside the standard Schrödinger quantum evolution. Such localisation occurs only rarely for individual quanta, but the process of localisation is seen as defining the difference between quantum and classical processes. As of November 2011, collapse theories look to have received a degree of support from researchers, Pusey et al., at Imperial College London, who have devised a theorem claiming to prove the physical reality of the wave function (arXiv:1111.3328v1 [quant-ph], 14 Nov 2011). The authors claim to have shown that the view that quantum states are only mathematical abstractions is inconsistent with the predictions of quantum theory. The theorem indicates that quantum states in an experiment must be physical systems, or an experiment will have results not predicted by quantum mechanics.
They also claim that the theorem can be tested by experiment.

The physicist, Lee Smolin, thinks that space and the related concept of locality should be thought of as emerging from a network of relationships between the quanta, which are regarded as fundamental. He argues that the existence of non-locality shows that spacetime is not fundamental, in that non-locality does not accord with the conventional view of spacetime. He proposes that spacetime emerges from a more connected structure. The approaches of many modern physicists tend to view spacetime as a discrete network or web, and this in turn hints at a structure which could support some form of pattern or code underlying conscious experience. A century ago, Einstein showed that spacetime was not a fixed absolute background or rigid theatre against which life could be acted out. Instead it could be conceived of as dynamic in response to changes in matter. Relativity describes the behaviour of the universe on the large scale, while quantum theory describes a scale at which gravity can be ignored. Although both theories have been exhaustively tested over the last century, they are not compatible. The smooth continuous curvature of spacetime in relativity conflicts with the discreteness of the quanta, while the dynamism of spacetime in relativity contrasts with the fixed background of quantum theory.

Loop quantum gravity:  Loop quantum gravity is one attempt to reconcile relativity and quantum theory. Space is viewed as an emergent property based on something that is discrete rather than continuous. Field lines are viewed as describing the geometry of spacetime. Areas and volumes come in discrete units. There is a suggestion that knots and links in the network code for particles, with different knottings of the network coding for different particles. There is also a suggestion that space may have the energy-transmitting properties of a superconductor, based on the quantum vacuum being full of oscillating particles. The vacuum fluctuations are here seen as transmitters of force. An alternative is that the quantised force binding together the quarks, which make up the protons of the nucleus, is itself a fundamental entity. Here again there is the idea of a non-continuous structure in spacetime. In loop quantum gravity the geometry of spacetime is expressed in loops, which may be the loops of the colour force binding the quarks, and whose interrelation defines space. The area of any surface comes in discrete multiples of a unit, the smallest unit being the Planck area, the square of the Planck length. The geometry of spacetime changes as a result of the movement of matter. The geometry here consists of the relationships of the edges, areas and volumes of the network, while the laws of physics govern how the geometry evolves. Most of the information needed to construct the geometry of spacetime comprises information about its causal structure. The fact that the universe is seen as a causal structure means that even terms such as 'things' or 'objects' are not strictly correct, because they are really processes, and as such causal structures that are creating spacetime.

Penrose spin networks:  The discrete units of loop quantum gravity relate to the spin network concept earlier developed by Roger Penrose as a version of quantum geometry. The network is a graph labelled with integers, representing the spins that particles have in quantum theory. The spin networks provide a possible quantum state for the geometry of space.
The edges of the network correspond to units of area, while the nodes where edges of the spin network meet correspond to units of volume. The spin network is suggested to follow from combining quantum theory and relativity. The network can evolve in response to changes and relates to the development of light cones. Penrose sees understanding and consciousness as being embedded in the geometry of spacetime.

Black holes and their significance:  Light cannot escape from black holes, thus creating a hidden region behind the horizon of the black hole. The entropy of the black hole is proportional not to its volume but to the area of the event horizon, and is given by the Bekenstein–Hawking relation as a quarter of the horizon area divided by ħ times the gravitational constant (with the speed of light and Boltzmann's constant set to 1), that is, a quarter of the horizon area measured in Planck units. The horizon can be conceived as a computer screen with one pixel for every four Planck squares, which gives the amount of information hidden in the black hole. In fact, all observers are seen as having hidden regions bounded by horizons. The horizon marks the boundary from beyond which they will never receive light signals. The situation of an observer on a spaceship accelerating close to the speed of light is considered in relation to this. There is a region behind the ship from which light will never catch up, constituting a hidden region for the observer. At the same time, the observer on the spaceship would see heat radiation coming towards them from in front. The uncertainty principle dictates that space is filled with virtual particles jumping in and out of existence, but the energy of an accelerating spaceship, or its equivalent in the form of extreme gravity close to the event horizon of a black hole, would convert these virtual particles into real particles.

Experimental evidence:  A recent experiment by Chris Wilson serves to substantiate this prediction that energy in the form of photons could be created out of empty space. In this experiment, an electron accelerated to a quarter of the speed of light acquired sufficient kinetic energy to turn virtual photons into real photons. The significance of this is to suggest that spacetime is not an abstraction but something real that is capable of producing particles, and also possibly capable of containing the configurations of consciousness and understanding. A further suggestion here is that quantum randomness is not truly random at the level of the universe as a whole, but is a measure of missing information about particles which lie beyond the event horizon, but with which a local particle is non-locally correlated. In contrast to Smolin, some other physicists regard spacetime as the fundamental aspect of the universe, with the quanta seen as merely disturbances of this underlying spacetime.

In this section, we discuss recent research indicating the existence of quantum coherence in organic matter and the implications of this for neurons. First, however, we take an excursion into some well-established organic chemistry which is relevant to the systems discussed later.

Pi electrons:  We start by discussing the role of electrons around atoms. The overlap of atomic orbitals forms bonds between atoms, and thus creates molecules, and also determines the shape of the molecules. The term 'n' is used to describe the energy level of each orbital. Each value of 'n' can represent a number of orbitals at different energy levels, and this is known as a shell. The first shell, n=1, contains only one orbital; the second shell, n=2, contains four orbitals (one 's' and three 'p'); and so on.
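As a concrete check on this counting, here is a minimal sketch of the standard quantum-number book-keeping (the angular-momentum labels it uses are explained in the paragraphs that follow; the code itself is illustrative and not part of the source discussion):

```python
# Count the orbitals in each shell n using the standard rules: the angular
# momentum label l runs from 0 to n-1, its orientation ml runs from -l to +l,
# and each orbital holds at most two electrons.
subshell_letter = "spdf"

for n in range(1, 4):
    orbitals = [(l, ml) for l in range(n) for ml in range(-l, l + 1)]
    subshells = [f"{n}{subshell_letter[l]}" for l in range(n)]
    print(f"shell n={n}: subshells {subshells}, "
          f"{len(orbitals)} orbitals, up to {2 * len(orbitals)} electrons")

# shell n=1: 1 orbital  (2 electrons)
# shell n=2: 4 orbitals (8 electrons)
# shell n=3: 9 orbitals (18 electrons)
```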
Angular momentum:  Another quantum number 'L' relates to the angular momentum of an electron in an orbital. The value of 'L' ranges from zero up to one less than the value of 'n'. For the first two shells the possible values of 'L' are therefore 0 and 1, conventionally labelled 's' and 'p'. So an electron can be labelled as 2p, denoting the second shell (n = 2) and an angular momentum of 1.

Electron wave function:  The electron orbital is viewed as being a wave function. The wavelength, or equivalently the frequency, is related to the energy level of the individual electron.

Spheres and lobes:  The probability of an electron being at a particular point in space can be referred to as its density plot. For an 's' orbital, the density plot is spherical, but for 'p' electrons the density plot takes the shape of two lobes with a nodal area in between, where there is no electron density. The wave function in these two lobes is out of phase. A further quantum number 'ml' relates to the spatial orientation of the orbital angular momentum. For 's' orbitals 'ml' is 0, because a sphere does not have an orientation in space. For 'p' orbitals there are three possibilities, -1, 0 and +1, written as px, py and pz, which can be related to the mutually perpendicular 'x', 'y' and 'z' axes in geometry.

Structure of an atom:  The structure of an atom involves placing electrons in the lowest energy orbital and working up from there. Hydrogen has one electron located in the lowest energy orbital, and helium has two electrons, both in the lowest orbital. Two electrons render an orbital full; an orbital can be full with two electrons, half full with one electron, or empty. With lithium, which has three electrons, the third electron has to be placed in an orbital of the second shell. With carbon there are six electrons, two in the first ('n' = 1) shell, while in the second ('n' = 2) shell there is one full orbital with two 's' electrons and two half-full orbitals, each with one 'p' electron.

Structure of molecules:  Atoms are the basis for molecules. The orbitals of atoms are wave functions, and if these waves are in phase their amplitudes are added together. When this happens, the increased amplitude works against the repulsive force acting between the positively charged nuclei of neighbouring atoms, and thus acts to bind the atoms together. This is referred to as a bonding molecular orbital. When the orbitals are out of phase, the electron density lies on the far sides of the atomic nuclei, which continue to repel one another. This is known as the anti-bonding molecular orbital. The two are collectively known as MOs. The anti-bonding MOs usually have higher energy than the bonding MOs. Energy applied to an atom can promote an electron from a low-energy bonding orbital to a higher-energy anti-bonding orbital, and this can break the bond between the atoms. When 's' orbitals combine, the MOs are symmetrical; this is referred to as sigma symmetry, and the result is described as a sigma bond. When there is a combination of 'p' orbitals, there is a possibility of three different 'p' orbitals on axes that are perpendicular to one another. One of these can overlap end-on with an orbital in another atom, and these two orbitals are described as 2pσ and 2pσ*. The two other 'p' orbitals can overlap with those in other atoms side-on, and these will not be symmetrical about the nuclear axis. These are described as pi orbitals and they form pi bonds. Further to that, 'p' electrons must have the right orientation, so px electrons can only interact with other px electrons, and so on.
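The in-phase and out-of-phase combinations just described can be written down explicitly. The following is the standard textbook (overlap-neglected) form rather than anything specific to this text: two atomic orbitals φ_A and φ_B on neighbouring atoms combine as

```latex
\psi_{\mathrm{bonding}} \;=\; \tfrac{1}{\sqrt{2}}\,(\varphi_A + \varphi_B),
\qquad
\psi_{\mathrm{antibonding}} \;=\; \tfrac{1}{\sqrt{2}}\,(\varphi_A - \varphi_B),
```

with the bonding combination lower in energy, its electron density concentrated between the two nuclei, and the anti-bonding combination higher in energy, with a node between the nuclei.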
In discussing bonding, only the electrons in the outermost shell of the atom are usually relevant. For example, in a nitrogen molecule only the electrons in the second shell are involved in bonding. Nitrogen atoms have seven electrons, but only the five in the outer shell are relevant to bonding. Two of these outer electrons are 's' electrons, leaving the bonding work to the three 'p' electrons in the outer shell. When two nitrogen atoms are bound into a nitrogen molecule they form two pi bonds and one sigma bond. Molecular bonding can also occur between different types of atoms, but there is a requirement that the energy difference is not too great.

Hybridisation:  Hybridisation is an important factor in the formation of molecular bonds. The 's' and 'p' orbitals are the most important in organic chemistry, as in the bonding of carbon, oxygen, nitrogen, sulphur and phosphorus. Hybridised orbitals are viewed as 's' and 'p' orbitals superimposed on one another.

3.2:  In its ground state, the carbon atom has two electrons in the first shell, and these are not normally involved in bonding. In its second and outer shell it has two 's' electrons filling an orbital, and two 'p' electrons, one px and one py, each in a half-filled orbital. If the carbon atom is excited, say by the positive charge attraction of the nucleus of a nearby hydrogen atom, an 's' electron in the outer shell can be excited into a 'p' orbital, so that the outer shell now has one 's' electron and three 'p' electrons, one each in an x, y and z orientation. The four outer-shell electrons are now deemed to be not distinct 's' and 'p' electrons but four 'sp' electrons, here described as sp3, because the configuration is one quarter 's' and three quarters 'p'. This arrangement allows the formation of four σ covalent bonds. Carbon atoms can also use sp2 hybridisation, where one 's' electron and two 'p' electrons in the outer shell are hybridised. There is also 'sp' hybridisation, where the 's' orbital mixes with just one of the 'p' orbitals. With the C=O double bond, the two atoms in the double bond are sp2 hybridised. The carbon atom uses all three orbitals in the sp2 arrangement to form σ bonds with other orbitals, but the oxygen atom uses only one of these. In addition a 'p' electron from each atom forms a π bond.

3.3:  Delocalisation and conjugation.  The joining together, or conjugation, of double bonds is important for organic structures. π bonds can form into a framework over a large number of atoms, and are seen to account for the stability of some compounds. The structure of benzene is relevant in this respect. Benzene is based on a ring of six carbon atoms. The carbon atoms are sp2 hybridised, leaving one 'p' electron per carbon atom free, or six electrons altogether. These six electrons are spread equally over the six carbon atoms of the ring, forming π bonds delocalised over all six atoms of the carbon ring rather than localised in particular double bonds. Delocalisation emphasises the spatial spread of the electron waves, and occurs over the whole of the conjugated system. This is sometimes referred to as resonance. Sequences of double and single bonds also occur as chains rather than rings. Conjugation refers to the sequence of single and double bonds that form either a ring or a chain. Double bonds between carbon and oxygen can be conjugated in the same way as double bonds between carbon atoms. Conjugation involves there being only one single bond between each double bond.
Two double bonds immediately adjacent to one another also do not permit conjugation. These 'rules' relate to the need to have 'p' orbitals available to delocalise over the system. In both rings and chains every carbon atom is sp2 hybridised, leaving the third 'p' orbital free to overlap with its neighbours and form an uninterrupted chain. Double bonds that are conjugated with single bonds are seen to have different properties from double bonds not arranged in this way. Here again conjugation leads to significantly different chemical behaviour. Chlorophyll, the pigment molecule in plants, is a good example of a conjugated ring of single and double bonds, and the colour of all pigments and dyes depends on conjugation. The colour involved depends on the length of the conjugated chain. Each additional conjugated bond increases the wavelength of the light absorbed. With fewer than eight bonds, light is absorbed in the ultraviolet. The colours of objects and materials around us are a function of the interaction of light with pigments. Pigments are characterised by having a large number of double bonds between atoms. The pigment lycopene, responsible for the red in tomatoes and some berries, comprises a long chain of alternating double and single bonds, allowing the molecule to form π bonds. An extensive network of π bonds across a large number of atoms is involved in the chemistry of many compounds. It is responsible for the high degree of stability in aromatic compounds such as benzene. The compound ethylene (CH2=CH2) has all its atoms in the same plane, and is therefore described as planar. In this molecule, the two carbon atoms are joined by a double bond. Hybridisation involves mixing the 2s orbital on each carbon atom with two out of the three 'p' orbitals on each carbon atom to give three sp2 orbitals. The third 'p' orbital on each atom overlaps with the 'p' orbital of the other atom to form a π bond. The 'p' orbitals of the two atoms also have to be parallel to one another in order to form a π bond. This bond prevents rotation about the double bond between the carbon atoms. However, sufficient energy, such as that of ultraviolet light, can break the π bond, and thus allow the double bond to rotate. An important feature of benzene is the ability to preserve its ring structure through a variety of chemical reactions. Benzene and other compounds that have this property are termed aromatic. In looking at these structures, the important feature is not the number of conjugated atoms, but the number of electrons involved in the π system. The six π electrons of benzene leave all its bonding molecular orbitals fully occupied in a closed shell, and account for its stability. A closed shell of electrons in bonding orbitals is a definition of aromaticity. In benzene, the lowest energy 'p' orbitals comprise electron density above and below the plane of the molecule. These electron orbitals are spread over, delocalised over, or conjugated over all six carbon atoms in the benzene ring. The delocalised 'p' orbitals can themselves be thought of as a ring. Expressed another way, this type of delocalisation is an uninterrupted sequence of double and single bonds, and it is this which is described as conjugation. The properties of this type of system are seen to be different from those of its component parts. Benzene has six π electrons, and in consequence all its bonding orbitals are full, giving the molecule a closed structure, which is often not the case for quite similar molecules with a lot of double bonds. This is what is meant by the molecule being aromatic.
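The 'closed shell of π electrons' criterion for aromaticity is usually summarised by Hückel's rule, quoted here as standard background rather than taken from the discussion above: a planar, fully conjugated ring is aromatic when its π-electron count is

```latex
N_{\pi} \;=\; 4n + 2, \qquad n = 0, 1, 2, \ldots
```

Benzene, with six π electrons, satisfies this with n = 1, which is another way of saying that all of its bonding molecular orbitals are full.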
The general rule is that there has to be a low-energy bonding orbital with the 'p' orbitals in phase. There is a closed shell giving greater stability in aromatic systems, where there are two 'p' orbitals forming a π bond and four other electrons.

Carbon and oxygen bonds:  It is not essential in these systems to have carbon-to-carbon bonds. Carbon and oxygen also often form double bonds, separated by just one single bond. Here too, the behaviour of the double-bonded system is quite different from the behaviour of the component parts. These structures are special in the sense of only arising where there are 'p' orbitals on different atoms available to overlap with one another. In many other molecules, there is a similarity in terms of a large number of double bonds, but they are insulated from one another by the lack of 'p' orbitals available to overlap with one another.

Amide groups, amino acids and protein:  The amide group is crucial to protein, and therefore to living systems as a whole, in that it forms the links between amino acid molecules that in turn make up protein, the basic building blocks of life. The amino group on one amino acid molecule combines with the carboxylic acid group on another amino acid molecule to give an amide group. When a chain of this kind forms it is a peptide or polypeptide, and longer chains are classed as proteins. Conjugation arises from the overlap of a lone pair on the nitrogen with the 'p' orbitals of the neighbouring carbonyl group, and this is vital in stabilising the link between the amino acids, and in making it relatively difficult to disrupt the amino acid chains that make up protein.

3.4:  Structure of molecules.  The structure of the individual atom is also the basis for the structure of molecules. Atomic orbitals are wave functions, and the orbital wave functions of different atoms are like waves, in that if they are in phase, their amplitudes are added together. When this happens, the increased amplitude of the wave function works against the mutual repulsion of the positively charged atomic nuclei of different atoms, and works to bond the atoms together. This is referred to as a bonding molecular orbital. When the orbitals are out of phase, they are on the far sides of the atomic nuclei, which continue to repel one another due to like positive electric charges, and this arrangement is known as the anti-bonding molecular orbital. Collectively the two types of molecular orbital are referred to as MOs. The anti-bonding MOs usually have higher energy than the bonding MOs. Energy applied to an atom can promote an electron from a low-energy bonding orbital to a higher-energy anti-bonding orbital, and this process can break the bond between two atoms. When 's' orbitals combine, the MOs are symmetrical, and this type of orbital overlap has sigma (σ) symmetry and is described as a sigma (σ) bond. When there is a combination of 'p' orbitals, there is a possibility of three different 'p' orbitals on axes that are perpendicular to one another. One of these can overlap end-on with an orbital in another atom, and these two orbitals are described as 2pσ and 2pσ*. The two other orbitals can overlap with those on other atoms side-on, and will not be symmetrical about the nuclear axis. These are described as π orbitals and form π bonds. In discussing bonding, only the electrons in the outermost shell of the atoms are usually relevant. For example, in a nitrogen molecule formed by the bonding of two nitrogen atoms, only the electrons in the second ('n' = 2) shell are involved in bonding.
The nitrogen atom has seven electrons, so there are fourteen on the two atoms that bond to form a nitrogen molecule. The two electrons in the inner shell of each atom are not involved, leaving five on each atom, and ten altogether, in the second shells. The 2s electrons on each atom effectively cancel out, and are described as lone pairs. The bonding work thus devolves on three electrons in each atom, or six in the whole molecule. These form one σ bond and two π bonds. This is described as a triple-bonded structure. Orbitals overlap better when they are in the same shell of their respective atoms, so electrons in the second shell will overlap more readily with other second-shell electrons than with third- or fourth-shell electrons. Further to that, 'p' electrons must have the right orientation, and px electrons can only interact with other px electrons and so on, because the x, y and z orbitals are perpendicular, or orthogonal, to one another. Molecular bonding also applies to molecules that are formed out of different types of atoms, as distinct from molecules formed from atoms of the same element such as the nitrogen molecule discussed above. If the atomic orbitals of different atoms are very different, they cannot combine, and the atoms cannot form covalent bonds (bonds in which an electron pair is shared between two atoms). Instead an electron can transfer from one atom to another, transforming the first atom into a positive ion and the second atom into a negative ion, with the molecule now held together by the attraction between the oppositely charged ions. This is known as ionic bonding. Covalent bonds with overlapping orbitals can only be formed when the difference in energy is not too great.

Hybridisation:  Hybridisation is an important factor in the formation of molecular bonds. The 's' and 'p' orbitals are those most important for organic chemistry, and for the bonding of atoms such as carbon, oxygen, nitrogen, sulphur and phosphorus. Hybridised orbitals are viewed as 's' and 'p' orbitals superimposed on one another.

The key argument against quantum states having a practical role in neural processing is that in the conditions of the brain quantum decoherence would happen too rapidly for the states to be relevant. This view was crystallised by the Tegmark paper (9. Tegmark, 2000), published in the prestigious journal Physical Review E. The paper itself was not remarkable. For reasons that have never been properly explained, it used a model of quantum processing that has never been proposed elsewhere, and it failed to discuss or even mention arguments for the shielding of quantum processing in the brain. Nevertheless, it succeeded in confirming in a prestigious way the views of the numerous opponents of quantum consciousness. The situation remained like that between 2000 and 2007, after which the debate over quantum states in biological systems was moved to a new stage by the discovery that quantum coherence has a functional role in the transfer of energy within photosynthetic organisms (10. Engel et al, 2007). This moved the discussion of what sort of coherent biological features could support consciousness on from a phase of pure theorising to one in which ideas can be related to features that have been shown to exist in biological matter.

3.6:  The Engel study.  The Engel et al paper studied photosynthesis in green sulphur bacteria. The photosynthetic complexes (chromophores) in the bacteria are tuned to capturing light and transmitting its energy to long-term storage areas.
It should be stressed that in this system, photons (the light quanta) only provide the initial excitation; the coherence and entanglement discussed here involve electrons in biological systems. The Engel study documented the dependence of energy transport on the spatially extended properties of the wave function of the photosynthetic complexes. In particular, the timescale of the quantum coherence observed was much longer than would normally be predicted for a biological environment, with a duration of at least 660 femtoseconds (a femtosecond is 10⁻¹⁵ seconds), nearly three times as long as the classically predicted time of 250 femtoseconds. In the latter case, rapid destruction of coherence would prevent it from influencing the system. The wavelike process noted by Engel was suggested to account for the efficiency of the system, at 98% compared to the 60-70% predicted for a classical system.

3.7:  Limited dephasing.  Another researcher in this area, Martin Plenio, argues that where temperatures are relatively high, there is likely to be some dephasing of the quanta, but contrary to the popular view that this would be the end of quantum processing, the efficiency of energy transport could actually be enhanced by this limited dephasing. Referring to a quantum experiment with beam splitters and detectors, he suggests that partial dephasing might actually allow a wider and therefore more efficient exploration of the system.

3.8:  Cheng & Fleming – the protein environment.  A paper by Cheng & Fleming published in 'Science' (11.), a study of long-lived quantum coherence in photosynthetic bacteria, demonstrates strong correlations between chromophore molecules. One experiment looked at two chromophore molecules. The system provided near-unity efficiency of energy transfer, and also demonstrated energy transfer between the chromophores. The experiment also shows that the time for dephasing of these molecules is substantially longer than would have been traditionally estimated. The traditional approach in particular ignored the coherence between donor and acceptor states. The adaptive advantage of this lies in the efficiency of the search for the electron donor. The longer time to dephasing of one as compared to the other of the experimental chromophores was taken to indicate a strong correlation of the energy fluctuations of the two molecules. This meant that the two molecules were embedded in the same protein environment. Another study by Fleming et al that also observed long-lasting coherence in a photosynthetic complex indicated that this could be explained by correlations between protein motions that modulate the transition energies of neighbouring chromophores. This suggests that the protein environment works to preserve electronic coherence in photosynthetic complexes, and thus to optimise excitation energy transfer.

Chains of polymers:  Elizabetta Collini and Gregory Scholes conducted an experiment, also reported in 'Science' (12.), that observed quantum coherence dynamics in relation to electronic energy transfer. The experiment examined polymer samples with different chain conformations at room temperature, and recorded intrachain, but not interchain, coherent electronic energy transfer. It is pointed out that natural photosynthetic proteins and artificial polymers organise light-absorbing molecules (chromophores) to channel photon energy. The excitation energy from the absorbed light can be shared quantum mechanically among the chromophores.
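As a minimal sketch of what such quantum mechanical sharing of excitation means, consider two electronically coupled chromophores holding a single excitation between them. The coupling strength, energy gap and site labels below are illustrative assumptions, not values from the papers discussed in this section:

```python
import numpy as np

# Two coupled chromophores (donor and acceptor) sharing one excitation.
# With electronic coupling J the excitation oscillates coherently between
# the sites rather than hopping incoherently from one to the other.
J = 1.0          # electronic coupling (arbitrary units)
delta = 0.3      # donor-acceptor energy gap (arbitrary units)
H = np.array([[ delta / 2, J        ],
              [ J,        -delta / 2]], dtype=complex)

psi0 = np.array([1.0, 0.0], dtype=complex)   # excitation starts on the donor

energies, modes = np.linalg.eigh(H)
for t in np.linspace(0.0, 5.0, 6):
    # exact unitary evolution |psi(t)> = U exp(-iEt) U† |psi(0)>  (hbar = 1)
    psi_t = modes @ (np.exp(-1j * energies * t) * (modes.conj().T @ psi0))
    print(f"t = {t:3.1f}   acceptor population = {abs(psi_t[1])**2:.3f}")
```

In an isolated pair the population simply oscillates back and forth; the question pursued in the papers discussed here is how much of this coherent behaviour survives once the pair is embedded in a warm, fluctuating protein environment.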
Where such sharing occurs, electronic coupling predominates over the tendency towards quantum decoherence (loss of coherence due to interaction with the environment), and the excitation is viewed as a standing wave connecting donor and acceptor paths, with the evolution of the system entangled in a single quantum state. Within chains of polymers there can be conformational subunits 2 to 12 repeat units long, which are the primary absorbing units or chromophores. Neighbouring chromophores along the backbone of a polymer have quite a strong electronic coupling, and electronic energy transfer between these is coherent at room temperature.

3.9:  Quantum entanglement considered – Sarovar et al (2009).  In a 2009 paper, Sarovar et al (13.) examined the subject of possible quantum entanglement in photosynthetic complexes. The paper starts by discussing quantum coherence between the spatially separated chromophore molecules found in these systems. Modelling of the system showed that entanglement would rapidly decrease to zero, but then resurge after about 600 femtoseconds. Entanglement could in fact survive for considerably longer than coherence, with a duration of five picoseconds at 77K, falling to two picoseconds at room temperature. The entanglement examined here is the non-local correlation between the electronic states of spatially separated chromophores. Coherence is a necessary and sufficient condition for entanglement to exist.

Ishizaki and Fleming (2009):  This paper (14.) developed an equation that allows modelling of the photosynthetic systems discussed above. Where this deals with the sites excited by the light energy, the initial entanglement rapidly decreases to zero, but then increases again after about 600 femtoseconds. This is thought to be a function of the entanglement of the initial sites being transported and localised at other sites, but remaining coherent at these other sites, from which further entanglement can subsequently resurge. Other studies appear to confirm the existence of picosecond timescales for entanglement in chromophores. It is not clear to the authors that entanglement is actually functional in chromophores. Coherence appears to be sufficient for very efficient transport of energy, and entanglement may be only a by-product of coherence. This looks to remain an area of scientific debate. Earlier studies such as Engel's were performed at low temperatures, whereas quantum coherence becomes more fragile at higher temperatures, because of the higher amplitude of environmental fluctuations. In the Ishizaki and Fleming paper, the equation supplied by the authors suggests that coherence could persist for several hundred femtoseconds even at physiological temperatures of 300 Kelvin. This study deals with the Fenna-Matthews-Olson (FMO) pigment-protein complex found in low-light-adapted green sulphur bacteria. The FMO complex is situated between the chlorosome antenna and the reaction centre, and its function is to transport energy harvested from sunlight by the antenna to the reaction centre. The FMO complex is a trimer of identical sub-units, each comprising seven bacteriochlorophyll (BChl) molecules, and this structure has been extensively studied. BChl 1 and 6 are orientated towards the chlorosome antenna, and are the initially excited pigments, while BChl 3 and 4 are orientated towards the reaction centre. Even at physiological temperatures, quantum coherence can be observed for up to 350 femtoseconds in this structure.
This suggests that long-lived electronic coherence is sustained among the BChls, even at physiological temperatures, and may play a role in the high efficiency of excitation energy transfer (EET) in photosynthetic proteins. BChl 1 and 6 are seen as capturing and conveying onward the initial electronic energy excitation. Quantum coherence is suggested to allow rapid sampling of pathways to BChl 3, which connects to the reaction centre. If the process were entirely classical, trapping of energy in subsidiary minima would be inevitable, whereas quantum delocalisation can avoid such traps, and aid the capture of excitation by pigments BChl 3 and 4. BChl 6 is strongly coupled to BChl 5 and 7, which are in turn strongly coupled to BChl 4, ensuring transfer of excitation energy. Delocalisation of energy over several of the molecules allows the lowest-energy site, BChl 3, to be found. The study predicts that quantum coherence could be sustained for 350 femtoseconds, but if the calculation is adjusted for a possibly longer phonon relaxation time, this could extend to 550 femtoseconds, still at physiological temperatures.

Cia et al (2008) – resetting entanglement:  A 2008 paper from Cia et al (15.) also looked at the possibility of quantum entanglement in the type of system studied in the Engel paper. Cia takes the view that entanglement can exist in hot biological environments. Cia says traditional thinking on biological systems is based on the assumption of thermal equilibrium, whereas biological systems are far from thermal equilibrium. He points out that the conformation of protein involves interactions at the quantum level. These are usually treated classically, but Cia wonders whether a proper understanding of protein dynamics might not require quantum mechanics. It is said not to be clear whether or not entanglement is generated during the motions of protein, but it is possible that entanglement could have important implications for the functioning of protein. The model studied in the Cia et al paper suggests that while a noisy environment, such as that found in biological matter, can destroy entanglement, it can also set up fresh entanglement. It is argued that entanglement can recur in the case of an oscillating molecule, in a way that would not be possible in the absence of this oscillation. The molecule has to oscillate at a certain rate relative to the environment to become entangled. This process allows entanglement to emerge, but it would normally also disappear quickly. Something extra is needed for entanglement to recur or persist. It is suggested here that the environment, which is normally viewed as the source of decoherence, can play a constructive role in resetting entanglement when combined with classical molecular motion. Environmental noise in combination with molecular motion provides a reset mechanism for entanglement. According to the authors' calculations, entanglement can persistently recur in an oscillating molecule, even if the environment is too hot for static entanglement. The oscillation of the molecule combined with the noise of the environment may repeatedly reset entanglement.

3.10:  The FMO complex and entanglement.  A paper by K. Birgitta Whaley, Mohan Sarovar and Akihito Ishizaki published in 2010 (16.) discusses recent studies of photosynthetic light-harvesting complexes. The studies are seen as having established the existence of quantum entanglement in biologically functional systems that are not in thermal equilibrium.
However, this does not necessarily mean that entanglement has a biological function. The authors point out that the modern discussion of entanglement has moved on from simple arrangements of particles to entanglement in larger-scale systems. Measurements of excitonic energy transport in photosynthetic light-harvesting complexes show evidence of quantum coherence in these systems. A particular focus of research has been the Fenna-Matthews-Olson (FMO) complex in green sulphur bacteria. The FMO serves to transport electronic energy from the light-harvesting antenna to the photosynthetic reaction centre. Coherence is present here at up to 300K. The authors draw attention to the relationship between electronic excitations in the chromophores and those in the surrounding protein. The electronic excitations in the chromophores are coupled to the vibrational modes of the surrounding protein scaffolding. One study (Scholak et al, 2010) shows a correlation between the extent of entanglement and the efficiency of the energy transport. The study went on to claim that efficient transport requires entanglement, although the authors of this paper query such a definite assertion. The pigment-protein dynamics generates entanglement across the entire FMO complex in only 100 femtoseconds, followed by oscillations that damp out over several hundred femtoseconds, with a longer-lived contribution continuing beyond that for up to about five picoseconds. This more persistent entanglement can be at between a third and a half of the initial value, and 15% of the maximum possible value. Long-lived entanglement takes place between four or five of the seven chromophores. The most extended entanglement is between chromophores one and three, which are also two of the most widely separated chromophores. Studies also show that this entanglement is quite resistant to temperature increase, with only a 25% reduction when the temperature rises from 77K to 300K. Overall, studies indicate long-lived entanglement of as much as five picoseconds between excitations on a number of spatially separated pigment molecules. This is described as long-lived because energy transfer through the FMO complex takes place over a few picoseconds, meaning that the up to five picoseconds of entanglement seen between the chromophores represents a functional timescale. However, the authors do not consider this by itself to be a conclusive argument for entanglement being functional in the FMO.

Light-harvesting complex II (LHCII):  This paper also looks at light-harvesting complex II (LHCII), which is also shown to have long-lived electronic coherence. LHCII is the most common light-harvesting complex in plants. The system comprises three subunits, each of which contains eight chlorophyll 'a' molecules and six chlorophyll 'b' molecules. A study by two of the authors (Ishizaki & Fleming, 2010) indicates that only one of the chlorophyll molecules would initially be excited by a photon, and this molecule would then become entangled with other chlorophyll molecules. Entanglement decreases at first, but then persists at a significant proportion of the maximum possible value. This is also an important feature of the FMO complex. In both these complexes entanglement is seen to be generated by the passage of electronic excitation through the light-harvesting complexes, and to be distributed over a number of chromophores.
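One common way of quantifying the pairwise, site-to-site entanglement described in this section is the concurrence, which for two chromophores sharing at most one excitation reduces to twice the inter-site coherence. The matrix below is an illustrative example, not data from the cited papers:

```python
import numpy as np

def concurrence_single_excitation(rho_sites: np.ndarray) -> float:
    """Concurrence between two sites sharing at most one excitation.
    rho_sites is the 2x2 density matrix of that excitation in the site basis;
    in this subspace the concurrence is twice the off-diagonal coherence."""
    return 2.0 * abs(rho_sites[0, 1])

# A delocalised excitation, half on each chromophore and fully coherent:
rho_coherent = np.array([[0.5, 0.5],
                         [0.5, 0.5]], dtype=complex)
print(concurrence_single_excitation(rho_coherent))   # 1.0 -> maximal entanglement

# The same populations after complete decoherence in the site basis:
rho_decohered = np.diag([0.5, 0.5]).astype(complex)
print(concurrence_single_excitation(rho_decohered))  # 0.0 -> no entanglement
```

This is why coherence in the site basis can be treated as a signature of entanglement, as the authors note: with the populations fixed, the entanglement rises and falls with the off-diagonal coherence.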
Entanglement persists over a longer time, and is more resistant to temperature increase, than might have been previously expected. A functional biological role is suggested by the persistence of entanglement over the same timescale as the energy transfer within the light-harvesting complexes. Light-harvesting complexes (LHCs) are densely packed molecular structures involved in the initial stages of photosynthesis. These complexes capture light, and the resulting excitation energy is transferred to reaction centres, where chemical reactions are initiated. LHCs are particularly efficient at transporting excitation energy in disordered environments. Simulations of the dynamics of particular LHCs predict that quantum entanglement will persist over observable timescales. Entanglement here would mean that there are non-local correlations between spatially separated molecules in the LHCs. The molecules in the LHCs, referred to as chromophores, are close enough together for considerable dipole coupling, leading to coherent interaction over observable timescales. The existence of coherence between molecules in these systems has been recognised for a decade or more. This condition is seen as the basis for entanglement. Coherence in this basis, known as the site basis, is necessary and sufficient for entanglement; any coherence in the site basis will lead to entanglement, and can be viewed in experiments as a signature of entanglement. The authors base part of their study on a description of the dynamics of a molecule in a protein in an LHC. This model indicates the coupling of some pairs of molecules due to proximity and favourable dipole orientation, thus effectively forming dimers. The wave function of the system is delocalised across these dimers. Using this description, the interaction of the LHC with light energy leads to a rapid increase in entanglement for a short time, followed by a decay punctuated by varying amounts of oscillation. The initial rapid increase reflects the coherent coupling of some parts of the LHC system. This entanglement decreases again as the excitation comes into contact with other parts of the protein. Some of the entanglement seen is not between immediately neighbouring molecules, but between more distant parts of the LHC. Entanglement in the LHC is estimated to continue until the excitation reaches the reaction centre. The authors view this as a remarkable conclusion, since it shows that entanglement between several particles can persist in a non-equilibrium condition, despite a decohering environment.

3.11:  Entanglement and efficiency.  A paper by Francesca Fassioli and Alexandra Olaya-Castro (17.) suggests that electronic quantum coherence amongst distant donors could allow precise modulation of the light-harvesting function. Photosynthesis is remarkable for the near 100% efficiency of its energy transfer, and the spatial arrangement of the pigment molecules and their electronic interactions is known to relate to this efficiency. Recent experimental studies of photosynthetic protein have shown that it can sustain quantum coherence for longer than previously expected, and that this can happen at the normal temperature of biological processes. This has been taken to imply that quantum coherence may affect light-harvesting processes.
Some studies point to very efficient energy transport as the optimal result of the interplay of quantum coherent and decoherent mechanisms. Roles proposed for quantum coherence vary between avoidance of energy traps that are not at the overall lowest energy level, and actual searches for the overall lowest energy level. In this paper, it is suggested that the function of quantum coherence goes beyond efficiency of energy transport, and includes the modulation of the photosynthetic antennae complexes to deal with variations in the environment.

3.12:  The role of quantum entanglement.  There is some debate as to whether quantum entanglement plays a role in the functioning of the light-harvesting complexes, or is just a by-product of quantum states. The authors here argue that entanglement may be involved in the efficiency of the system, and they use the FMO protein in green sulphur bacteria as the basis of their study. They suggest that entanglement could play a role in light harvesting by allowing precise control of the rate at which excitations are transferred to the reaction centre. Long-range quantum correlations have been suggested to be important as a mechanism helping quantum coherence to survive at the high temperatures sustained in light-harvesting antennae. This paper claims to show that in the FMO complex long-lived quantum coherence is spatially distributed in such a way that entanglement between pairs of molecules controls the efficiency profile needed to cope with variations in the environment. The ability to control energy transport under varying environmental conditions is seen as crucial for the robustness of photosynthetic systems. A mechanism involving quantum coherence and entanglement might be effective in controlling the response to different light intensities.

3.13:  Room temperature – moving the debate forward.  A paper by Elizabetta Collini et al published in 'Nature' in 2010 (18.) moved the debate forward in an important way by demonstrating the existence of room-temperature quantum coherence in organic matter. This paper describes studies of two types of marine cryptophyte algae that show long-lasting excitation oscillations, and correlations and anti-correlations, symptomatic of quantum coherence even at ambient temperature. Distant molecules within the photosynthetic protein are thought to be connected by quantum coherence, and to produce efficient light harvesting as a result. The cryptophytes can photosynthesise in low-light conditions, suggesting a particularly efficient transfer of energy within the protein. According to traditional theory, this would imply only a small separation between chromophores, whereas the actual separation is unusually large. In this study, performed at room temperature, the antenna protein received a laser pulse, resulting in a coherent superposition. The experimental data of the study show that the superposition persists for 400 femtoseconds and over a distance of 2.5 nanometres. Quantum coherence here arises in a complex mix of quantum interference between electronic resonances, while decoherence is caused by interaction with the environment. The authors think that long-lived quantum coherence facilitates efficient energy transfer across protein units. The authors remain uncertain as to how quantum coherence can persist for hundreds of femtoseconds in biological matter.
One suggestion is that the expected rate of decoherence is slowed by shared or correlated motions in the surrounding environment. Where light-harvesting chromophores are covalently bound to the protein backbone, it is suggested that this may strengthen correlated motions between the chromophores and the protein. Covalent binding to the protein backbone is speculated to make coherence longer lasting.

3.14:  Widespread in nature.  In addition to the discovery of quantum coherence in biological systems at room temperature, studies now also show that coherence is present in multicellular green plants. Calhoun et al, 2009 (19.) studied this kind of organism. These two discoveries, coherence at room temperature and coherence in green plants, have removed the initial possibility that coherence in organisms was an outlier confined to extreme conditions rather than something widespread in nature. The question arises as to whether quantum coherence and entanglement in plants have any relevance to animal life, and in particular to brains. A brief talk by Travis Craddock of the University of Alberta at a 2011 consciousness conference suggested that they could. Craddock stressed that light-absorbing chromophore molecules involved in light harvesting use dipoles to provide 99% efficiency in energy transfer from the light-harvesting antennae to the reaction centre. The studies show that instead of quantum coherence being destroyed by the environment within the organism, a limited amount of noise in the environment acts to drive the system.

3.16:  Tryptophan.  Craddock indicates that any system of dipoles could work like this. He is particularly interested in the role of the amino acid tryptophan. Similar models can be used for chromophores in photosynthetic systems and for tryptophan, an aromatic amino acid that is one of the 20 standard amino acids making up protein, including the microtubular protein tubulin. There are eight tryptophan molecules distributed over the length of the tubulin protein dimer, and tryptophan possesses strong transition dipoles. Excitons over this network are not localised, but are shared between all the tryptophan molecules, in the same way that excitons are delocalised in the photosynthetic light-harvesting structures. Photosynthesis absorbs light in the red and infra-red. These forms of light are not available to tryptophan in proteins, but tryptophan is able to use ultraviolet light emitted by the mitochondria. In fact, tryptophan is sometimes referred to as chromophoric because of its ability to absorb UV light. Craddock implies that the same system that gives rise to quantum coherence in light-harvesting complexes could also give rise to it within the protein of neurons.

Functional quantum states in the brain:  Following the recent papers discussed above, the debate on quantum coherence in living tissues has moved to a new stage. We now have definite evidence of functional quantum coherence in living matter, and also of quantum entanglement, which may also be functional. When this evidence is added to the similarities between the coherent structures in photosynthetic organisms and tryptophan, an amino acid that is common within neurons, we look to be moving into a zone where functional quantum states in the brain begin to look perfectly feasible.

3.17:  Quantum and classical interaction.  The biologist Stuart Kauffman, based at the University of Vermont and Tampere University, Finland (20.),
is sceptical about ideas of consciousness based on classical and macroscopic physics. He proposes instead that consciousness is related to the border area between quantum and classical processing, where the non-algorithmic aspect of the quantum and the non-random aspect of the classical may be mixed. This is termed the 'poised realm', and is seen as applying to systems that include biomolecules and, by extension, brain systems.

The poised realm:  In rejecting the classical basis of mainstream consciousness studies, Kauffman instead proposes the idea of the 'poised realm', essentially the border between quantum and classical rules, which he suggests may support processing that is non-algorithmic but at the same time non-random. This resembles the earlier non-algorithmic scheme proposed by Penrose. Kauffman puts forward the notion of a distinction between 'res potentia', the realm of the possible, or the quantum world, and 'res extensa', the realm of what actually exists, or the classical world. His proposal examines the meaning of the unmeasured or uncollapsed Schrödinger wave, and the question as to whether consciousness can participate at this level. Kauffman discusses the modern quantum theory approach that distinguishes between an open quantum system and its environment. The open quantum system can be seen as the superposition of many possible quantum particles oscillating in phase. The information of the in-phase quanta can be lost through interaction with the environment, in the process known as decoherence. The information about the peaks and troughs of the Schrödinger wave, and thus the familiar interference pattern, disappears, leading towards a classical system. The process of decoherence takes time, on a scale of one femtosecond. There is a problem regarding the physics of this, because while the mathematical description of the Schrödinger wave is time-reversible, decoherence has traditionally been treated as a time-irreversible dissipative process.

Recoherence:  However, it has in recent years become apparent that recoherence, the creation of a new coherent state, is possible, with systems decohering to the point of being effectively classical and then recohering. Classical information can itself produce recoherence. The Shor quantum error correction theorem shows that in a quantum computer with partially decoherent qubits, a measurement that injects information can bring the qubits back to coherence. Kauffman, in collaboration with Gabor Vattay, a physicist at Eotvos University Budapest, and Samuli Niiranen, a computer scientist at Tampere University, worked out the concept of the 'poised realm' between quantum coherence and classical behaviour. It is in this poised region that Kauffman suggests non-random but also non-deterministic processes could arise. Between the open quantum system of the Schrödinger wave and classicality, there is an area that is neither algorithmic nor deterministic, and which is also acausal, and therefore unlike a classical computer. It is suggested that systems can hover between quantum and classical behaviour, this state being what Kauffman refers to as the 'poised realm'. The non-deterministic processing in the 'poised realm' influences the otherwise deterministic processing of the classical sphere, which can in its turn alter the remaining quantum sphere. There is a two-way interaction between the quantum and classical regions.
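The error-correction point made above, that a measurement injecting classical information can restore a quantum state, can be illustrated with the simplest repetition code. This is a toy sketch of the general idea: it is not Shor's full nine-qubit code, and it is not a model of recoherence in biological matter.

```python
import numpy as np

def kron(*ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)

# Encode the superposition a|0> + b|1> as a|000> + b|111>
a, b = 0.6, 0.8
state = np.zeros(8, dtype=complex)
state[0b000], state[0b111] = a, b

# An unknown bit-flip error strikes qubit 1 (it could have been any qubit)
state = kron(I, X, I) @ state

def parity(state, q1, q2):
    # Syndrome "measurement": after a definite single flip every basis component
    # shares the same parity, so it can be read off any non-zero component.
    idx = int(np.flatnonzero(np.abs(state) > 1e-12)[0])
    bit = lambda q: (idx >> (2 - q)) & 1
    return bit(q1) ^ bit(q2)

syndrome = (parity(state, 0, 1), parity(state, 1, 2))
which_qubit = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}[syndrome]
if which_qubit is not None:                 # apply the corrective flip
    ops = [I, I, I]
    ops[which_qubit] = X
    state = kron(*ops) @ state

# The original superposition is restored exactly
print(abs(state[0b000] - a) < 1e-12 and abs(state[0b111] - b) < 1e-12)  # True
```

The syndrome tells us which qubit to flip without ever revealing, or disturbing, the encoded amplitudes a and b; it is this injection of classical information that restores the quantum state.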
Because this influence deriving from the classical region is non-random, it introduces a non-random element into any remaining decoherence in the quantum system. Further, classical parts of the system can recohere, and inject classical information into the quantum system, thus introducing a degree of control into the superpositions of the quanta. In particular, which amplitudes reach the highest values, and thus have the greatest probability of decohering, can be altered, thereby altering the nature of particular classical outcomes. This leads Kauffman on to discuss the recent discoveries in quantum biology, where quantum coherence and entanglement have been demonstrated in living photosynthetic organisms. The suggestion is that biomolecules are included in the systems that can hover between the quantum and the classical region, and further that this could apply not only to photosynthetic biomolecules, but also to biomolecules within neurons. Thus brain systems could be allowed to recohere, introducing further acausality into the system. Kauffman views consciousness as a participation in res potentia and its possibilities. The presence of consciousness in the res potentia is also suggested to explain the lack of an apparent spatial location for consciousness. Qualia are suggested to be related to quantum measurement, in which the possible becomes actual. However, Kauffman admits that all this still contains no real explanation of sensory experience. Kauffman acknowledges that he is looking for something similar to Penrose, but thinks it may be located in the poised realm rather than in Penrose's objective reduction. Where the earlier scheme of Penrose still has the advantage is in the rounding-off proposition that his objective reduction gives access to consciousness at the level of the fundamental spacetime geometry. Presumably Kauffman assumes something of the kind. There is no particular reason why either quanta or classical structures or some mixture of them should be conscious, but we know that the quanta relate to fundamental properties such as charge and spin and to spacetime, and it seems reasonable on the same basis to look for consciousness as a fundamental property at this level. Roger Penrose is one of the very few thinkers to consider how consciousness could arise from first principles rather than merely trying to shoehorn it into nineteenth-century physics, and his ideas appear to be a good starting point from which to try to understand consciousness as a fundamental.

Gödel's theorem:  Penrose's approach was a counter-attack on the functionalism of the late 20th century, which claimed that computers and robots could be conscious. He approached the question of consciousness from the direction of mathematics. The centrepiece of his argument is a discussion of Gödel's theorem. Gödel demonstrated that any formal system or any significant system of axioms, such as elementary arithmetic, cannot be both consistent and complete. There will be statements that are undecidable: although they can be seen to be true, they are not provable in terms of the axioms.

Penrose's controversial claim:  The Gödel theorem as such is not controversial in relation to modern logic and mathematics, but the argument that Penrose derived from it has proved to be highly controversial.
Penrose claimed that the fact that human mathematicians can see the truth of a statement that is not demonstrated by the axioms means that the human mind contains some function that is not based on algorithms, and therefore could not be replicated by a computer. This is because the functioning of computers is based solely on algorithms (systems of calculation). Penrose therefore claimed that Gödel had demonstrated that human brains could do something that no computer was able to do.

Arguments against Penrose's position:  Some critics of Penrose have suggested that while mathematicians could go beyond the axioms, they were in fact using a knowable algorithm present in their brains. Penrose contests this, arguing that all possible algorithms are defeated by the Gödel problem. In respect of arguments as to whether computers could be programmed to deal with Gödel propositions, Penrose accepts that a computer could be instructed as to the non-stopping property of Turing's halting problem. Here, a proposition that goes beyond the original axioms of the system is put into a computation. However, this proposition is not part of the original formal system, but instead relies on the computer being fed with human insights, so as to break out of the difficulty. So apparently non-algorithmic insights are required to supplement the functioning of the computer in this instance.

An unknowable algorithm:  Penrose further discusses the suggestion of an unknowable algorithm that enables mathematicians to perceive the truth of statements. He argues that there is no escape from the knowability of algorithms. An unknowable algorithm would be an algorithm whose specification could not be achieved. But any algorithm is in principle knowable, because it depends on the natural numbers, which are knowable. Further, it is possible to specify natural numbers that are larger than any number needed to specify the algorithmic action of an organism, such as a human or a human brain.

Mathematical robots:  Penrose says that with a mathematical robot, it would not be practical to encode all the possible insights of mathematicians. The robot would have to learn certain truths by studying the environment, which in its turn is assumed to be based on algorithms. But to be a creative mathematician, the robot will need a concept of unassailable truth, that is, a concept that some things are obviously true. This involves the mathematical robot having to perceive that a formal system 'H' implies the truth of its Gödel proposition, and at the same time perceiving that the Gödel proposition cannot be proved by the formal system 'H'. It would perceive that the truth of the proposition follows from the soundness of the formal system, but the fact that the proposition cannot be proved by the axioms also derives from the formal system. This would involve a contradiction for the robot, since it would have to believe something outside the formal system that encapsulated its beliefs.

4.2:  Solomon Feferman.  Amongst experts in this area who do not entirely reject Penrose's argument, Solomon Feferman (21.) has criticised Penrose's detailed argument, but is much closer to his position than to that of mainstream consciousness studies.
Feferman makes common cause with Penrose in opposing the computational model of the mind, and in considering that human thought, and in particular mathematical thought, is not achieved by the mechanical application of algorithms, but rather by trial-and-error, insight and inspiration, in a process that machines will never share with humans. Feferman finds numerous flaws in Penrose's work, but at the end he informs his readers that Penrose's case would not be altered by putting right the logical flaws that Feferman has spent much time discovering. Feferman says that it is ridiculous to think that mathematics is performed by the mechanical application of algorithms. Trial-and-error reasoning, insight and inspiration, based on prior experience but not on general rules, are seen as the basis of mathematical success. A more mechanical approach is only appropriate after an initial proof has been arrived at. Then this approach can be used for mechanical checking of something initially arrived at by trial-and-error and insight. He views mathematical thought as being non-mechanical. He says that he agrees with Penrose that understanding is essential to mathematical thought, and that it is just this area of mathematical thought that machines cannot share with us.

Penrose's search for a non-algorithmic feature:  Penrose went on to ask what it was in the human brain that was not based on algorithms. Physical law is described by mathematics, so it is not easy to come up with a process that is not governed by algorithms. The only plausible candidate that Penrose could find was the collapse of the quantum wave function, where the choice of the position of a particle is random, and therefore not the product of an algorithm. However, he considered that the very randomness of the wave collapse disqualifies it as a useful basis for the mathematical judgement or understanding in which he was initially interested.

The wave function:  In respect of consciousness, it is Penrose's attitude to the reality of the quantum wave function and its collapse that is the important area. In particular, he disagrees with the traditional Copenhagen interpretation, which says that the theory is just an abstract calculational procedure, and that the quanta only achieve objective reality when a measurement has been made. Thus in the Copenhagen approach reality somehow arises from the unreal or from abstraction, giving a dualist quality to the theory. The discussion of quantum theory repeatedly comes back to the theme that Penrose regards the quantum world and the uncollapsed wave function as having objective existence. In Penrose's view, the objective reality of the quantum world allows it to play a role in consciousness. Penrose emphasises that the evolution of the wave function portrayed by the Schrödinger equation is both deterministic and linear. This aspect of quantum theory is not random. Randomness only emerges when the wave function collapses, giving the choice of a particular position or other properties for a particle. Penrose discusses the various positions taken by physicists on wave function collapse. Some would like everything to depend on the Schrödinger equation, but Penrose rejects this idea, because it is impossible to see how the mechanism of this equation could produce the transformation from the superposition of alternatives, as found in the quantum wave, to the random choice of a single alternative.
He also discusses the suggestion that the probabilities of the quantum wave that emerges into macroscopic existence arise from uncertainties in the initial conditions and that the system is analogous to chaos in macroscopic physics. This does not satisfy Penrose, who points out that chaos is based on non-linear developments, whereas the Schrödinger equation is linear. 4.3:  Important distinction between Penrose and Wigner Penrose also disagrees with Eugene Wigner’s suggestion that it is consciousness that collapses the wave function, on the basis that consciousness is only manifest in special corners of spacetime. Penrose himself advances the exact opposite proposal that the collapse of a special (objective) type of wave function produces consciousness. It is important to stress this difference between the Penrose and the Wigner position, as some commentators mix up Wigner’s idea with Penrose’s propositions on quantum consciousness, and then advance a refutation of Wigner, wrongly believing it to be a refutation of Penrose. Penrose is also dismissive of the ‘many worlds’ version of quantum theory, which would have an endless splitting into different universes with, for instance, Schrödinger’s cat alive in one universe and dead in another universe. Penrose objects to the lack of economy and the multitude of problems that might arise from attempting such a solution, and in addition argues that the theory does not explain why the splitting has to take place, and why it is not possible to be conscious of superpositions. Penrose instead argues for some new physics, and in particular an additional form of wave function collapse. If the superpositions described by the quantum wave extended into the macroscopic world, we would in fact see superpositions of large-scale objects. As this does not happen, it is argued that something that is part of objective reality must take place to produce the reality that we actually see. This requirement for new physics is often criticised as unjustified. However, these criticisms tend to ignore the fact that while quantum theory provides many accurate predictions, there has never been satisfactory agreement about its interpretation, nor has its conflict with relativity been resolved. 4.5:  Consciousness, spacetime, the second law & gravity Penrose sees consciousness as not only related to the quantum level but also to spacetime. He discusses the spacetime curvature described in general relativity. He looks at the effect of singularities relative to two spacetime curvature tensors, Weyl and Ricci. Weyl represents the tidal effect of gravity, by which the part of a body nearest to the gravitational source falls fastest creating a tidal distortion in the body. Ricci represents the inward pull on a sphere surrounding the gravitational force. In a black hole singularity, the tidal distortion of Weyl would predominate over Ricci, and Weyl goes to infinity at the singularity. However, in the early universe expanding from the Big Bang, the inward tidal distortion is absent, so Weyl=0, while it is the inward pressure of Ricci that predominates. So the early universe is seen to have had low entropy with Weyl close to zero. Weyl is related to gravitational distortions, and Weyl close to zero indicates a lack of gravitational clumping, just as Weyl at infinity indicated the gravitational collapse into a black hole. Weyl close to zero and low gravitational clumping therefore indicate low entropy at the beginning of the universe. 
The fact that Weyl is constrained to zero is seen by Penrose as a function of quantum gravity. The whole theory is referred to as the Weyl curvature hypothesis. The question Penrose then asks is why initial spacetime singularities have this structure. He thinks that quantum theory has to help with the problem of the infinities at singularities. This would be a quantum theory of the structure of spacetime, or in other words a theory of quantum gravity. Penrose regards the problems of quantum theory in respect of the disjuncture between the Schrödinger equation's deterministic evolution and the randomness in wave function collapse as fundamental. He thinks in terms of a time-asymmetrical quantum gravity, because the universe is time-asymmetric from low to high entropy. He argues that the conventional process of collapse of the wave function is time-asymmetric. He describes an experiment where light is emitted from a source and strikes a half-silvered mirror, with a resulting 50% probability that the light reaches a detector and 50% that it hits a darkened wall. This experiment cannot be time reversed, because if the original emitter now detects an incoming photon, there is not a 50% probability that it was emitted by the wall, but instead a 100% probability that it was emitted by the other detecting/emitting device. Penrose relates the loss of information that occurs in black holes to the quantum mechanical effects of the black hole radiation described by Stephen Hawking. This relates the Weyl curvature that is seen to apply in black holes to the quantum wave collapse. As Weyl curvature is related to the second law of thermodynamics, this is taken to show that quantum wave reduction is related to the second law and to gravity. He proposes that in certain circumstances there could be an alternative form of wave function collapse. He called this objective reduction (OR). He suggests that as a result of the evolution of the Schrödinger wave, the superpositions of the quanta grow further apart. According to Penrose's interpretation of general relativity, each superposition of the quanta is conceived to have its own spacetime geometry. The separation of the superpositions, each with its own spacetime geometry, constitutes a form of blister in spacetime. However, once the blister or separation grows to more than the Planck length of 10^-35 metres, the separations begin to be affected by the gravitational force, the superposition becomes unstable, and it soon collapses under the pressure of its gravitational self-energy. As it does so, it chooses one of the possible spacetime geometries for the particle. This form of wave function collapse is proposed to exist in addition to the more conventional forms of collapse. Evidence for non-computational spacetime:  In support of this, he points out that when the physicists Geroch and Hartle studied quantum gravity, they ran up against a problem in deciding whether two spacetimes were the same. The problem was solvable in two dimensions, but intractable in the four dimensions that accord with the four-dimensional spacetime in which the superposition of quantum particles needs to be modelled. It has been shown that there is no algorithm for solving this problem in four dimensions. Earlier, the mathematician A. Markov had shown that there was no algorithm for such a problem, and that if such an algorithm did exist, it could solve the Turing halting problem, for which it had already been shown that there was no algorithm.
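For readers who want to see why no such algorithm can exist, the following minimal Python sketch restates Turing's diagonal argument; it is offered purely as an illustration of the result referred to above, and the function names are hypothetical (the 'halts' decider is assumed for the sake of contradiction rather than implemented):

def halts(prog, arg):
    # Hypothetical decider, assumed to return True if prog(arg) eventually halts.
    # The argument below shows that no correct implementation can exist.
    raise NotImplementedError

def paradox(prog):
    # Do the opposite of whatever 'halts' predicts about prog run on itself.
    if halts(prog, prog):
        while True:
            pass      # predicted to halt, so loop forever
    else:
        return        # predicted to run forever, so halt immediately

Asking whether paradox(paradox) halts defeats any candidate 'halts' either way, so no single algorithm can decide halting for all programs; this is the result that Markov's construction carries over to the four-dimensional spacetime problem.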
The possibly non-computable nature of the structure of four-dimensional spacetime is deemed to open up the possibility that wave function collapses could give access to this non-computable feature of fundamental spacetime. Testing Penrose's objective reduction:  A long-term experiment is underway to test Penrose's hypothesis of objective reduction. This experiment is being run by Dirk Bouwmeester at the University of California, Santa Barbara and involves mirrors only ten micrometres across and weighing only a few trillionths of a kilogram, and the measurement of their deflection by a photon. The experiment is expected to take ten years to complete. This means that theories of consciousness based on objective reduction are likely to remain speculative for at least that length of time. However, the ability to run an experiment that could look to falsify objective reduction at least qualifies it as a scientific theory. 4.6:  Significance for consciousness The significance of this for the study of consciousness is that, in contrast to the conventional idea of wave function collapse, this form of collapse is suggested to be non-random, and instead driven by a non-computable function at the most fundamental level of spacetime. Penrose argues that, in contrast to the conventional wave function form of collapse, there are indications that in this case there is a decision process that is neither random nor computationally/algorithmically based, but is more akin to the 'understanding' by which Penrose claims the human brain goes beyond what can be achieved by a computer. When Penrose first proposed his ideas on consciousness, he had no significant suggestion as to how this could be physically instantiated in the brain. Subsequent to this, Stuart Hameroff proposed a scheme by which Penrose's concept of objective reduction might be instantiated in neurons, giving rise to the theory of orchestrated objective reduction (Orch OR). 4.8:  Single-cell organisms, neurons Hameroff emphasises that single-cell organisms have no nervous system, but can perform complicated tasks, which could only be achieved by means of some form of internal processing. He surmised that the same form of processing could exist in brain cells. Thus Hameroff viewed each neuron as a computer. Within the neuron, a number of areas such as the ion channels and parts of the synapses were considered as possible sites for information processing and ultimately consciousness. However, another candidate, the cytoskeleton, came to be viewed as the component of the neuron best suited to information processing. The cytoskeleton comprises a protein scaffolding that provides structural support for all living cells including neurons. 4.9:  Microtubules Microtubules are the major element of the cytoskeleton. As well as providing structural support for the cell, they are important for internal transport, including the transport of neurotransmitter vesicles to synapses in neurons. Hameroff suggested that microtubules were suitable for information processing, and in addition to this that they could support quantum coherence and the objective reduction looked for in Penrose's theory. The microtubules are composed of the protein tubulin, which takes the form of a dimer made up of an alpha and a beta monomer. The microtubules are formed of 13 filamentous tubulin chains, skewed so that the filaments run down the cylinder of the microtubule in a helical form, and the lattice is hexagonal in that each tubulin dimer has six neighbours.
Each turn of this helix is formed by thirteen dimers, and creates a slightly skewed hexagonal lattice, considered to be suitable for information processing. The intersections of the windings of the protofilaments are also the attachment sites for microtubule-associated proteins (MAPs) that help to bind the cytoskeleton together. The nature and activity of microtubules in neurons is markedly different from that in other body cells. Neuron microtubules are denser and more stable than those in other cells. In neurons, microtubules are also more important for linking parts of the cell, such as taking synaptic vesicles from the Golgi apparatus in the cell body down to the axon terminal, and carrying protein and RNA to the dendritic spines. 4.10:  Suitability for information processing It is the geometry of this lattice based on tubulin sub-units that is considered to have a potential for information processing. Within the cylindrical lattice of the microtubule, each tubulin is in a hexagonal relationship, by virtue of being in contact with six other neighbouring tubulins. Each dimer would be influenced by the polarisation of six of its neighbours, giving rise to effective rules for the conformation of the tubulins, which in turn makes them suitable for the transmission of signals. Tubulin switches between two conformations. It is suggested that tubulin conformational states could interact with neighbouring tubulins by means of dipole interactions. The dipole-coupled conformation of each tubulin could be determined by the six surrounding tubulins. The geometry of such a quantum computing lattice could be suitable for quantum error correction, which, along with the pumping of energy, might delay decoherence. This latter view is consistent with the recent studies of photosynthetic systems. Hameroff describes protein conformation as a delicate balance between countervailing forces. Proteins are chains of amino acids that fold into three-dimensional conformations. Folding is driven by van der Waals forces between hydrophobic amino-acid groups. These groups can form hydrophobic pockets in some proteins. These pockets are critical to the folding and regulation of the protein. Amino acid side groups in these pockets interact by van der Waals forces. 4.11:  Dendrites and consciousness Hameroff related consciousness not to the axons of neurons, which allow forward communication with other neurons, but to the dendrites that receive inputs from other neurons. The cytoskeleton of the dendrites is distinct both from that found in cells outside the brain and also from the cytoskeleton found in the axons of neurons. The microtubules in dendrites are shorter than those in axons and have mixed as opposed to uniform polarity. This appears to be a sub-optimal arrangement from a structural point of view, and it is suggested that in conjunction with microtubule-associated proteins (MAPs), this arrangement may be optimal for information processing. These microtubule/MAP arrangements are connected to synaptic receptors on the dendrite membrane by a variety of calcium and sodium influxes, actin and other inputs. Alterations in the microtubule/MAP network in the dendrites correlate with the rearrangement of dendrite synaptic receptors. Hameroff points out that changes in dendrites can lead to increased synaptic activity.
The changes in dendrites involve the number and arrangement of receptors and the arrangement of dendritic spines and dendrite-to-dendrite connections. The main function of dendrites is seen to be the handling of signal input into the neuron, which may eventually result in an axon spike. 4.12:  Dendritic spines, the dendritic cytoskeleton & information transmission Neurons receive inputs through dendrites and dispatch signals through axons. Dendritic spines are the points at which signals from other neurons enter the dendrites. There is evidence for interactivity between dendritic spines and the dendritic cytoskeleton. The connection between the membrane and the cytoskeleton has tended to be ignored. Actin filaments are concentrated in dendritic spines and near to axon terminals. These bind to scaffolding proteins and interact with signalling molecules. There are also interactions between ion channels and the cytoskeleton, especially actin filaments. Experimental work suggests that the cytoskeleton, and actin filaments in particular, can regulate ion channels that are part of basic neural processing. Recent studies indicate cross-linker proteins between actin filaments and microtubules, in addition to MAP2 and tau, which are known to bind to actin filaments. The dendritic spines can be modulated by actin, indicating that cytoskeletal proteins can influence synaptic plasticity. The spines receive glutamate inputs by means of NMDA and AMPA receptors. Actin holds signal transduction molecules close to the NMDA receptors, and this links these receptors to signal cascades within the neuron. Actin is also important for anchoring ion channels, and for congregating them in clusters. Actin filaments are known to control the excitability of some ion channels, such as the K+ channel, and they also bind to the Na+ and Ca2+ ion channels. Scaffolding proteins such as the post-synaptic density protein PSD95, and gephyrin, a GABA receptor scaffolding protein, secure the membrane receptors in the dendritic spine, and attach them to protein kinases and also to actin filaments that constitute part of the cytoskeleton. Gephyrin concentrates GABA receptors at post-synaptic sites, while actin filaments support the movements of gephyrin complexes. Actin filaments are concentrated immediately below the neuronal membrane, but also penetrate into the rest of the cytoskeleton and are heavily concentrated in dendritic spines. The actin filaments are shown to be involved in the reorganisation of dendritic spines following stimulation. They also hold in place receptors, ion channels and transduction molecules. When Hameroff moves on to discussing π electron clouds, he comes closer to the type of functional quantum coherence identified in photosynthetic systems. There has been much discussion over the last two decades as to how microtubules or any other structure in the neurons could sustain quantum states for long enough for them to be relevant to neural processing. In the light of more recent studies of quantum coherence in photosynthetic systems, it looks most likely that any quantum coherence in microtubules would relate to π electron clouds. Mainstream research moved away from the idea of quantum processes in living organisms during the second half of the 20th century, although a few physicists such as Fröhlich kept the idea alive. Fröhlich proposed that biochemical energy could pump quantum coherent dipole states in geometrical arrays of non-polar π electron delocalised clouds.
Such electron clouds are now known to be isolated from water and ions, and present in cells within membranes, microtubules and organelles. These electron clouds can use London forces, involving interaction between instantaneously forming dipoles in different electron clouds, to govern the conformation of biomolecules, including proteins. 4.14:  Aromatic rings Life is based on carbon chemistry and notably carbon ring molecules, such as benzene, which has delocalised electron clouds in which London forces are active. Carbon has four electrons in its outer shell, able to form four covalent bonds with other atoms. In some cases two of the electrons form a double bond with another atom, and the remaining two outer electrons remain mobile and are known as π electrons. In benzene, there are three double bonds between six carbon atoms, such that all six carbon atoms are involved in a bond. The ring structure into which these atoms are formed famously came to its discoverer, Friedrich August Kekulé, in a dream of a snake biting its tail. There are varying configurations of the bonds and the π electrons, and the molecule is delocalised between these configurations. Benzene rings and the more complex indole rings are referred to as aromatic rings, and make up several of the amino acid side groups that are attached to proteins. Protein folding and π electron clouds:  Proteins constitute the driving machinery of living systems, since it is they which open and close ion channels, grasp molecules as enzymes and receptors, make alterations within cells, and govern the bending and sliding of muscle filaments. The organisation of protein is still poorly understood. Proteins are formed from 20 different amino acids with an enormous number of possible sequences. Van der Waals forces are involved in the proteins folding into different conformations, with a huge number of possible patterns of attraction and repulsion between the side groups of the protein. During the protein folding process there are non-local interactions between aromatic rings, which has been seen as suggestive of quantum mechanical sampling of possible foldings. Once formed, a protein structure can be stabilised by outwardly facing polar groups and by regulation from non-polar regions within. The coalescence of non-polar amino acid side groups such as two aromatic rings can result in extended electron clouds constituting hydrophobic pockets. Protein conformation represents a delicate balance between forces such as chemical and ionic bonds, and as a result London forces driven by π electrons in hydrophobic pockets can tip the balance and thus govern the conformations of the protein. 4.15:  Hydrophobic pockets and entanglement The more solid parts of cells include protein structures, and these have within them hydrophobic areas containing hydrophobic or oil-like molecules with delocalised π electron clouds. In water, non-polar oily molecules such as benzene, which are hydrophobic, are pushed together, attracting each other by London forces, and eventually aggregate into stable regions shielded from interaction with water. London forces can govern the configurations of protein in these regions. Such regions occur as pockets in proteins. In the repetitive structures of the tubulin dimer, π electron clouds may be separated by less than two nanometres, and this is seen as conducive to entanglement, electron tunnelling or exciton hopping between dimers, and to connections between the electron clouds extending down the length of the neuron.
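A rough quantitative note may help here (this is standard molecular physics, not a figure taken from Hameroff): the London dispersion interaction between two polarisable, non-polar groups falls off with their separation r roughly as

E(r) ≈ −C6 / r^6,

where C6 depends on the polarisabilities of the two groups. Because of the sixth-power dependence, halving the distance between two π electron clouds strengthens the interaction by a factor of around 64, which is why the sub-two-nanometre spacings quoted above are treated as significant for any coupling between tubulin dimers.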
Tubulin has a dimer form with an alpha and beta monomer joined by a ‘hinge’. The tubulin has a large non-polar region in the beta monomer just below the ‘hinge’. Other smaller non-polar regions with π electron rich indole rings, are distributed throughout the tubulin with distances of about two nanometres between them. The positioning of π electron clouds within about two nanometres of one another is suggested to allow the electrons to become entangled. This entanglement could spread through the microtubule and to other microtubules in the same dendrite. Following on recent research, it has become possible to compare the situation in microtubules to quantum coherence and entanglement in photosynthetic organisms, something unknown when researchers such as Tegmark argued against the possibility of functionally relevant quantum coherence in the brain. In photosynthetic systems light-harvesting chromophore molecules use dipoles to provide 99% efficiency in energy transfer from the light harvesting antennae to the reaction centre. The studies show that instead of quantum coherence being destroyed by the environment within the organism, a limited amount of noise in the environment acts to drive the system. 4.16:  Tryptophan Photosynthesis absorbs light in the red and infra red. These forms of light are not available to tryptophan in proteins, but tryptophan is able to use ultra violet light emitted by the mitochondria. In fact Tryptophan is sometimes referred to as chromophoric because of its ability to absorb UV light. It is becoming feasible to suggest that the same system that gives rise to quantum coherence in light-harvesting complexes could also give rise to it within the protein of neurons. 4.17:  Penrose & Hameroff 2011 In their latest joint paper published as a chapter in Consciousness and the Universe (2011) (22.) Penrose and Hameroff deal with aromatic rings and proposed hydrophobic channels within microtubules that could be crucial for a quantum theory of consciousness. They point to unexpected discoveries in biology. The most important change since Penrose and Hameroff first propounded their ideas in the 1980s and 1990s is the recent discoveries in biology relative to higher temperature quantum activity. In 2003 Ouyang & Awschalom showed that quantum spin transfer in phenyl rings (an aromatic ring molecule like those found in protein hydrophobic pockets) increases at higher temperatures. In 2005 Bernroider and Roy (23.) researched the possibility of quantum coherence in K+ neuronal ion channels. A more crucial discovery came in 2007 when it was demonstrated that quantum coherence was functional in efficiently transferring energy within photosynthetic organisms (Engel et al, 2007). Subsequent papers showed functional quantum coherence in multicellular plants and also at room temperature. In 2011 papers by Gauger et al  and Luo and Lu dealt with higher temperature coherence in bird brain navigation and in protein folding. Work by Anirban Bandyopadhyay with single animal microtubules showed eight resonance peaks correlated with helical pathways round the cylindrical microtubule lattice. This allowed ‘lossless’ electrical conductance. Tubulin & aromatic rings: building blocks of consciousness?  Each tubulin protein contains the amino acids tryptophan and phenylalanine with aromatic rings. Each hydrophobic pocket in the tubulin is suggested to be composed of four such aromatic rings, with the hydrophobic pockets being arranged in channels. 
Van der Waals London forces operate in the hydrophobic pockets in tubulin, based on the π electron rings of tryptophan and phenylalanine. This concept derives originally from Fröhlich, who suggested that proteins are synchronised by the oscillations of dipoles in the electron clouds of these amino acids. Anaesthetic gases are similarly suggested to work through their action on aromatic amino acids in hydrophobic pockets in neuronal proteins, including membrane proteins. Hydrophobic channels and long-range van der Waals forces:  A paper published in 1998 (Nogales et al, 8.) described the structure of the tubulin protein and identified the existence and location of the non-polar aromatic amino acids tryptophan and phenylalanine in tubulin. These are located in hydrophobic pockets, but these pockets are within 2 nanometres of one another, and collectively they can be interpreted as hydrophobic channels or pathways rather than mere pockets. This is suggested to allow linear arrays of electron clouds capable of supporting long-range van der Waals London forces. The quantum channels in individual tubulins are seen as being aligned with those in neighbouring tubulins within the microtubule lattice, and these provide helical winding patterns. The authors also make a direct reply to one critic in particular (McKemmish et al, 2010). McKemmish claimed that switching between two states of the tubulin protein in the microtubules would involve conformational changes requiring GTP hydrolysis, which in turn would involve an impossible energy requirement. The authors, however, claim that electron cloud dipoles (van der Waals London forces) are sufficient to achieve switching without large conformational changes. Where the Hameroff version of quantum consciousness remains ambitious relative to existing scientific knowledge is in the proposed link to the global gamma synchrony, the brain's most obvious correlate of consciousness. He proposes that coherence within dendrites connects via gap junctions to other neurons and thus to the neuronal assemblies involved in the global gamma synchrony. He thus proposes the existence of quantum coherence over large areas of the brain, sometimes including multiple cortical areas and both hemispheres of the brain. Hameroff pointed to gap junctions as an alternative to synapses for connections between neurons. Neurons that are connected by gap junctions depolarise synchronously. Cortical inhibitory neurons are heavily studded with gap junctions, possibly connecting each cell to 20 to 50 others. The axons of these neurons form inhibitory GABA chemical synapses on the dendrites of other interneurons. Studies show that gap junctions mediate the gamma synchrony. On this basis, Hameroff suggested that cells connected by gap junctions may in fact constitute a cell assembly, with the added advantage of synchronous excitation. In this scheme computations are suggested to persist for 25 ms, thus linking them to the 40 Hz gamma synchrony. The attempt to extend a proposal for quantum features from single neurons out to neuronal assemblies of millions of neurons resurrects the nay-sayers' objections about time to decoherence. The photosynthetic quantum states that have been demonstrated persist only over femtosecond and picosecond timescales. Where the decoherence argument still stands up is in dealing with a system that needs to be sustained for 25 milliseconds.
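Two pieces of simple arithmetic underlie the figures quoted here (the collapse-time expression is the standard statement of Penrose's objective reduction criterion, added for clarity rather than spelled out in the passage above). One cycle of a 40 Hz gamma oscillation lasts

T = 1 / 40 Hz = 25 ms,

which is where the 25 ms requirement comes from. In objective reduction the collapse time is estimated as

τ ≈ ħ / E_G,

where E_G is the gravitational self-energy of the difference between the superposed mass distributions. Requiring τ to be as long as 25 ms fixes how small E_G must be, and hence how much tubulin must be held in coherent superposition, which is why the Orch OR proposal needs quantum states involving very large numbers of tubulins spread across many neurons, and why the decoherence objection bites hardest at this timescale.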
Further to this, Hameroff's gamma-wide theory involves difficult arguments about the ability of coherence to pass from neuron to neuron via the gap junctions. Danko Georgiev, a researcher at Kanazawa University, also criticises Hameroff's requirement for microtubules to be quantum coherent for 25 ms. This has been generally regarded as an ambitious timescale for quantum coherence, and Georgiev objects on the grounds that enzymatic functions in proteins take place on a very much quicker 10-15 picosecond timescale. Georgiev wants to base his version of OR consciousness on this 10-15 picosecond timescale. Such a rapid form of objective reduction would also remove the necessity for the gel-sol cycle to screen microtubules from decoherence, as it does in the Hameroff version of objective reduction. Axons, dendrites and synapses:  Georgiev also criticises Hameroff's emphasis on conscious processing as being concentrated in the dendrites. He claims that Hameroff's scheme does not allow any consciousness in axons, and that this creates a problem in explaining the unreliable firing of synapses. Only 15-30% of axon spikes result in a synapse firing, and it is not clear what determines whether or not a synapse fires. He discusses the probabilistic nature of neurotransmitter release at the synapses, and the possible connection this has with quantum activity in the brain. The probability of the synapse firing in response to an electrical signal is estimated at only around 25%. Georgiev points out that an axon forms synapses with hundreds of other neurons, and that if the firing of all these synapses was random, the operation of the brain could prove chaotic. He suggests instead that the choice of which synapses will fire is connected to consciousness, and that consciousness acts within neurons. Each synapse has about 40 vesicles holding neurotransmitters, but only one vesicle fires at any one time. Again, the choice of vesicle seems to require some form of ordering. The structure of the grid in which the vesicles are held is claimed to be suitable to support vibrationally assisted quantum tunnelling. Georgiev emphasises the onward influence of solitons (quanta propagating as solitary waves) from the microtubules to the presynaptic scaffold protein, from where, via quantum tunnelling, they are suggested to influence whether or not synapses fire in response to axon spikes. Jack et al (1981) suggested an activation barrier, restricting the docking of vesicles and the release of neurotransmitters. The control of presynaptic proteins is suggested to overcome this barrier, and to regulate the vesicles that hold neurotransmitters in the axon terminals. This is suggested to be the process that decides whether a synapse will fire in response to an axon spike (a probability of only about 25%), and if it does, which of a choice of 40 or so vesicles will release its neurotransmitters. The system he describes involves the neuronal cytoskeleton, and particularly the pre- and post-synaptic scaffold proteins. Here, it is suggested that consciousness arises from the objective reduction of the wave function within these structures. The timescale of the system is argued to be defined by changes in tubulin conformations within the cytoskeleton and by the enzyme action in the scaffold proteins, which involves a timescale of 10-15 picoseconds, and thus implies a decoherence time on the same scale.
Georgiev points out that it is much easier to suppose a decoherence time of this length in the brain than the 25 ms demanded by the Hameroff proposals. 4.19:  Ion Channels and Consciousness – Gustav Bernroider As an aside from microtubules, Gustav Bernroider at Salzburg University has proposed a quantum information system in the brain that is driven by the entangled ion states in the voltage-gated ion channels of the membranes of neurons. These ion channels, situated in the neuron's membrane, are a crucial component of the conventional neuroscience description of axon spiking leading to neurotransmitter release at the synapses. The ion channels allow the influx and outflux of ions from the cell, driving the fluctuation of electrical potential along the axon, which in turn provides the necessary signal to the synapse. The work concentrates attention on the potassium (K+) channel, and in particular the configuration of this channel when it is in the closed state. This channel is traditionally seen as having the function of resetting the membrane potential from a firing to a resting state. This is achieved by positively charged potassium (K+) ions flowing out of the neuron through the channel. Recent progress in atomic-level spectroscopy of the membrane proteins that constitute the ion channels, and the accompanying molecular dynamics simulations, indicates that the organisation of the membrane proteins carries a logical coding potency, and also implies quantum entanglement within ion channels and possibly also between different ion channels. An increasing number of studies show that the channel proteins and their surrounding membrane lipids are associated with the probabilistic nature of the gating of the ion channels (Doyle, 1998, Zhou, 2001, Kuyucak, 2001). This work draws particularly on the work of MacKinnon and his group, notably his X-ray crystallography. The study shows that ions are coordinated by carboxyl-based oxygen atoms or by water molecules. An ion channel can be in either a closed or an open state, and in the closed state there are two ions in the permeation path that are confined there. This closed-gate arrangement is regarded as the essential feature with regard to their research work. The open gate presents very little resistance to the flow of potassium ions, but the closed gate is a stable ion-protein configuration. The ion channel serves two functions: selecting K+ ions as the ones that will be given access through the membrane, and then voltage-gating the flow of the permitted K+ ions. In the authors' view, recent studies also require a change in views both of the ion permeation and of the voltage-gating process. A charge transfer carried by amino acids is involved in the gating process. In the traditional model the charges were completely independent, whereas in the new model there is coupling with the lipids that lie next to the channel proteins. This view, which came originally from MacKinnon, is now supported by other more recent studies. The authors think that the new gating models are more likely to support computational activity than were the traditional models. Three potassium ions are involved in the ion channel's closed configuration. Two of these are trapped in the permeation path of the protein when the channel gate is closed. The filter region of the ion channel is indicated by the recent studies to have five binding pockets in the form of five sets of four carboxyl-related oxygen atoms. Each of the two trapped potassium ions is bound to eight of the oxygen atoms, i.e.
each of them is bound to two out of the five binding pockets. The authors' calculations predict that the trapped ions will oscillate many times before the channel re-opens, and the calculations also suggest an entangled state between the potassium ions and the binding oxygen atoms. This structure is seen as being delicately balanced and sensitive to small fluctuations in the external field. This sensitivity is viewed as possibly being able to account for the observed variations in cortical responses. The theory also relates the results of recent studies of the potassium channel and its electrical properties to the requirements for quantum computing. There have been schemes for quantum computers involving ion traps, based on electrostatic interactions between ions held in microscopic traps, that have a resemblance to Bernroider's interpretation of the possible quantum state of the K+ channel. The authors deny that the rapid decoherence of quantum states in the brain calculated by Tegmark applies to their model. They argue that the ions are not freely moving in the ion filter area of the closed potassium channel, but are held in place by the surrounding electrical charges and the external field. The ions are particularly insulated within the carboxyl binding pockets, and it is suggested that decoherence could be avoided for the whole of the gating period of the channel, which is in the range of 10-13 seconds. The authors also raise the question of whether, given quantum coherence in the ion channel, it is possible for the channel states to be communicated to the rest of the cell membrane. This could include connections to other ion channels in the same membrane, possibly by means of quantum entanglement. Bernroider's work might not be considered to be a fully fledged separate quantum consciousness theory. In the early part of the decade, Bernroider seemed to associate himself with David Bohm's implicate order, but the lack of much specific neuroscience in Bohm's version makes it hard to make any definite connection between it and the type of detailed neuroscientific argument offered by Bernroider. In the light of the advances in biology, and the potential for coherence to be supported by aromatic molecules within microtubules, it might be feasible to suggest that quantum coherence in the ion channels works together with coherence in other parts of the neuron. We have looked at the question of consciousness as a fundamental in terms of the quanta and spacetime, and we have looked at the possibility of quantum states in the brain. This brings us to the further question of how consciousness is related to larger-scale brain processing. Twentieth century consciousness studies tended to be very insistent that consciousness was the product of neural systems as described in textbooks that made no distinction between conscious and non-conscious processing. Any system that did what neurons did would produce consciousness, or alternatively consciousness was what it was like to have a brain, without any distinction between different parts of the brain. More recent research is indicative of consciousness arising in specific areas of the brain, and that on a transient basis. While this cannot be said to disprove the twentieth century claims, it does at least suggest a more careful and discriminating approach to understanding what gives rise to consciousness. The researchers Goodale and Milner (24.)
point to a ventral stream in the brain that produces conscious visual perception and a separate dorsal stream supporting non-conscious visuomotor orientations and movements. Goodale and Milner refer to a patient who had suffered brain damage as a result of an accident. She had difficulty in separating an object from its background, which is a crucial step in the process of visual perception. She could manipulate images in her mind, but could not perceive them directly on the basis of signals from the external world, thus demonstrating that images generated by thought use a different process from direct perceptions of the external world. In modern neuroscience, perception is not viewed as a purely bottom-up process resulting from analysis of patterns of light, but is seen as also requiring a top-down analysis based on what we already know about the world. The patient's visual problems seemed paradoxical. If a pencil was held out in front of her, she could not tell what it was, but she could reach out and position her hand correctly to grasp it. This contrast between what she could perceive and what she could do was apparent in a number of other instances. The researchers saw the patient's state as indicative of the existence of two partly independent visual systems in the brain, one producing conscious perception, and the other producing unconscious control of actions. They point to instances of patients with the opposite of their patient's problems, who are able to perceive objects, their size and location, but are unable to translate this into effective action. These patients may be able to accurately estimate the size of an object, but are unable to scale their grip in taking hold of it, despite having no underlying problem in their movement ability. These patients with exactly the opposite problems from the first patient are taken to suggest partly independent brain systems, one supporting perception, and the other supporting vision-based action. It has been found that even in more primitive organisms there can be separate systems for catching prey and for negotiating obstacles, with distinct input and output paths. This modularity is also found in the visual systems of mammals. The retina projects to a number of different regions in the brain. In humans and other mammals, the two most important pathways are to the lateral geniculate nucleus in the thalamus and the superior colliculus in the midbrain. The path to the superior colliculus is the more ancient in evolutionary terms, and is already present in more primitive organisms. The pathway to the geniculate nucleus is more prominent in humans. The geniculate nucleus projects in turn to the primary visual cortex or V1. The mechanisms that generate conscious visual representations are recent in evolutionary terms, and are seen as distinct from the visuomotor systems, which are the only systems available to more primitive organisms. The perceptual system is not seen as being specifically linked to motor outputs. The perceptual representation may be additionally shaped by emotions and memories, as well as by the immediate light signals from the environment. By contrast, visuomotor activity may be largely bottom-up, drawing on the analysis of light signals, and is not accessible to conscious report. As such, this appears little different from the systems used by primitive organisms, while perception is a product of later evolution.
Perceptual representations of the external world have meaning, and can be used for planning ahead, but they do not have a direct connection to the motor system. Earlier research by Ungerleider & Mishkin argued for two separate pathways within the cortex. The dorsal visual pathway leads to the posterior parietal region, while the ventral visual pathway leads to the inferior temporal region. The authors relate these two basic streams to the concepts of vision for action and vision for perception respectively. Studies have shown that damage to the dorsal stream results in deficits in actions such as reaching, while the ability to distinguish perceived visual images remains intact. Other studies have shown that damage to the ventral stream creates difficulties with recognising objects, but does not impair vision-based actions such as grasping objects. An interesting study shows that attempts by patients with dorsal damage to point to images actually improved if they delayed pointing until after the image had been removed. It was surmised that with the image gone, patients started to rely on a memory based on the intact ventral stream. Neurons in the primary visual cortex fire in response to the position and orientation of particular edges, colours or directions of movement. Beyond the primary visual cortex, neurons code for more complicated features, and in the inferior temporal cortex they can code for something as specific as faces or hands. However, while neurons may respond only to quite specific features, they can respond to these across a variety of viewpoints or lighting conditions. By contrast, neurons in the dorsal stream usually fire only when the subject responds to the visual signal, such as when they reach out to grasp an object. The ventral stream neurons appear to be moving the signals towards perception, whereas the dorsal stream neurons are moving signals towards producing action. The visuomotor areas of the parietal cortex are closely linked to the motor cortex. The authors suggest that the dorsal stream may also be responsible for shifts in attention, at least those made by the eyes. The ventral stream has no direct connections to the motor region, but has close connections with regions related to memory and emotions, and with regions providing information as to the function of objects. Even within the ventral/perception stream there are separate visual modules. Damage related to any one module can result in localised deficits, such as in the recognition of faces or of landmarks in the spatial environment. A cluster of visual areas in the inferior temporal lobe is responsible for most of these modules. This aspect of the research looks to point to the importance of a changing population of a small number of neurons, or even single neurons, in producing consciousness. 5.2:  Blindsight With the phenomenon of 'blindsight', patients have damage in the primary visual cortex V1. If V1 is not functioning, the relevant visual cells in the inferior temporal cortex remain silent regardless of what is presented to the eye. However, Larry Weiskrantz (25.), an Oxford neuropsychologist, showed that patients with this condition could nonetheless move their gaze towards objects that they could not consciously see, and later studies showed they could scale their grip and rotate their wrist to grasp such objects. These blindsight abilities are mainly visuomotor.
It is suggested that in these cases, signals from the eyes could go directly to the superior colliculus, a midbrain structure that predates the evolution of the cortex, and thence to the dorsal stream. It is suggested that while the ventral stream depends entirely on activity in V1, there may be an alternative route for the dorsal stream. Studies have shown the dorsal stream to be active even when V1 is inactive. The patient discussed earlier is considered to be similar to blindsight patients, although in her case V1 is still active, which accounts for her still having conscious vision, albeit with impairments. In this research, visual perception provided by the ventral stream is seen as allowing us to plan and envisage the consequences of actions, and to file representations in long-term memory for future use. Motor control, on the other hand, requires immediate accurate information using a metric that correlates with the external world. In perception the metric is relative rather than absolute. Thus in a picture or film the actual size of the image does not matter, and we judge scale by the relative size of objects such as people or buildings. Thus the computation used for the absolute external accuracy of the motor system needs to be different from the relative computation of visual perception. Probably the most established correlate of consciousness is the global gamma synchrony (53. & 54.). Here it is important to repeat the distinction between correlation and identity. The fact that the gamma is correlated to consciousness, and that we understand how the synchrony arises, does not of itself mean that we have explained consciousness. However, it seems reasonable to think that exploration of the gamma synchrony and its role might lead us towards an understanding of consciousness. The binding problem:  One important problem in consciousness studies is the so-called binding problem, as to why processing in spatially separated parts of the brain and in different modalities is experienced as a single unified consciousness. On the one hand, there is the unity of consciousness and on the other, the fact that the brain comprises a number of specialised although connected processing areas. In order for consciousness to become unified, it has to overcome the problem of being represented in different modalities. Further, it is generally agreed that there is no single central processing area in the brain. Only a small part of the brain's total processing is conscious, and much of the brain supports both conscious and unconscious processing. The studies of the gamma synchrony discussed below accord with the idea that it is this synchrony that creates the unity of consciousness. The processing of neurons uses a fluctuation in electrical potential referred to as firing, spiking or action potentials. The electrical potentials in individual neurons reach an axon terminal, which releases a neurotransmitter to a receptor on a neighbouring neuron. Axon spiking oscillates at particular frequencies. The gamma frequency band of about 30-90 Hz, mostly in its lower half, is the most important frequency range so far as consciousness is concerned. Numbers of neuronal assemblies can become synchronised to oscillate at this frequency. Studies suggest that local gamma processing is unconscious, whereas large-scale activity, referred to as global, such as reciprocal signalling between spatially separate neural assemblies, is a correlate of consciousness.
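Since the discussion that follows turns on phase relationships rather than on firing rates, it may help to show how such synchrony is typically quantified. The short Python sketch below computes the phase-locking value, a common measure of how consistently two band-limited signals hold their phase relationship; it is included only as an illustration and is not the analysis pipeline of the studies cited here.

import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    # Instantaneous phase of each (already band-passed) signal via the analytic signal.
    phase_x = np.angle(hilbert(x))
    phase_y = np.angle(hilbert(y))
    # Magnitude of the mean phase-difference vector: 1 = perfect locking, 0 = none.
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

# Toy example: two noisy 40 Hz signals with a fixed phase lag are strongly phase-locked
# even though neither oscillates more vigorously than the other.
t = np.arange(0.0, 1.0, 0.001)                                  # one second at 1 kHz
a = np.sin(2 * np.pi * 40 * t) + 0.3 * np.random.randn(t.size)
b = np.sin(2 * np.pi * 40 * t + 0.8) + 0.3 * np.random.randn(t.size)
print(phase_locking_value(a, b))                                # close to 1

In this sense two assemblies can be strongly synchronised without any increase in amplitude or firing rate, which is the distinction drawn repeatedly in the studies below.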
Research indicates a close correlation between global gamma synchronisation and conscious processing (26. Lucia Melloni et al, 2007). Activity related to conscious responses is more synchronised, but not more vigorous. In human subjects, conscious processing has been related to phase-locked gamma oscillations in widely distributed cortical areas, whereas unconscious processing produces only local gamma activity. This is argued to be a so-called 'small worlds' system, where there is a coexistence between local and long-range networks. In the brain, it is suggested that the local networks are between neurons only a few hundred micrometres apart within layers of the cortex, while the long-distance networks run mainly through the white matter, and link spatially separated areas of the cortex. It is the latter that can establish a global synchrony that is correlated to consciousness. 5.4:  Experimentation Melloni et al suggest that masking is a good way of studying consciousness, because this allows the same stimuli to be either conscious or unconscious. In a study run by the authors, words could be perceived in some trials but not in others. Local synchronisation was similar in both cases, but with consciously perceived words there was a burst of long-distance gamma synchrony between the occipital, parietal and frontal cortices. Subsequent to this burst, there was activity that could have indicated a transfer of information to working memory, while an increase in frontal theta oscillations may have indicated material being held in working memory. Words processed at the unconscious level could lead to an increase in power in the gamma frequency range, but only conscious stimuli produced increases in long-distance synchronisation. This, plus possibly the theta oscillation, looks to be a requirement for consciousness. In another study, long-distance synchronisation in the beta as well as the gamma range was observed. Recent studies suggest a nesting of different frequencies of theta and gamma oscillations when there is conscious processing. Therefore long-distance synchronisation looks to be a requirement for consciousness, and conscious stimuli are seen to be associated with phase-locking of gamma oscillations across spatially distributed regions of the cortex, and also with increases in synchrony without increases in the neuronal firing rate. A further study (27. Wolf Singer, 2010) discusses the rhythmic modulation of neuronal activity. During processing in the cortex, the brain increasingly selects for the relationship between objects. This involves interactions between different parts of the cortex. There is a requirement to cope with the ambiguity of the external world. The environment may contain objects with contours that overlap, or are partly hidden, and these conflicting signals have to be resolved in the cortex. Further to this, some objects are encoded in different sensory modalities. Evidence suggests that this process involves not only individual neurons but also assemblies of neurons (28. Singer, 1999) (29. Tsunoda et al, 2001). The possible conjunctions in perception are too numerous to be dealt with by individual neurons, but are instead handled by assemblies of neurons, with each neuron relating to particular aspects of the unified consciousness. There appear to be two stages to this process. There is a signal to indicate that certain features are present.
This operates on a 'rate code' basis, where a higher discharge frequency codes for a greater probability of a particular feature being present. The cortex is organised into neuronal columns extending vertically through the layers of the cortex. Synchronisation is related to connections linking these neuronal columns, which are thought to encode for linked features. The inferior temporal cortex is regarded as the likely site for the production of visual objects, and object-related assemblies are associated with synchronisation. Oscillations are driven by inhibitory neurons through both synapses and gap junctions (30. Kopell et al, 2000) (31. Whittington et al, 2001). Inhibitory inputs to pyramidal cells favour discharges at depolarising peaks, and this allows synchrony in firing. Locally synchronised oscillations can become phase-locked with others that are spatially separated. Synchronisation also allows better control of interactions between neurons. Excitatory inputs are effective if they arrive at the depolarising slope of an oscillation cycle and ineffective at other times. This means that groups of neurons that oscillate in synchrony will be able to signal to one another, and groups that are out of synchrony will be ignored. This mechanism can function both within neural assemblies and between separated assemblies. The frequency and phase of oscillation can alter so as to influence signalling. Facial recognition:  In one study, neurons responding to eyes, noses and faces were shown to synchronise to recognise a face. If the individual components were scrambled into a non-face arrangement, then synchrony did not arise. However, the scrambling into a non-face arrangement did not alter the discharge rate, only the synchrony. Focus of attention on objects also caused increased synchrony in the beta and gamma bands. Here again, synchronisation did not necessarily relate to increased discharge rates. A further point of interest is the relationship between the global gamma synchrony and consciousness-related firing in single neurons. There is a tension at present between evidence relating subjective perception to the activity of large neuronal assemblies linked by the global gamma synchrony and other studies relating it to the activity of much smaller numbers of neurons. Since the correlations with consciousness appear strong in both cases, it seems likely that consciousness will be found to involve both types of process. With studies related to small numbers of neurons, it is shown that neurons are selective for particular images or categories of image, and that most neurons will be inactive in relation to most objects. Face recognition and localised hot spots:  A recent study by Rafael Malach (32.-35.) indicates that while perception involves widespread cortical processing, the emergence of an actual percept sometimes involves only a small number of localised hot spots, in which there is intense and persistent gamma activity. Malach used the area of face recognition to clarify this concept. Studies indicate the existence of so-called totem cells (a reference to totem poles with carved faces) that are able to recognise a number of faces. The hot spots are suggested to involve intense activity between several of these totem neurons resulting in a sort of vote. If the same face is recognised by a majority or most of the neurons, the face is consciously recognised. The presumption seems to be that this would apply for most forms of perception and not just face recognition.
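The 'vote' idea can be restated as a toy rule (this is only an illustrative sketch of the concept as described above, not Malach's model): each hot-spot neuron signals the identity it currently matches best, or stays silent, and conscious recognition is reported only when a clear majority agree.

def conscious_recognition(votes, majority=0.5):
    # votes: the identity each 'totem' neuron currently signals, or None if silent.
    active = [v for v in votes if v is not None]
    if not active:
        return None
    best = max(set(active), key=active.count)
    # Report recognition only if the winning identity carries a majority of all neurons.
    return best if active.count(best) > majority * len(votes) else None

print(conscious_recognition(["face A", "face A", None, "face A", "face B"]))  # face A
print(conscious_recognition([None, "face A", None, "face B", None]))          # None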
Malach’s studies hint at important possibilities. Firstly, they raise the game for the individual neuron, from a simple switch, to something probably involved in more sophisticated processing. If we accept a conscious processing role for the individual neuron, it puts a different light on the global gamma synchrony, as a possibly classical structure that simply coordinates the activity of a number of hot-spot neurons, in order to produce the unity of consciousness. Thus in Malach’s example, face-recognition is not the end of the problem, because we do not usually perceive faces in isolation but as part of an environment. This suggests that the gamma synchrony could ensure that face recognition is coordinated with other hot-spot neurons that recognise clothing, furniture, a room or a surrounding landscape. 5.6:  ‘All or nothing’ neurons A further study also involving Malach looked at the response of single neurons in the medial temporal lobe, while subjects looked at pictures of familiar faces or landmarks. The response of the neurons studied correlated with conscious perceptions reported by the subjects of the study. Visual perception is processed by the ventral visual pathway, which goes from the primary visual cortex to the medial temporal lobe. Recent studies have shown that neurons in the medial temporal lobe fire selectively to images of individual people. In some trials, the duration of stimuli was right on the boundary of the time needed for conscious recognition of an object, so that it was possible to compare the behaviour of the neurons when an object was recognised and not recognised by the subject. One finding of this study was the ‘all-or-nothing’ nature of the neuronal response. There was no spectrum involved. Either the neuron fired strongly, in correlation with the subject reporting recognition, or there was very little activity. The responses were not correlated with the duration of the stimuli, because the responses of the neurons lasted considerable longer than the stimuli. In one trial, a single neuron was shown to respond selectively to a picture of the subject’s brother, but not to other people well known to the subject. Particularly noted is the marked difference in the firing of the neuron when the subject’s brother was recognised and not recognised. The stimulus duration of 33 ms meant that half the time the image was recognised, and half the time not recognised. The neuron was nearly silent when the image was not recognised, but fired at nearly 50 Hz when there was conscious recognition, indicating an ‘all-or-nothing’ response from the neuron, correlated to subjective report of recognition. The response exceeded the duration of the stimulus, and it was shown that the range of signal duration had little influence on the neuron’s response. In another test, a single neuron went from baseline to 10 spikes per second when the subject recognised a picture of the World Trade Centre, but showed little response to all other images that were presented. Again the neuron fired in an ‘all-or-nothing’ fashion, depending on whether there was conscious recognition. In five trials not resulting in conscious recognition, this neuron did not fire a single spike. In yet another trial, the firing of a single neuron jumped from 0.05 Hz to 50 Hz when the subject reported recognition of an individual. The overall conclusion from these trials is that there is a significant relationship between the firing of neurons in the medial temporal region and the conscious perceptions of subjects. 
Further to this, the activity of the neurons lasted for substantially longer than the stimuli, and had only a marginal correlation with the stimuli. In particular, it is noted that with stimuli at a duration where exactly the same image was recognised in some cases but not in others, there was an entirely different (all-or-nothing) response from the neuron, according to whether or not the subject consciously recognised the image. Other neurons near to the medial temporal neurons studied were shown to respond to different stimuli from those that activated the studied neurons. These findings are stated to agree with earlier single-cell studies, including studies involving the inferior temporal cortex and the superior temporal sulcus. This study serves to refute one of the popular arguments of twentieth century consciousness studies, to the effect that consciousness was ‘just what it was like to have a brain or neural processing’. The study demonstrates that neural processing is completely distinct for exactly the same signal: a signal with a duration that placed it on the boundary of being consciously recognised or not recognised produced almost no response if it was not consciously recognised, but a vigorous response if it was consciously recognised. A study by Rafael Malach et al shows a correlation between consciousness and a jump from baseline to 50 Hz spiking in single neurons. Rather similar experiments show a correlation between global gamma synchrony and conscious experience. The problem here is to discover the link, if any, between these two correlations. The authors ask to what extent the spiking activity of individual neurons is related to the gamma local field potential. Earlier studies had shown a confusing variation in the degree of correlation between neuronal spiking and gamma activity, with some studies showing a strong correlation and others showing only a weak correlation. The authors here think that they have a resolution to the arguments that have arisen around this confusing data. Their study demonstrates that most of the variability in the data can be explained in terms of whether or not the activity of individual neurons is correlated to the activity of neighbouring neurons. A relationship with gamma synchrony is apparent where there is correlated activity in neighbouring neurons. The link between individual neurons that are associated with other active neurons and the gamma synchrony is apparent both when the brain is receiving sensory stimulation and when activity is more introspective. The gamma synchrony is considered to arise from the dendritic activity of a large number of neurons over an extensive area of the cortex. This study shows that the relation between the activity of individual neurons and gamma correlates with the extent to which the activity of the neuron is linked to the firing rate of its neighbouring neurons. This establishes a relationship between gamma activity and a large number of individual neurons distributed over a region of the cortex. In the study discussed here, subjects watched a film. During this, scanning showed a high correlation between the spiking of individual neurons and gamma activity that arose at the same time. But this did not happen in all cases. It was found that the main factor relating to whether or not neuronal spiking related to gamma activity was the degree of correlation in spiking between neighbouring neurons. This study was based on recording the activity of several individual neurons.
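The kind of analysis being described – relating a neuron’s spiking both to its neighbours’ activity and to gamma-band power – can be sketched with synthetic data. Everything below is invented for illustration (a slowly waxing and waning envelope standing in for gamma power, Poisson spike counts); it only shows why a neuron that is co-active with its neighbours will also track gamma power, while an independently firing neuron tracks neither.

import numpy as np

rng = np.random.default_rng(2)
n_bins = 4000
t = np.arange(n_bins) * 0.01                          # 10 ms bins
gamma_power = 1 + 0.8 * np.sin(2 * np.pi * 0.2 * t)   # toy gamma-band power trace

# Neuron A shares a common drive with its neighbours; neuron B fires independently.
neuron_a   = rng.poisson(2.0 * gamma_power)
neighbours = rng.poisson(2.0 * gamma_power * np.ones((5, n_bins)))
neuron_b   = rng.poisson(2.0, size=n_bins)

def corr(x, y):
    return round(float(np.corrcoef(x, y)[0, 1]), 2)

for name, cell in [("A (co-active with neighbours)", neuron_a),
                   ("B (independent)", neuron_b)]:
    print(name,
          "| corr with neighbours:", corr(cell, neighbours.sum(axis=0)),
          "| corr with gamma power:", corr(cell, gamma_power))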
It was shown that the correlation between the spiking of the individual neuron and gamma synchrony could be predicted from the level of correlations between the activity of neighbouring neurons. When neurons were not correlated with their neighbours, gamma activity was at a low level.
Consciousness and the sensory cortex:  Rafael Malach again argues that, at least in some cases, conscious perception does not require any form of ‘observer’ in the prefrontal area, but needs only activation in the sensory cortex. This claim is based on fMRI studies performed by Malach and colleagues. In one study where subjects had their brains scanned while watching a film, there was widespread activation of the sensory cortex in the rear of the brain, coinciding with relatively little activity in the frontal areas, where a significant degree of inhibition was apparent. Malach pointed out that the use of a film contrasted with the more normal brain scanning procedure, in which stationary objects are presented in isolation, without being embedded in a background and without other modalities such as sound. With subjects watching a film there was synchronisation across 30% of the cortical surface. This synchronisation extended beyond both visual and auditory sensory cortices into association and limbic areas. Emotional scenes in the film were correlated with widespread cortical activation. The study appears interesting in terms of the ability to synchronise more than one sensory modality plus the emotional areas of the brain. It was further shown that the more engaging the film, the less activity there was in the frontal areas. Malach suggests that the role of the frontal areas is not to create perceptual consciousness but to deliberate on the significance of the sensory experience and to make it reportable. When introspective or deliberative activity is in process, it is accepted that both sensory and prefrontal areas may be activated. If we accept this approach, it becomes impossible to explain consciousness entirely in terms of the self, and the easy let-out of deconstructing the self and then claiming to have explained consciousness is closed off. One study (Hasson et al, 2004) also scanned brain activation in subjects viewing a film. In general, the rear part of the brain, which is orientated towards the external environment, demonstrated widespread activation. In contrast, the front of the brain and some areas of the rear brain showed little activation. These less active areas are referred to as the ‘intrinsic system’ that deals with introspection, and the ‘first person’ or ‘self’ aspects of the mind. Reportability is presumed to arise in this part of the brain. This network shows a major reduction in activation at the times that perception is most absorbing. This observation is exactly the reverse of any notion that perception and reporting should work in tandem. Malach suggests that conceptually there could be an axis running from, firstly, introspective activity in the prefrontal, through, secondly, attention to external-world material such as a film, which can activate much of the sensory cortex while inhibiting prefrontal activity, to, thirdly and finally, experiences such as Zen meditation, which can be seen as pure perception without any residual awareness of the self.
This type of pure perceptual/absence-of-self experience is reported as being associated with other forms of altered states of consciousness. On a more everyday level, Malach suggests that when subjects are sufficiently absorbed by their sensory perceptions, they ‘lose themselves’ in the sense of not having any introspection about what they are perceiving. A typical example is an interesting film in which the viewer is absorbed by the drama and suspends any personal introspection or attempts to report what they are experiencing. It is also stressed here that consciousness arises of its own accord in the sensory cortex, without being dependent on frontal cortices supposed to be related to the sense of self. This looks to undermine attempts to dismiss the problem of consciousness by conflating it with the self, and then deconstructing the self. On the other hand, it would probably be going too far in the other direction to say that consciousness does not arise at all in the frontal areas. In particular, some activity in the orbitofrontal cortex can be correlated to conscious perception rather than the strength of the signal, in much the same way that Malach has indicated occurs with visual perceptions. Malach speculates that these experimental findings support the idea that subjective experience arises in the areas where sensory processing occurs, rather than having to be referred on to any type of higher-order read-out or some form of separate ‘self’. In this view, sensory perceptions are seen as arising in a group of neurons. Studies show that high neuronal firing rates over an extended duration and dense local connectivity of neurons are associated with consciousness. Malach thinks that studies of brain processing can differentiate conscious perception from the process of reporting the perceptions, and that conscious perception does not require some higher-order read-out system or some form of self, but can be handled by groups of neurons, within which individual neurons provide the perceptual read-out or subjective experience. He also argues that this supports the view that consciousness arises in each of a number of single neurons in a network, rather than having to refer to some higher structure. The perception arises when all the neurons in a particular network are informed about the state of the others in the network. Thus the perception is suggested to be both assembled by, and read out or subjectively experienced by, the same set of neurons. Each active neuron is suggested to be involved in both creating and experiencing the perception. This view of conscious perception has some important implications for consciousness theory as a whole. In the first place, it makes it possible to consider looking for the process by which consciousness arises in individual neurons rather than brain-wide assemblies. This is more easily consistent with the recent findings that quantum coherence and possibly entanglement are functional in individual living cells. A further point is that the idea of consciousness in neurons or small high-density areas undermines the attempt by some consciousness theorists to conflate consciousness and self-consciousness, and then claim that a deconstruction of the self has explained consciousness.
Ambiguous images:  Similar evidence emerges from studies of the well-known Rubin ambiguous vase-face illusion.
High fMRI activity correlates with the emergence of a face perception, although this emergence into consciousness does not involve any alteration in the external signal (Hasson et al, 2001). This is another demonstration that brain activity can correlate to conscious perceptions rather than the nature of external signals. The authors consider that consciousness is correlated with non-linear increases in neural activity, here described as ‘neuronal explosions’ and occurring in sensory areas. Other fMRI studies have distinguished two types of fMRI reading. Sensory activity is marked by rapid but short bursts of neuronal firing, while rest activity in neurons involves slow, low amplitude activity. 5.7:  Further selective response studies Quiroga, Q. et al (36.) emphasise that studies over the last decade have shown that some neurons in the medial temporal lobe respond selectively to complex visual stimuli. The studies suggest a hierarchical organisation along the ventral visual pathway. Neurons in V1 code for basic visual features, whereas at the stage of the inferior temporal cortex neurons can code selectively for complex shapes or even faces. The inferior temporal cortex projects to the medial temporal cortex where neurons are found to be selectively responsive to categories such as animals, faces and houses, as well as the degree of novelty of images. Activity in the medial temporal lobe is thought to be linked to creating memories rather than actual recognition, a process that seems to be more closely linked to the inferior temporal lobe. In a study by the authors, a hippocampal neuron fired in response to the image of a particular actor. Recording of the activity of a handful of neurons could be used to predict which of a number of images a subject was viewing at an accuracy far above chance. About 40% of medial temporal lobe neurons were found to be selective in this way, although some could fire selectively in response to more than one image. However, when this was case the images were often connected, such as two actresses in the same soap opera, or two famous towers in Europe. In fact it is estimated that selectively responding cells would respond to between 50 and 150 images. The authors are not trying to revive the idea of the ‘grandmother cell’ where one and only one neuron could respond to a particular image, for instance the image of the subject’s grandmother. Rather than that, the authors have estimated that out of one billion cells in the medial temporal lobe, two million could be responsive to specific percepts.  These cells respond to percepts that are built up in the ventral pathway rather than detailed information falling on the retina. 5.8:  Distinction between physical input and conscious percepts Kreiman, Fried & Koch (37.) demonstrated that the same environmental input to the retina can give rise to two quite different conscious visual percepts. In this study, the responses of individual neurons were recorded. Two-thirds of the visually selective medial temporal lobe neurons recorded showed changes in activity that correlated with the shifts in what was subjectively perceived, rather than the retinal input. Flash suppression is an experimental technique by which an image is sent to one eye and then a different image to the other eye. The newer image will suppress the first input. 
Neurons that select for the initial input and not the input to the second eye will be inactive when the first input is suppressed in this way, although the first image is still physically present on the retina. In visual illusions such as the Necker cube, the same retinal input can produce two different subjective perceptions. There is a distinction here between what happens in the primary visual cortex and in the later visual areas. Activity in the primary areas correlates to the retinal input, rather than any subjective perception. In this study, performed in the US in 2002, a neuron in a subject’s amygdala responded selectively to the image of President Clinton, while failing to respond to 49 other test images presented. In the case of Clinton’s image, the neuron’s firing rate jumped from a baseline of 2.8 spikes a second to 15.1 spikes per second. However, the neuron did not react when the initial image of Clinton was suppressed by an image for which the neuron was not selective. Another amygdala neuron increased its firing in response to some faces, but was inactive when an image it didn’t select for was flashed to the other eye. A neuron in the medial temporal lobe increased its firing in response to pictures of spatial layouts and not to other stimuli. Here again the activity did not occur when a different image was flashed to the other eye. In all these cases, the physical input to the first eye was continuing, but was not getting into conscious perception. Out of 428 neurons studied in the medial temporal lobe, 44 responded selectively to particular categories and 32 to specific images. None of these neurons were active when the images or categories they were selective for were present as retinal input but were suppressed from subjective experience by a second image to the other eye. However, they could be active when both images were present, but the image they selected for was dominant. In the experimental subjects, two out of three medial temporal lobe neurons changed their firing in line with subjective perceptions, but activity did not change if an input was present on the retina but not subjectively experienced because of the retinal input to the other eye. This study could be seen as laying to rest two favourite ideas of twentieth century consciousness studies. The first was the idea that consciousness was non-physical. This approach is not really coherent within a scientific paradigm in any case, but experiments now demonstrate a correlation between subjective perceptions and physical levels of activity in individual neurons. Similarly, the mind-brain identity concept seemed to propose that in some mysterious way consciousness was identical to the whole operation of the brain, whereas this and other experiments clearly relate consciousness to the activity of individual neurons and specific neuronal assemblies, albeit both the neurons and assemblies involved are constantly changing.
5.9:  Object recognition
Kalanit Grill-Spector discusses studies with fMRI that have shown that activation in particular brain regions correlates with the recognition of objects and also of faces. Some regions are involved in both face and object recognition. Object recognition occurs in a number of regions in the occipital and temporal cortex collectively referred to as the lateral occipital complex (LOC). These regions respond more strongly when the subjects are viewing objects.
The involvement of LOC is thought to be subsequent to the early visual areas (V1-V4) and in the ventral stream, responding selectively to objects and shapes and showing less response to contrasts and positions. There are object-selective regions in the dorsal stream, but these do not correlate with object perception, and are suggested to be involved with guiding action towards objects. The LOC is responsive to objects without reference to how the object is defined, i.e. it does not differentiate between a photograph and a silhouette. The LOC responds to shapes rather than surfaces, and it responds even if part of the shape is missing. It is suggested that a pooled response across a population of neurons allows a  response to objects that does not vary according to the position of an object. This could be taken to indicate a role for individual neurons. Each neuron’s response varies according to the position of the object. It appears that for any given position in the visual field each neuron’s response is greater for one object than for all other objects presented. Apart from the LOC other regions in the ventral stream have been shown to respond more to particular categories of object. One region showed more response to letters, several foci responded more to faces than objects, including the fusiform face area, while other areas responded more to buildings and places than to faces. Nancy Kanwisher et al have suggested that the ventral temporal cortex contains modules for the recognition of particular categories such as faces, places or parts of the body. However, it is suggested that the processing of faces is extended to a more sophisticated level, given the requirements for social interaction. There may be a distinction between processing to recognise individual faces and processing to recognise categories, such as horses or dogs as a category. However, it is suggested very expert recognition of categories, such as ornithologists recognising a bird may involve a process similar to face recognition. Rafael Malach et al suggest that category recognition may respond more to peripheral input, while face and letter recognition depends more on central stimuli. 5.10:  Gamma, neurons & consciousness This could suggest that the conscious response in a single cell is linked to or dependent on global gamma synchrony. However, it would appear not necessary for the whole collection of neuronal assemblies to come into consciousness, but only for the synchrony to trigger consciousness in the individual neuron. This might make it possible to invert Hameroff’s proposal for quantum coherence in neurons to drive consciousness in the gamma synchrony. The opposite case of the synchrony triggering consciousness in single neurons would be more compatible with the type of quantum coherence that is functional in photosynthetic organism. This tends to look like pieces of a jigsaw puzzle, and unfortunately one that we may not get much help in assembling. We know that the global gamma synchrony correlates to consciousness. We know that a jump to 50 Hz spiking in individual neurons correlates to consciousness. We also now know that the spiking in the individual neurons correlates to gamma if the spiking of the individuals correlates to their neighbours. 
From the point of view of recent findings relative to quantum coherence in organic matter, it has become most plausible to think in terms of consciousness arising within individual neurons, but the road there may involve feed-forward and feedback, as is often the case in brain processing. Processing in one neuron as a result of external signals may set off other neighbouring neurons, which ultimately broaden into a neuronal assembly oscillating as a local gamma synchrony. Longer-range signals to other neuronal assemblies would set up global gamma synchrony. It might only be at that point that signals went back to individual neurons, triggering quantum coherent activity within the neuron. This might account for the 500 ms time lag for signals to come into consciousness (the Libet half second), while at the same time being compatible with the femto- and picosecond timescales of functional quantum activity in biological systems. Very speculative, but perhaps this at least provides a starting point or rational framework for thinking about the consciousness problem. In recent years, the most important neuroscientific research has arguably involved the role of emotion or emotional evaluation in the brain. This was a previously very neglected area, due to various biases and misconceptions in twentieth century neuroscience. Our attention is here focused on how the orbitofrontal cortex assigns reward/punisher values to representations projected from other cortices, and how the basal ganglia integrate these subjectively-based values with inputs from the other parts of the cortex and the limbic system.
5.12:  Subjective emotion, choice & a common neural currency
We are conscious of emotions, and they allow us to assess the reward values of actions. Without the emotion-based assessment of rewards, rational processing is not by itself adequate to deliver normal behaviour. While reasoning can be seen as working with or without consciousness, the subjective experience of emotion is closely entwined with the subjective assessment of current or future rewards. In fact, this ability to have a subjective preference or choice can be argued to be the real distinction between conscious and non-conscious systems, the difference between an automated one-to-one response and the conscious but unpredictable preference of one thing over the other. Emotion, anticipation of rewards and enjoyment of the same are all here seen to be based on subjective experience, and the key importance of these factors for behaviour suggests that subjective emotion is a common neural currency underlying the determination of behaviour. It is hard to discern a purely algorithmic basis for this processing, since the weighting of two subjective experiences seems to require the injection of initially arbitrary weights, suggesting a non-computable or non-algorithmic element.
Rewards and punishers:  Modern descriptions of emotional processing in the brain revolve round a framework of ‘rewards’ and ‘punishers’, together referred to as ‘reinforcers’, with subjects working to gain rewards and to avoid punishers. Some stimuli are primary reinforcers, so-called because they do not have to be learned. Other stimuli are initially neutral, but become secondary reinforcers because, through learning, they become associated with pleasant and unpleasant stimuli. Reward assessment is argued to be implemented in the orbitofrontal region of the prefrontal cortex and in the amygdala, a part of the subcortical limbic system.
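The way an initially neutral stimulus becomes a secondary reinforcer through pairing is conventionally captured by a simple associative-learning update. The sketch below uses the standard Rescorla–Wagner form purely as an illustration of the idea; it is not a claim about how the orbitofrontal or amygdala actually implement such learning, and the learning rate, reward magnitude and trial count are arbitrary.

# Illustrative Rescorla-Wagner style update: a neutral cue repeatedly paired
# with a primary reward gradually acquires its own (secondary) reward value.
alpha = 0.2        # learning rate (arbitrary)
reward = 1.0       # value of the primary reinforcer
value = 0.0        # learned value of the initially neutral cue

for trial in range(1, 21):
    prediction_error = reward - value   # obtained minus expected
    value += alpha * prediction_error   # cue value moves toward the reward
    if trial in (1, 5, 10, 20):
        print(f"trial {trial:2d}: cue value = {value:.2f}")
# After enough pairings the cue itself predicts reward (value close to 1.0),
# which is the sense in which it has become a secondary reinforcer.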
Emotions are thus viewed as states produced by reinforcers. The amygdala, the orbitofrontal and the cingulate cortex are seen as the brain areas most involved with emotions. Emotional states are usually initiated by reinforcing stimuli present in the external environment. The decoding of a stimulus in the orbitofrontal and amygdala is needed to determine which emotion will be felt. 5.13:  Neutral representations In respect of emotions, the brain is envisaged as functioning in two stages. To take the best known example of the visual system, input from the eyes is processed in the rear (occipital) area of the brain, and then progressively assembled into a conscious image arising in the inferior temporal cortex. At this stage, these representations are neutral in terms of reward value. Thus visual representations in the inferior temporal, or analogous touch representations in the somatosensory cortex, are shown to be neutral in terms of reward value, until they have been projected to the amygdala and the orbitofrontal. The brain is organised first to process a stimulus to the object level, and only after that to access its reward value. Thus reward/punisher values are learned, in respect to perceived objects produced by the later stages of processing, rather than the pixels and edges produced by the earlier stages of processing. 5.14:  Orbitofrontal cortex – subjective experience over strength of signal The orbitofrontal region of the prefrontal cortex is seen as the most important region for determining the value of rewards or punishers (55.). Objects are first represented in the visual, somatosensory and other areas of the cortex, without having any aspect of reward value. This only arises in the orbitofrontal and the amygdala. Studies show that orbitofrontal activity correlates to the subjective pleasantness of sensory inputs, rather than the actual strength of the signal. The orbitofrontal projects to the basal ganglia, which appear to integrate a variety of cortical and limbic inputs in order to drive behaviour. Thus subjective emotional assessment occurring mainly in the orbitofrontal would appear to play an important part in determining behaviour. The orbitofrontal cortex receives input from the visual, auditory, somatosensory and other association cortex, allowing it to sample the entire sensory range, and to integrate this into an assessment of reward values. In the orbitofrontal, some neurons are specialised in dealing with primary reinforcers such as pain, while others are specialised in dealing with secondary reinforcers. Orbitofrontal neurons can reflect relative preferences for different stimuli. The subjective experience of one signal can be altered by another from a different modality. The impact of words can influence the subjective impression of an odour, and colours can also influence the perception of odour. It has also been shown that the subjective quality of, for instance odours, can be altered by the top-down modulatory impact of words, while colour is thought to influence olfactory judgement. There is seen to be a triangular system involving association cortex, amygdala and orbitofrontal. 5.15:  Experimentation: Correlation with subjective experience An important study looked at the activation produced by the touch of a piece of velvet and a touch of a piece of wood in the somatosensory cortex, and the activation of the orbitofrontal produced by the same touches (38-40.). 
This trial compared the pressure of a piece of wood with the perceived pleasant pressure of a piece of velvet. It was demonstrated that the pressure of the wood produced a higher level of activity in the somatosensory cortex than the pressure of a piece of velvet. However, in the orbitofrontal the same pressure from velvet produced a higher level of activation, with the difference between velvet and wood being correlated to the different subjective appreciation of the two pressures. The less intense but reward-value-positive stimulus produced more activation in the orbitofrontal than a more intense but reward-value-neutral stimulus. Similarly, a reward-value-negative stimulus also produced more activation in the orbitofrontal than a neutral stimulus that was registered as stronger by the somatosensory cortex. Researchers are clear in their conclusion that the orbitofrontal registers emotionally positive or negative aspects of an input, rather than any other aspects such as intensity of signal. Thus the subjective pleasantness of the velvet touch relates directly to the activation level of the orbitofrontal cortex, demonstrating a connection between subjective appreciation and the core mechanisms for decision taking and behaviour. The orbitofrontal and, to a lesser extent, the anterior cingulate cortex are seen here as being adaptive in registering the emotional or reward-value aspects of the initially reward-neutral somatosensory stimulation. Studies suggest that the orbitofrontal deals with a variety of types of reward values. It has been suggested that the brain has a common neural currency for comparing very different reward values. Apart from the velvet/wood study, other studies show that the level of orbitofrontal activity correlates to the subjective pleasure of the sensation, rather than the strength of the signal being received. Activation in response to taste is seen to be in proportion to the subjective pleasantness of the taste, and in responding to faces, activity increases in line with the subjective attractiveness of the face. With taste, the orbitofrontal can represent the reward value of a particular taste, and this activation relates to subjective pleasantness. In humans the subjectively reported pleasantness of food is represented in the orbitofrontal. Studies of taste in particular are seen as evidence that aspects of emotion are represented in the orbitofrontal. With faces, the activation of the orbitofrontal has been found to correlate to the subjective attractiveness of a face. This subjective ability enables flexibility in behaviour. If there is a choice of carrots or apples, carrots might be preferred and the top preference signal in the brain would correlate to carrots. However, if the range of choice was subsequently expanded to include bananas, the top preference signal could switch to bananas. This reaction looks to require some form of preferred qualia, referring to a previous subjective experience of bananas.
5.16:  Different types of reward – money v. sex
One study attempted to compare the brain’s processing of monetary rewards with its processing of rewards in terms of erotic images. Monetary rewards were shown to use the anterior lateral region of the orbitofrontal cortex, while erotic images activated the posterior part of the lateral orbitofrontal cortex and also the medial orbitofrontal cortex. Brain activity in these orbitofrontal regions increased with the intensity of reward, but only for types of reward in which those areas were specialised.
By contrast, activity increased for both monetary and erotic rewards in the ventral striatum, the anterior cingulate cortex, the anterior insula and the midbrain. Other studies using rewards such as pleasant tastes have suggested a similar distinction between the posterior and anterior regions of the orbitofrontal. The bilateral amygdala was the only subcortical area to be activated in reward assessment, and it was only activated by primary rewards such as erotic images and not by abstract rewards such as money. This area is more strongly connected to the posterior and medial orbitofrontal than to the anterior orbitofrontal. One distinction that is argued to emerge is between immediate reward, and the more abstract quality of a monetary reward that can only be enjoyed over time. The authors argue that studies suggest that it is not the actual delay in benefiting from the monetary reward, but its abstract nature, that leads to it being processed in a different area. It was also found that patients with damage to the anterior orbitofrontal have difficulty with assessing indirect consequences as distinct from immediate consequences.
5.17:  Adaptive advantage of flexibility and response to change
The adaptive advantage of the emotional system is that responses to situations do not have to be pre-specified by the genes, but can be learned from experience. If evolution had attempted to specify fixed responses for every possible stimulus, there would have been an unmanageable explosion of programmes. The reinforcer defines a particular goal, but does not specify any particular action. The orbitofrontal is also suggested to be involved in amending responses to stimuli that used to be associated with rewards, but are no longer linked to these. Three groups of neurons in the orbitofrontal provide computation as to whether reinforcements formerly associated with particular stimuli are still being obtained. These neurons are involved in altering behavioural responses. The orbitofrontal computes mismatches between stimuli that are expected and stimuli that are obtained, and changes reward representations in accord with this. This rapid reversal of response carries through from the orbitofrontal to the basal ganglia. Damage to the orbitofrontal impairs the ability to respond to such changes, and is associated with irresponsible and impulsive behaviour, and difficulty in learning which stimuli are rewarding and which are not. Patients who have suffered damage to the orbitofrontal have difficulty in establishing new and more appropriate preferences, and in daily life they tend to manifest socially inappropriate behaviour. In particular, there is greater difficulty in dealing with indirect or longer-term consequences of actions than with direct and immediate consequences.
5.18:  Visceral responses and emotions
The orbitofrontal and amygdala act on the autonomic and the endocrine systems when stimuli appear to have significance in terms of emotion or danger. Visceral responses as a result of this signalling are fed back to the brain. Studies suggest that visceral responses are integrated into goal-directed behaviour via the ventromedial prefrontal cortex (VMPFC). The insula and the orbitofrontal are also thought likely to map visceral responses, with feedback from the viscera influencing reward assessment via levels of comfort or discomfort. There is considerable support for the idea that the body is the basis of all emotion.
However, this looks difficult to square with the actual structure and nature of brain processing. While the bodily responses can certainly be seen to play a role, it is hard to see why all visual and auditory inputs, and the results of cognitive processing, should have to wait on the laborious responses of the viscera, especially as it is the reward assessment areas of the brain that signal the viscera in the first place. If bodily emotion were the whole story, the orbitofrontal and amygdala would seem to be in a state of suspended activity between sending a signal to the autonomic system and getting signals back from the viscera. Conventional thinking may here have been biased by the emphasis in experimentation on fear in animal subjects, where bodily reactions are pronounced, rather than the more evaluative emotional activity emphasised above. In the specific case of rapid phobic reactions in the amygdala, the idea fails completely. The more plausible view is that visceral responses are one aspect of many responses that are integrated in the orbitofrontal. It seems more likely that, in line with most brain processes, there is a complex feed-forward and feedback between all parts of the system, including the viscera and the orbitofrontal. The body-only theory seems to depend on a simple feed-forward mechanism, which is alien to how brain processing works.
5.19:  Dorsolateral prefrontal
The orbitofrontal projects not only to the basal ganglia but also to the dorsolateral prefrontal, which is responsible for executive functions, planning many steps ahead to obtain rewards, and such decisions as deferring a short-term reward in favour of a higher-value but longer-term reward. Where dorsolateral activity reflects preferences, it is found that the orbitofrontal has reflected them first, and these preferences have been projected from the orbitofrontal to the dorsolateral, where they can be utilised for planning or for deciding whether or not to defer short-term rewards. In these instances, the reward-assessing functions of the orbitofrontal and the integrative role of the basal ganglia play an important part. It has been argued that ‘moral-based’ knowledge generated by rewards and punishers cannot be acquired without the orbitofrontal. Ethically based rewards for good or appropriate behaviour that are decided on by the dorsolateral are seen to be influenced by processing in the orbitofrontal. In the basal ganglia, the emotional evaluation of the orbitofrontal is combined with inputs from other cortices and the limbic areas (41.). The basal ganglia can be viewed as a sort of mixer tap for the wide spread of inputs from the cortex and limbic system, and as such select or gate for material processed by the cortex, including the orbitofrontal. The basal ganglia comprise a region of the brain with strong projections from most parts of the cortex and also the limbic system. Modern brain theory views the basal ganglia as important for the choice of behaviours and movements, both as regards activation and inhibition of these. The striatum, which includes the nucleus accumbens, is the largest component of the basal ganglia, receiving projections from much of the cortex, and also receiving dopamine projections from the midbrain. The basal ganglia are sensitive to the reward characteristics of the environment, and operate within a reward-driven system based on dopamine. The region is seen as integrating sensory input, generating motivation and also releasing motor output.
Incoming stimuli from the environment to the brain are always excitatory. The thalamus receives the incoming signals, and sends them forward to the cortex for processing. This is also primarily excitatory, as are further projections to the frontal cortex. The basal ganglia are seen as important for inhibition. Cortical-subcortical-cortical loops are widespread in the brain. In these loops, the cortical inputs are always excitatory, with the subcortical for the most part inhibitory. The subcortical areas are seen to project back to the cortex, and to modulate the cortical inputs. They are indicated to have a role in deciding what information is returned to the cortex. Each loop originates in a particular area of the cortex, such as the orbitofrontal and the anterior cingulate. Inhibitory output going back via the thalamus assists the focusing of attention and action. The basal ganglia gate or select for elements of the processed information used by the cortex. Novel problem solving requires interaction between the prefrontal cortex, other parts of the cortex and the basal ganglia.
Striosomes, matrisomes & TAN cells:  Striosomes are the area of the basal ganglia involved in modulating emotional arousal. The basal ganglia include the striatum, which contains neurochemically specialised sections called striosomes that receive inputs mainly from limbic system structures, such as the amygdala, and project to dopamine-containing neurons in the substantia nigra. This is seen as giving them a role in dealing with the input of emotional arousal into the basal ganglia (Graybiel, 1995). Certain regions in the cortex, and notably areas involved with emotion such as the orbitofrontal cortex, the paralimbic regions and the amygdala, all project to the striosomes (Eblen & Graybiel, 1995). This is seen as constituting a limbic-basal ganglia circuit. The role of the striatum may be to balance out a variety of sometimes conflicting inputs from different parts of the prefrontal and the limbic areas, and to switch behaviour in response to these inputs.
5.21:  TANs (tonically active neurons)
In the mid 1990s researchers discovered specialised neurons referred to as tonically active neurons (TANs) that are situated where matrisomes and striosomes meet, and are therefore well placed to integrate emotional and rational input. Cortical areas involved with anticipation and planning project to areas in the striatum known as matrisomes. These are often found in close proximity to the striosomes. This is taken to suggest a link between the planning-related matrisomes and the limbic-related striosomes. TANs can be seen as a form of mixer tap for combining planning and emotional assessment inputs in the basal ganglia. TANs respond strongly to reward-linked stimuli, and they also respond when a previously neutral stimulus becomes associated with a reward. TANs are thought to be involved in the development of habits, with particular environmental cues having emotional meaning, and producing particular behaviour. These cells have a distinct pattern of firing when rewards are delivered during behavioural conditioning (Asoki et al, 1995). It is suggested that changes in TAN activity could be a way of redirecting information flow in the striatum.
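The ‘mixer tap’ or gating picture sketched in this section can be caricatured as selection by disinhibition: competing channels arrive with cortical and limbic weightings, everything is held under tonic inhibition by default, and only the best-supported channel is released. The channel names and numbers below are invented purely for illustration and carry no claim about the actual circuitry of the striatum or of the TAN cells.

# Schematic 'selection by disinhibition' sketch of basal ganglia gating.
def select_channel(cortical_salience, limbic_value, tonic_inhibition=1.0):
    # Combine cortical and limbic support per channel, then release only the
    # strongest channel, and only if its support exceeds the tonic inhibition.
    combined = {ch: cortical_salience.get(ch, 0.0) + limbic_value.get(ch, 0.0)
                for ch in set(cortical_salience) | set(limbic_value)}
    winner = max(combined, key=combined.get)
    released = combined[winner] > tonic_inhibition
    return (winner if released else None), combined

cortical = {"reach for cup": 0.4, "keep typing": 0.7, "stand up": 0.2}
limbic   = {"reach for cup": 0.7, "keep typing": 0.1, "stand up": 0.1}
choice, support = select_channel(cortical, limbic)
print("support per channel:", support)
print("released behaviour :", choice)   # the cup wins once the limbic value is added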
5.22:  Dopamine
The neurotransmitter dopamine is involved in delivering the reward system for which the orbitofrontal and other areas act as predictors. The largest concentrations of dopamine in the brain are found in the prefrontal cortex and the basal ganglia. The dopamine system is based in the ventral tegmental area of the midbrain. The dopamine-producing neurons in the midbrain appear to be influenced by the size and probability of rewards, presumably based on information from areas such as the orbitofrontal and the amygdala. Dopamine projections are mainly to the nucleus accumbens, the amygdala and the frontal cortex. This is the brain’s reward circuitry. The ventral striatum is highly active in anticipation of reward, and remains active during the reward. It is believed to modulate motivation, attention and cognition. Impairment of this area creates a wide range of problems. Within the striatum, learning is influenced by dopamine acting on medium spiny neurons, reducing inhibition and releasing or increasing output of activity. By contrast, reduced levels of dopamine lead to increased inhibition and reduced activity.
Reward/pleasure centre – nucleus accumbens:  The nucleus accumbens is part of the ventral striatum and constitutes the reward/pleasure centre of the brain. The orbitofrontal and anterior cingulate both project to the nucleus accumbens. Dopamine-based activity in the nucleus accumbens is related to seeking reward and avoiding pain. Addictions are found to be related to a lack of natural activity in this area, with drugs of addiction working to enhance otherwise depressed activity. It has further been suggested that the use of neuromodulators by-passes the need to always rely on cognitive computation in the cortex. From the point of view of consciousness studies, it is apparent that these dopamine rewards are registered in subjective consciousness, so as with the orbitofrontal there is again a weighting of different subjective impulses. The orbitofrontal would look to base its predictions on the previous subjective experience of the delivery of dopamine rewards.
The nature of emotional evaluation in the brain discussed above leads on to the vexed question of free will. The field of conventional consciousness studies has been almost unanimous in rejecting the concept of freewill in favour of human behaviour being completely deterministic. We do not have to look very far to find the explanation for this counter-intuitive notion. Conventional thinking about consciousness is based on classical/macroscopic physics, which is entirely deterministic and has no place for anything outside a direct sequence of cause and effect. The high degree of confidence expressed in deterministic explanations rests on this assumption. Once we begin to think that classical physics might not have the full explanation for consciousness, the assurance of determinism looks to be shaky. This in itself may partly explain the furious resistance to the involvement of non-classical physics in brain processing. Recent studies of the processing of emotion in the brain do not accord well with the deterministic thesis, albeit not many have yet come to terms with this. The workings of the emotional brain provide something that can only be experienced in terms of subjective scenarios, not apparently reducible to specific weightings or to algorithms that give precise and deterministic predictions.
Another way of approaching the problem of deciding between two alternative courses of action is to look at what happens when we make a list of points in favour of both courses of action. While this may somewhat clarify the mind, we are still likely to find that something is missing, something which will ultimately need to be bridged by an emotional evaluation. Whether such subjectively based decisions or influences can be described as ‘free’ is hard to say. They certainly look to lie outside the classical-based neuroscience which is the usual diet provided for us, but whether they represent ‘free’ agency is another matter. As something not derived from algorithms, they can however be seen as deriving from the same fundamental level of the universe that can, over some extended period of time, give a pattern to the apparently randomly arising position of particles. It is perhaps beyond us at the moment to say what it is that takes this sort of decision, but what such decisions have in common with emotional evaluation is that they cannot be described by an algorithm. In the discussion below, we look at various studies that disagree with the conventionally deterministic working of the brain. The psychiatrist Jeffrey Schwartz (42-45.) argues that the exercise of the conscious will can overcome or reduce the problems of obsessive-compulsive disorder (OCD). This disorder leads to repetitive behaviour, for example repeated unnecessary hand-washing. The patient is aware that their behaviour is unnecessary, but has a compulsive urge to persist with it. This behaviour is related to changes in brain function in the orbital frontal cortex (OFC), anterior cingulate gyrus and basal ganglia, all areas related to emotional processing (Schwartz 1997 a&b), (Graybiel et al, 1994), (Saint-Cyr et al, 1995), (Zald & Kim, 1996a&b). The patients are able to give clear subjective accounts of their experience that can be related to cerebral changes as revealed by scanning. Thus the sight of a dirty glove can cause increased activity in the orbital frontal and anterior cingulate gyrus. There is also increased activity in the caudate nucleus, a part of the basal ganglia that modulates the orbital and the anterior cingulate (Schwartz, 1997a, 1998a). The basal ganglia have been shown to be implicated in OCD (Rauchen, Whalen et al, 1998). The striatum, which is part of the basal ganglia, contains neurochemically specialised sections called striosomes that receive inputs mainly from limbic system structures, such as the amygdala, and project to dopamine-producing neurons in the substantia nigra. The prefrontal cortex, which is seen as the prime area for assessing environmental inputs, also projects to this area. The densest projections come from the orbitofrontal and the anterior cingulate (Eblen & Graybiel, 1995). At the same time, brain regions involved in anticipation, reasoning and planning project to areas in the striatum known as matrisomes. These are often found in close proximity to the striosomes. This is taken to suggest a link between the prefrontal-related matrisomes and the limbic-system-related striosomes. Highly specialised cells located at striosome/matrisome borders, known as tonically active neurons (TANs), look to be a kind of mixer tap to balance the inputs from cognition, emotion and most parts of the cortex. These cells have a distinct pattern of firing when rewards are delivered during behavioural conditioning (Asoki et al, 1995).
It is suggested that changes in TAN activity could be a way of redirecting information flow in the striatum. TANs could produce new striatal activity in response to new information. Schwartz states that studies of patients who learned how to alter their behaviour by the apparent exercise of their conscious will showed significant changes in the activity of the relevant brain circuits. Anticipation and planning by the patient can be used to overcome the compulsions experienced in OCD. Patients are able to learn to change their behaviour while the OCD compulsions still occur. Successful patients are active and purposeful, not passive, during the process of their therapy. The actual feel of the OCD compulsions does not usually change during the early stages of treatment. The underlying brain mechanisms have probably not changed, but the response of the patient has begun to change. The patient is learning to control the response to the compulsive experience. To make a change requires mental effort by the patient at every stage. New patterns of brain activity have to be created for the patients to be aware that they are getting a faulty message when they get an urge to carry out some compulsive behaviour. At the same time the patients have to refocus on some more useful behaviour. If this is done regularly, it is suggested that the gating in the basal ganglia will be altered. It is suggested that the response contingencies of the TANs alter as a result of the patient frequently resisting the compulsive urges. This presumably reflects projections from parts of the cortex to the TAN cells within the basal ganglia. What are we to make of this study, in the light of what recent neuroscience is telling us about the emotional brain? In the first place, the disorder is related to problems in the emotional areas of the brain in the form of the orbitofrontal and the anterior cingulate. These are at least in part the areas that push the patient towards hand washing or some other repetitive behaviour. However, the orbitofrontal at any rate is capable both of changing its assessment and of evaluating the choice between conflicting rewards. As a purely speculative suggestion, rational-based inputs, most likely from the dorsolateral, could change the emotional weighting of particular actions, so that the subjective feel-good factor of another hand washing might be balanced out by the feel-good factor of overcoming the compulsion. Projections from the orbital frontal and other regions could in turn shift the balance of inputs to the TAN cells, at the rational/emotional juncture between striosomes and matrisomes.
Studies by the psychologist Carol Dweck suggest that subjects who believe they can influence their academic performance (referred to as incremental theorists) perform better than students who are convinced that their performance is preordained (entity theorists) (Dweck & Molden, 2005; Molden & Dweck). Entity theorists tend to withdraw effort and avoid tasks once they have failed. Incremental theorists attempt an improved approach to a problem task. In a study (Blackwell et al, 2007), in which entity and incremental theorists started a high school maths course with the same standards, the incremental students soon pulled ahead, with the gap continuing to widen over the duration of the course. This distinction was related to the incrementalists’ willingness to renew efforts after a setback.
A further study (Robins and Pals, 2002) showed that during their college years, entity theorists had a steady decline in their feeling of self-worth, relative to incremental theorists. Other studies (Baer, Grant & Dweck, 2005) linked some cases of depression to self-critical rumination on supposedly fixed traits by entity theorists, and suggested that incremental theorists had greater resilience to obstacles, were more conscientious in their work, and more willing to attempt challenging tasks. This suggests a role for conscious will or effort to act in a causal way on brains that initially had the same quality of rational problem solving so far, to leave them with different qualities by the end of a period of study. The essential distinction in the academic performance is that when the incrementalists suffered a setback, they did not accept this as the final judgement on their performance. This looks to point to a subjective assessment of two scenarios, the easy but disappointing scenario of giving up, and the demanding but more satisfying strategy of trying again. The second is absent in the entity theorists because they ‘know’ that they can’t achieve more than a modest performance. The psychologist, Roy Baumeister (47.), examines the reason for the scientific and psychological consensus against the existence of freewill. He suggests a metaphysical element in this, with some scientists feeling that rejection of freewill is part of being a scientist. The fact that Libet and similar experiments have shown that actual movements of the body are not driven by free will is acknowledged, but Baumeister points to researchers such as Gollwitzer (1. 1999), who distinguishes between the decision to act and the action or movement itself. It is suggested that free will may have a role in the deliberative stage. For instance, free will could govern the decision to go for a walk, but the actions of getting up, going out the door and putting one foot in front of the other would be unconsciously driven. Self-control, such as the ability to resist short-term benefits in favour of long-term goals and also rational choice based on deliberative thinking are here seen as two of the most important factors associated with freewill. Baumeister argues that reasoning entails at least a limited degree of freewill in that people can alter their behaviour on the basis of reasoning. Similarly self-control equates to the ability to alter behaviour in line with some goal. Decisions such as these can certainly be related to emotional evaluation in the orbital frontal and other regions. Baumeister cautions that the ability of modern technology to study periods of milliseconds may have blinded some researchers to the importance of processes that take extended periods of time. He wonders why people agonise over decisions if they actually have no influence on them, and also suffer negative stress effects in situations where they lack control over their lives. The implication is that the use of time and energy on such a process should have been selected out by evolution if it had no relevance. The author argues that while researchers such as Wegner have shown that people are sometimes not aware of the causes of their actions, that is very different from saying that they never determine their actions. The consensus against freewill has set the bar as high as possible in denying that freewill ever has any influence or exists at all. 
They have to show that none of the apparent occurrences of freewill are real, rather than just producing scattered examples of freewill being an illusion, some involving rather contrived conditions. Baumeister argues for the efficacy of freewill. In particular, studies show that the processes of both self-control and rational choice deplete glucose in the bloodstream, leading to a deterioration in subsequent performance. It appears unlikely that evolution would have selected for such a high-energy process if it was not efficacious. Consciousness is closely associated with freewill, and these studies therefore carry a strong implication that consciousness itself is also a physical thing or process involving energy and being efficacious. In Baumeister’s own experimental studies, he found that the performance of self-control tasks deteriorated if there had been previous self-control tasks. The implication of this is that some resource is used up during the exercise of self-control. The exercise of choice seems to have the same effect. Subsequent to the exercise of either self-control or choice, attempts to exercise further self-control saw performance deteriorate, in a way that did not occur when participants were just thinking or answering questions. This suggests that self-control and rational choice both draw on some form of energy. Gailliot et al (2007) (47.) found that self-control caused reductions of glucose in the bloodstream, and that low levels of glucose were correlated with poor self-control. This finding has important implications for the freewill argument. If free choice was only some form of illusion, it is not clear why it would be adaptive for evolution to select for something that consumed a lot of energy, but had no influence on behaviour. There is a rather convoluted suggestion that we have the illusion of freewill because that makes us think that others have freewill and should therefore be punished if they do not make choices that are favourable to the group. There are two problems with this approach. The Baumeister study showed that the same depletion of energy that occurred with the exercise of free will in the sense of self-control also occurred with the exercise of choice not requiring any particular restraint on impulses. The physical process of choosing, often relating to individual or private matters, looks to go far beyond simple approval of the actions of others. Further to this, if freewill is really just a charade, it is surprising that it should require such a noticeable amount of energy. In fact, the assessment of the positive or negative effects of the actions of other members of the group looks to be more easily accessible to an algorithm-based process. There is perhaps a deeper implication, not discussed in this article, that consciousness, which is closely related to the experience of free choice, is itself a physical thing or process requiring energy. This should not be a surprise given the nature of the physical laws, but at the moment it looks to be contrary to the scientific consensus. The high energy cost of freewill suggested here also serves to explain why conscious, as distinct from unconscious, processing is used only sparingly, and that is one reason why we rely on unconscious responses for much of our activities.
5.24:  Free won’t
An area of the basal ganglia known as the subthalamic nucleus (STN) is important from the point of view of the freewill debate.
Benjamin Libet (48-51.), whose experiments indicated that some minor 'voluntary' movements were initiated before subjects were consciously aware of wishing to move, postulated that there could be a 'free won't' mechanism that blocked actions that began unconsciously, but were later determined to be inappropriate by the conscious mind. Recent studies show that the subthalamic nucleus does have an inhibitory role in stopping behaviours whose execution has already begun.

The scientific consensus against freewill has created some anxiety that as this 'knowledge' gradually leaks from the laboratory into the popular mind there will be a deterioration in public behaviour. Ingenious arguments have been advanced against this, but studies suggest that we should fear such a deterioration. Vohs & Schooler (2008) (52.) found that participants who had read a study advocating the non-existence of freewill were more likely than controls to take advantage of an opportunity to cheat in a subsequent maths test. Other studies by Baumeister et al. showed that participants encouraged not to believe in freewill were more aggressive and less helpful towards others.

At this stage, we might think we have covered enough ground to try to put together a theory of consciousness that has explanatory power, and is not obviously at variance with what we know about physics, neuroscience or evolution. We have tried to define consciousness as our subjective experience, or as the fact of it 'being like something' to experience things. Consciousness also involves our subjective awareness of the real or apparent ability to subjectively envisage future scenarios, and to use these for our choice of actions. I have further suggested that there is only one problem with consciousness, the problem of how qualia or subjective experience arises, and that we have to address this and essentially only this in discussing consciousness.

We have examined theories of consciousness that operate within the context of classical physics, and always come up against essentially the same explanatory gap. Classical physics gives a full explanation of the relationships of macroscopic matter, without any need for consciousness, and also without any ability to generate consciousness. This creates a problem as to how the brain can generate consciousness, given that neuroscience describes the brain in terms of the macroscopic matter made up of carbon, hydrogen, oxygen and other atoms, the relationships of which can be described without either requiring or generating consciousness. The failure to find a theory with satisfactory explanatory power within classical physics pushes us towards identifying consciousness as a fundamental or given property of the universe.

What does this really mean? Explanation in science works by breaking things down into their components and the forces or processes that make them function. But this downward arrow of explanation does reach a floor. Mass, charge, spin and the particular strengths of the forces of nature are given properties of the universe that are not reducible to anything else and come without any explanation. Because consciousness has a similar lack of explanation, it is similarly suggested to be a fundamental property. This is only a start. In itself it tells us nothing about how such a fundamental manifests in the brain. Rather than having a solution, we are only at the beginning of a very difficult journey towards something with explanatory value.
Not only do we have to discover some system that is truly fundamental, but, given the lack of apparent consciousness in the rest of the universe, we need a process that is unique in operating only in brains, and not in other physical systems. Quantum consciousness is really a misnomer for the sort of system that we are looking for. The philosopher David Chalmers was correct in pointing out that there was no more reason for consciousness to arise from quanta than there was for it to arise from classical structures. Both permeate the universe outside of the brain without producing consciousness. The quanta and their behaviour are only of interest if they can allow the brain access to a fundamental property not apparent in other matter.

This brings us also to the question of what really is fundamental. There are two sides to this question: the quanta and spacetime. The quanta are the fundamental particles/waves of energy, which also equates to the mass of physical objects. Some quanta such as the proton and the neutron are composed of other quanta, so are not truly fundamental or elementary. The quarks that make up the protons and neutrons of the nucleus of the atom and the force-carrying particles such as photons appear to be the most fundamental quanta. But the quanta cannot be understood in isolation. They must be seen as having some form of relationship to spacetime, and that's a more difficult area than might appear at first sight. Neither quantum theory nor relativity, which is our theory of spacetime, has ever been falsified, but they are, nevertheless, incompatible with one another. Many physicists are coming round to the notion that spacetime is not an abstraction but a real thing, and also something that is not continuous, but discrete, and perhaps best conceived in the form of a web or network. They are divided as to whether the quanta create spacetime, or spacetime generates the quanta, or the third possibility that the two are expressions of something more fundamental. However, whatever form it is conceived to take, the concept of a real and discrete structure also allows the possibility of some form of pattern or information capable of decision making, and this is the level of the universe where we need to look for an explanation of consciousness.

There are two routes leading to the conclusion that consciousness has to derive from such a fundamental level of the universe. In addition to the view that classical physics simply can't cut it in respect of consciousness, there is the Penrose approach via the function of consciousness. As described earlier, he proposed that the Gödel theorem meant that human understanding or consciousness could perform tasks that no algorithm-based system such as a computer could perform. This led to an arcane dispute with logicians and philosophers which few lay people can follow. However, I think it unnecessary to penetrate into such an arcane area. At a much more mundane level, the process of choosing between alternative forms of behaviour or courses of action by means of subjective scenarios of the future looks to also invoke a process that cannot be decided by algorithms. This suggestion is now supported by recent studies showing that in the orbitofrontal region of the brain some activity correlates with subjective appreciation rather than with the strength of the signal, whereas in other parts of the brain not involved with preferences, activity correlates with the strength of this same signal.
So while Penrose provides the original inspiration for the idea of an aspect of the universe that could not be derived from a system of calculations, it seems possible to simplify or streamline the original inspiration in a manner that is compatible with recent brain research and not open to the same sort of attacks from logicians and philosophers. In a similar way, it may be possible to simplify Penrose's proposal of a special type of quantum wave function collapse as the gateway to conscious understanding, seen here as an aspect of fundamental spacetime geometry. Penrose dismissed the randomness of the conventional wave function collapse as irrelevant to the mathematical understanding in which he was initially interested, and instead proposed a special form of objective wave function collapse, which was neither random nor deterministic, but accessed the fundamental spacetime geometry. His proposal as to wave function collapse is currently the subject of experimental testing, although this is a procedure that is likely to take up to a decade.

Again the question is whether it is necessary to go to such lengths. Might there be a way around the apparent randomness that led Penrose to dismiss conventional wave function collapse? Might not the more conventional wave function collapse, or alternatively decoherence, equally well provide access to the fundamental and conscious level of the universe? There are queries as to how random the randomness is. In one form of the famous two-slit experiment, single photons arrive at a screen over some extended period of time. The initial photons register on the screen in apparently random positions, but as later photons arrive the familiar light and dark bands form. Somehow the later photons, or perhaps the earlier photons, 'know' where to put themselves. There is a suggestion that this puzzle links to one of the other puzzles of quantum theory, namely entanglement, by which the quantum properties of particles can be altered instantaneously over any distance. In this suggestion, the photons in the two-slit experiment are entangled with other distant quanta. Whatever it is that decides the position of these particles in this scheme has no apparent explanation in terms of algorithms or systems of rules for calculating, and this is something that it holds in common with choice by emotional valuation.

But how could such a mechanism related to the fundamentals of distant space arise within our brains? Penrose's collaborator, Stuart Hameroff, proposed a scheme by which quantum coherence arose within individual neurons and then spread throughout neuronal assemblies. Most commentators on consciousness believe that this theory can be straightforwardly refuted because of the rapid time to collapse or decoherence for quantum states in the conditions of the brain. However, this simplistic approach has in effect been partly refuted by the discovery of functional quantum coherence in biological systems during the last few years, initially in simple organisms subsisting at low temperatures, but most recently at room temperature and in multicellular organisms. Moreover, it is now apparent that the structures of aromatic molecules within the amino acids of individual neurons are similar to those within photosynthetic organisms now known to use quantum coherence.
The structures that support quantum states in photosynthetic systems rely on the pi electron clouds discussed in earlier sections; in microtubules, the amino acid tryptophan supports the same structure of pi electron clouds, which thus looks potentially capable of sustaining quantum coherence and entanglement through significant sections of a neuron. The mechanisms by which quantum coherence could subsist in neurons look here to be within our grasp or understanding. But as with the original Penrose proposal, Hameroff's scheme may be more ambitious and therefore more open to criticism than it needs to be. Where quantum states have been shown to be functional they subsist for only femtoseconds or picoseconds, whereas the Hameroff scheme requires quantum coherence to be sustained for an ambitious 25 ms; moreover, it has to be sustained over possibly billions of neurons spread across the brain. This lays it open to attack from many angles. It looks much more feasible to work from the basis of quantum coherence that exists in other biological systems and to look for similar short-lived single-cell processes in the brain. The known systems of functional quantum states that subsist within individual cells elsewhere in biology look to have the potential to exist within neurons. It is thus much more feasible, in the absence of countervailing evidence, to work on the basis of consciousness arising within individual neurons. This effectively inverts the Hameroff scheme. Rather than neurons feeding into the global gamma synchrony, the synchrony, which is certainly correlated with consciousness, may be a trigger to conscious activity in neurons.

Recent studies give credibility to the idea of consciousness in single neurons. Experimentation has shown that increased activation in single neurons is correlated with particular percepts. Some neurons are selective in only responding to particular images, and activity in these is correlated with the conscious experience of those images. Of course it isn't as simple as that. With 100 billion neurons in the brain, and perhaps a good percentage of these selecting for particular images, there has to be some way of coordinating their activity. It is initially puzzling that the same type of experiments that show a correlation between consciousness and individual neurons also show a correlation between the global gamma synchrony and consciousness. So which of these produces consciousness, the individual neurons or the gamma synchrony? Recent studies suggest that activity in individual neurons correlates with the gamma synchrony when a number of the neuron's neighbours are also active. This agrees with studies showing 'hot spots' of activity in the brain also correlated with consciousness. Here we are perhaps left with the concept that the brain is a gate to the fundamental level of the universe, in the literal sense of a mechanism that allows or prevents entry. All of this may seem very speculative, but against this has to be set the lack of explanation in classical physics for the 'something it is like' or the ability to have choice or preference that we find in consciousness.
Tuesday, 26 September 2017

Update of Real Quantum Mechanics: Electron vs Kernel

I have made a discovery resolving an issue with poor correspondence between theory and observation in the new approach to quantum mechanics termed realQM presented here and here and here.

In the original setting of realQM, the same set-up was used as in the standard version of quantum mechanics based on the Schrödinger equation as concerns the kernel, which is assumed to act like a point source with no extension, with a corresponding potential $-Z/r$, where $Z$ is the kernel charge and $r$ the distance to the kernel, thus with a singularity at the kernel at $r=0$. In this setting realQM gave a ground state energy for helium (with two electrons meeting the kernel) of about -3.0, which was substantially lower than the observed -2.9034. Something was thus wrong with realQM in this original form, and I could not figure out what.

I have now understood that this mismatch comes from the kernel singularity which, like all singularities, introduces a dark horse into the model, which has to be handled properly so as not to lead astray. It is thus natural to give the kernel a positive radius and study the dependence of the ground state energy on the kernel radius. The question of the boundary condition for the electron as it meets the kernel at a positive radius then comes up, something which is hidden if the radius is zero. Recalling that the boundary condition on the free boundary separating different electrons is a homogeneous Neumann condition, it is natural to try the same condition for the kernel, understanding that it requires the kernel to have positive radius. An alternative is to use a Robin boundary condition of the form $\frac{\partial\phi}{\partial r}=-Z\phi$ at a positive radius. This is the effective condition at zero radius built into the Schrödinger equation with a point source kernel. And indeed, both approaches seem to work (very similarly), as recorded in the above references.

More specifically, the kernel radius (which comes out to be small, of size 0.05 - 0.01 atomic units for kernel charge 2-10) can be used as a model parameter, which can be adjusted to give exact agreement with observations as a calibration of the realQM model for two-electron ions, which can serve to build a model with more electrons in outer shells. The model of realQM thus opens the inner mechanics of an atom to inspection, including information on the effective radius of the kernel as seen by an electron in the innermost shell, something which is hidden from direct experimental observation. We recall that standard quantum mechanics stdQM does not offer a physical model of the atom, and thus with stdQM the inner mechanics of an atom is closed to human understanding, a defect made into a virtue in the Copenhagen interpretation of stdQM filling textbooks.

3 comments:

1. The inner mechanics of an atom? Do you mean the mechanics of the protons and neutrons in the kernel?

2. More the mechanics of the electrons with mutual interaction and with the kernel.

3. Would a physical model of the atom include geometrical distribution of the internal interactions within the atom structure? I have an idea about the internal energy interactions based on my model of earth, mars and venus heat flow, which uses the shell theorem with both volume and surface area of the sphere to find exact solutions to surface temperature. Including thermodynamic work in the form of gravity. I think there might be a similarity between the atom and planets.
They can maybe be treated as particles, both of them. You don't seem fond of my comments though, since you don't let them through. I wonder why? I feel there is way too much censorship in these "scientific" blogs. People in the academic world just want to promote themselves everywhere.
Louche Part 2: Quantum Tunneling

(Continued from Louche Part 1: Feed Stock for Energy Beings)

I started Louche Part 1 with a definition of the word louche, so I want to also define the term Quantum Tunneling, since it's not a word that anyone would use in casual conversation…unless that someone is talking to me over coffee and croissants, in which case, there would be other strange words thrown into the conversation. But I digress…

Quantum Tunneling

Tunneling is often explained using the Heisenberg uncertainty principle and the wave–particle duality of matter. [1]

In Taobabe-speak, quantum tunneling is a process where particles are fired at a barrier repeatedly so that they bounce back from the barrier. Once in a random while, the wave portion of the particle continues forward and jumps through the barrier, which creates a measurable voltage on the other side. As the waves randomly jump across the barrier, the voltage on the other side also randomly fluctuates. A sampling of that voltage can then be used to generate random data. Aggregate this sampling over a lengthy period of time and a truly random noise source can be obtained via this equation, appropriately named the Schrödinger equation.

The Schrödinger equation:

$$-\frac{\hbar^2}{2m} \frac{d^2}{dx^2} \Psi(x) + V(x) \Psi(x) = E \Psi(x)$$

$$\frac{d^2}{dx^2} \Psi(x) = \frac{2m}{\hbar^2} \left( V(x) - E \right) \Psi(x) \equiv \frac{2m}{\hbar^2} M(x) \Psi(x),$$

where $\hbar$ is the reduced Planck's constant, m is the particle mass, x represents distance measured in the direction of motion of the particle, Ψ is the Schrödinger wave function, V is the potential energy of the particle (measured relative to any convenient reference level), E is the energy of the particle that is associated with motion in the x-axis (measured relative to V), and M(x) is a quantity defined by V(x) – E which has no accepted name in physics. [1]

In this graph, you can see that the wave being measured is less defined, and certainly weaker than the wave being bounced back from the barrier, but it, nevertheless, continues to exist, and is actually measurable. This measurable wave on the other side of reality is what encryption scientists use to create passwords that are almost impossible to crack. It is, in fact, the basis for quantum encryption and what will be used in the future to keep networks secure.

This all sounds great and cool, but you might be wondering why I am calling attention to random data generation, and what, if anything, it has to do with louche. Well wouldn't you like to know? As luck would have it, a bunch of scientists have been on this quest for finding louche for a very long time. From the very start, these really smart folks were already on the forefront of trying to find measurable louche.

Since louche is described as energies created by human emotions, scientists from Princeton University's PEAR lab and the Institute of Noetic Sciences began experiments to look at how deeply our minds are connected to the fabric of physical reality. Based upon results from the data they have published, there is an interesting connection to our minds and how we can have an unexplained ordering effect on chaotic systems. The Global Consciousness Project has been showing that a few dozen such systems, called random number generators, spread around the world can produce anomalies when global events happen that polarize human attention. Here is a graph that was generated during the attacks of 9/11.
As you can see, the energies of the world, at the point of the attack, spiked in between the first crash and the second crash, and continued to spike as the towers began to collapse. According to scientists at the Institute of Noetic Sciences, the odds that the combined Global Consciousness Project data are due to chance are less than one in one hundred billion. The implication is that there's some deep connection between the mind and physical reality, which we don't yet fully understand. [2]

Jumping on the bandwagon of the work done by the Institute of Noetic Sciences, a company called Entangled is in the process of creating an app that you can download onto your phone. When downloaded, the app converts hardware functions on your phone into a physical random number generator, which then uses your personal emotions and thoughts to transform into your very own mind meter. The information is then uploaded and fed into a database which is supposed to be able to keep track of what they call a consciousness technology. [3]

Personally, I'm not all that convinced that an app on my phone would do anything all that useful other than broadcast to some private entity my exact location at all times. Think about it. Using the phone's hardware does not allow for the quantum tunneling aspect of randomization. That makes this random number generator NOT a true random generator.

Now, all this is interesting, but it still does not answer my question of HOW generating random numbers translates into louche. To get at the answer to the connection between these two entities will require that I dig into some geometry of spacetime. It is a big subject, and one that took me a while to grasp, as I am rather lazy and tend to put off thinking about mathematical constructs until such time as I can no longer put it off and have to think about it. Since this post is getting rather lengthy, I'll address the HOW in my next posting.

(Continue to Louche Part 3: Third Density Barrier)

[1] Wikipedia Quantum Tunneling
[2] Institute of Noetic Sciences
[3] Entangle Consciousness App
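To make the barrier-penetration idea above a little more concrete, here is a minimal sketch (mine, not from the original post) that evaluates the standard textbook transmission probability for a particle of energy E meeting a rectangular barrier of height V0 and width a. The electron mass, eV-scale energies and nanometre widths are illustrative assumptions only; the extreme sensitivity of the result to barrier width is part of why individual tunneling events sampled from a real device look so random.

```python
import numpy as np

# Physical constants (SI units)
HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E = 9.1093837015e-31   # electron mass, kg
EV = 1.602176634e-19     # one electronvolt in joules

def transmission(E_eV, V0_eV, width_m, mass=M_E):
    """Transmission probability through a rectangular barrier for E < V0,
    using the standard result T = [1 + V0^2 sinh^2(kappa*a) / (4E(V0-E))]^(-1)."""
    E, V0 = E_eV * EV, V0_eV * EV
    kappa = np.sqrt(2.0 * mass * (V0 - E)) / HBAR   # decay constant inside the barrier
    return 1.0 / (1.0 + (V0**2 * np.sinh(kappa * width_m)**2) / (4.0 * E * (V0 - E)))

if __name__ == "__main__":
    # Illustrative numbers: a 1 eV electron meeting a 2 eV barrier of varying width.
    for width_nm in (0.2, 0.5, 1.0):
        T = transmission(E_eV=1.0, V0_eV=2.0, width_m=width_nm * 1e-9)
        print(f"width = {width_nm:3.1f} nm  ->  T = {T:.3e}")
```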
What Is Quantum Mechanics Good for?

Physicist James Kakalios, author of The Amazing Story of Quantum Mechanics, wants people to know what quantum physics has done for them lately--and why it shouldn't take the rap for New Age self-realization hokum such as The Secret.

Kakalios sets out to tackle both tasks in The Amazing Story of Quantum Mechanics (Gotham Books, 2010), an accessible, mostly math-free treatment of one of the most complex topics in science. To keep things lively, the author intersperses illustrations and analogies from Buck Rogers stories and other classic science fiction tales. We spoke to Kakalios about his new book, what quantum mechanics has made possible, and how early sci-fi visions of the future compare with the present as we know it. [An edited transcript of the interview follows.]

Is the purpose of this book to expose this world of quantum mechanics that people find so mysterious and point out that it's everywhere?

That's right. In fact, the introduction is called, "Quantum physics? You're soaking in it!" There are many excellent books about the history and the philosophical underpinnings of quantum mechanics. But there didn't seem to be many that talked about how useful quantum mechanics is. Yes, the science has weird ideas and it can be confusing. But one of the most amazing things about quantum mechanics is that you can use it correctly and productively even if you're confused by it.

I present in the introduction what I call a "workingman's view" of quantum mechanics and show how if you accept on faith three weird ideas—that light is a photon; that matter has a wavelength nature associated with its motion; and that everything, light and matter, has an intrinsic angular momentum or spin that can only have discrete values—it turns out that you can then see how lasers work. You can see how a transistor works or your computer hard drive or magnetic resonance imaging—a host of technologies that we take for granted that pretty much define our life.

There were computers before the transistor; they used vacuum tubes as logic elements. To make a more powerful computer meant that you had to have more vacuum tubes. They were big, they generated a lot of heat, they were fragile. You had to make the room and the computer very large. And so if you used vacuum tubes, only the government and a few large corporations would have the most powerful computers. You wouldn't have millions of them across the country. There would be no reason to hook them all together into an Internet, and there would be no World Wide Web.

The beautiful aspect to this is the scientists who developed this were not trying to make a cell phone; they were not trying to invent a CD player. If you went to Schrödinger in 1926 and said, "Nice equation, Erwin. What's it good for?" He's not going to say, "Well, if you want to store music in a compact digital format..." But without the curiosity-driven understanding of how atoms behave, how they interact with each other, and how they interact with light, the world we live in would be profoundly different.

So, to take one example, how does quantum mechanics make the laser possible?

One of the most basic consequences of quantum mechanics is that there is a wave associated with the motion of all matter, including electrons in an atom. Schrödinger came up with an equation that said: "You tell me the forces acting on the electron, and I can tell you what its wave is doing at any point in space and time."
And Max Born said that by manipulating this wave function that Schrödinger developed, you could tell the probability of finding the electron at any point in space and time. From that, it turns out that the electron can only have certain discrete energies inside an atom. This had been discovered experimentally; this is the source of the famous line spectrum that atoms exhibit and that accounts for why neon lights are red whereas sodium streetlights have a yellow tinge. It has to do with the line spectra of their respective elements. But to have an actual understanding of where these discrete energies come from—that electrons and atoms can only have certain energies and no other—is one of the most amazing things about quantum mechanics. It's as though you are driving a car on a racetrack and you are only allowed to go in multiples of 10 miles per hour. When you take that and you bring many atoms together, all of those energies broaden out into a band of possible energies. The analogy that I use is you have an auditorium with an orchestra below and a balcony above. That means to go from the orchestra to the balcony you have to absorb some energy to be promoted from the orchestra to the balcony. Now if every seat in the orchestra is filled, and you want to move from one seat to another, you can't go anywhere unless you absorb some energy and are promoted up into the balcony, where there are empty seats and you can move around. What happens in a laser is you have a little mezzanine right below the balcony. You get promoted up to the balcony but then you fall and you sit in the mezzanine. And eventually, as the mezzanine gets filled up, there's a bunch of empty seats in the orchestra, where you came from. One person gets pushed out of the mezzanine, and because of the way they talk to each other, they all go at the same time. They release energy as they fall back from the mezzanine into the orchestra, and that energy is in the form of light. Because they are all coming from the same row of seats in the mezzanine, all the light has exactly the same color. Since they all went at the same time, they are all coherently in phase. And if you have a lot of them up in the mezzanine, you can have a very high intensity beam of single-color light. That's a laser. And just as Schrödinger couldn't have had any idea about what his equation would be used for, the same could be said of the laser, which now allows us to have CDs and DVDs and a lot of other things. The same goes for the transistor. It was first developed to amplify radio signals, and you had transistor radios that replaced the vacuum tubes that were being used. Now they are also used as logic elements, 1s and 0s. If you apply a voltage to a transistor you can basically open or close a gate and allow electrons to flow through or make it very difficult for electrons to flow. And so you have two different current states, high and low, that you can call a 1 or a 0. You can combine them in clever ways to do logic operations with the 1s and 0s. You can encode information. You can develop a language of the 1s and 0s and manipulate them that way. And again, I don't think that was the first thought of the people that developed the transistor. Look at all the things that it has brought out. There are probably more transistors in a standard hospital than there are stars in the Milky Way Galaxy, when you think about all the computers and all the electronic devices that we use just for medical applications. 
So it really has transformed life in a very profound way. The real superheroes of science are a small handful of people who knew they were changing physics, but I don't think they recognized that they were also changing the future.

One of the ways you keep this book lively and accessible is to use anecdotes from early science fiction. How well have those predictions held up?

The main problem is that they believed that there was going to be a revolution in energy, which would lead to jet packs, death rays and flying cars. But what we got was a revolution in information. This information age, of course, came about because of semiconductors and solid-state physics, which were enabled by quantum mechanics.

A lot of these things go back to transistors and semiconductors. Is that in your view the biggest fundamental leap that quantum mechanics allowed us to make?

More than that, even. By discerning what were the fundamental rules that govern how atoms interact with each other and how they interact with light, you also have now a fundamental understanding of chemistry. There is a reason why the atoms are arranged the way they are in the periodic table of the elements, and it comes out naturally from the Schrödinger equation when you add in the Pauli exclusion principle. There is a really deep appreciation for why the world is the way it is. Can you imagine living in a world before quantum mechanics? We take all these things for granted. It's like the Louis C. K. YouTube clip—everything is amazing and nobody is happy.

"Quantum" is thrown around a lot as a label for things we don't understand, and we often lump a number of phenomena into the vague category of "quantum weirdness". Is that something that you'd like to see dissipate?

I would. It's used too much as a catchall. Proposing weird and counterintuitive ideas to explain observations, developing the consequences of these ideas and testing them further, and then, if they conform with reality, accepting them is not unique to quantum mechanics. It's what we call physics. Also, because it has a reputation for weirdness, quantum mechanics is used too much as a justification for things that have nothing to do with quantum mechanics. There is an expression, "quantum woo," where people take a personal philosophy, such as the power of positive thinking or let a smile be your umbrella, and somehow affix quantum mechanics to it to try to make it sound scientific.

And make a lot of money doing so.

Yeah. It kind of seems to me to be at the same level as using mathematical knot theory or topology to justify crossing your fingers when you're making a wish. It has about as much relevance and justification.
Spline Potential Eigenfunctions Model

Main Document: Spline Potential Eigenfunctions Model, written by Wolfgang Christian

The Spline Potential Eigenfunctions Model computes the Schrödinger equation energy eigenvalues and eigenfunctions for a particle confined to a potential well with hard walls at -a/2 and a/2 and a smooth potential energy function between these walls. The potential energy function is a third-order piecewise continuous polynomial (cubic spline) that connects N draggable control points. Cubic-spline coefficients are chosen such that the resulting potential energy function and its first derivative are smooth throughout the interior, with zero curvature at the endpoints. Users can vary the number of control points and can drag the control points to study level splitting in multi-well systems. Additional windows show a table of energy eigenvalues and their corresponding energy eigenfunctions.

The Spline Potential Eigenfunctions Model was created using the Easy Java Simulations (EJS) modeling tool. It is distributed as a ready-to-run (compiled) Java archive. Double clicking the ejs_qm_SplinePotentialEigenfunctions.jar file will run the program if Java is installed.

Last Modified August 5, 2013. This file has previous versions.

Source Code: Spline Potential Eigenfunctions Source Code. The source code zip archive contains an EJS-XML representation of the Spline Potential Eigenfunctions Model. Unzip this archive in your EJS workspace to compile and run this model using EJS.

Last Modified January 8, 2012
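The EJS program itself is Java, but the kind of computation it performs is easy to sketch. The following is a rough illustration (not the model's actual code): it builds a natural cubic spline potential through a handful of control points between hard walls and diagonalises a finite-difference Hamiltonian to obtain the lowest energy eigenvalues. The grid size, control-point values and the choice of units (hbar = m = 1) are assumptions made for this sketch.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.linalg import eigh_tridiagonal

# Hard walls at x = -a/2 and x = +a/2, in units where hbar = m = 1 (assumed).
a = 10.0
N = 800                                  # interior grid points
x = np.linspace(-a/2, a/2, N + 2)[1:-1]  # interior points; psi = 0 at the walls
h = x[1] - x[0]

# A smooth potential built as a natural cubic spline through a few control points,
# loosely imitating the draggable control points of the model described above.
ctrl_x = np.linspace(-a/2, a/2, 7)
ctrl_V = np.array([8.0, 2.0, 0.0, 6.0, 0.0, 2.0, 8.0])   # illustrative symmetric double well
V = CubicSpline(ctrl_x, ctrl_V, bc_type='natural')(x)

# Finite-difference Hamiltonian H = -1/2 d^2/dx^2 + V(x): tridiagonal with Dirichlet walls.
diag = 1.0 / h**2 + V
off = -0.5 / h**2 * np.ones(N - 1)
energies, _ = eigh_tridiagonal(diag, off)

print("Lowest eigenvalues:", energies[:6])   # near-degenerate pairs signal level splitting
```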
zbMATH — the first resource for mathematics

An explicit unconditionally stable numerical method for solving damped nonlinear Schrödinger equations with a focusing nonlinearity. (English) Zbl 1054.35088

Summary: This paper introduces an extension of the time-splitting sine-spectral method for solving damped focusing nonlinear Schrödinger equations (NLSs). The method is explicit, unconditionally stable, and time transversal invariant. Moreover, it preserves the exact decay rate for the normalization of the wave function if linear damping terms are added to the NLS. Extensive numerical tests are presented for cubic focusing NLSs in two dimensions with a linear, cubic, or quintic damping term. Our numerical results show that quintic or cubic damping always arrests blow-up, while linear damping can arrest blow-up only when the damping parameter δ is larger than a threshold value δ_th. We note that our method can also be applied to solve the three-dimensional Gross-Pitaevskii equation with a quintic damping term to model the dynamics of a collapsing and exploding Bose-Einstein condensate.

MSC:
35Q55 NLS-like (nonlinear Schrödinger) equations
65T40 Trigonometric approximation and interpolation (numerical methods)
65N12 Stability and convergence of numerical methods (BVP of PDE)
65N35 Spectral, collocation and related methods (BVP of PDE)
81-08 Computational methods (quantum theory)
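The entry above describes a time-splitting sine-spectral scheme. The snippet below is a rough sketch of one such splitting step for the one-dimensional damped focusing NLS, i u_t = -(1/2) u_xx - |u|^2 u - i*delta*u; it is my own illustration rather than the authors' method, and it uses an ordinary FFT (periodic boundary conditions) in place of the sine transform for brevity. The dispersive part is advanced exactly in Fourier space, and the nonlinear-plus-damping part has a closed-form pointwise solution because |u|^2 simply decays at rate 2*delta during that sub-step. All grid and parameter values are assumptions.

```python
import numpy as np

def split_step(u, dt, dx, delta):
    """One first-order splitting step for the 1D damped focusing NLS
        i u_t = -1/2 u_xx - |u|^2 u - i*delta*u,
    treating the dispersion with an FFT and the nonlinearity/damping exactly."""
    n = u.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)        # Fourier wavenumbers

    # (a) linear dispersion: i u_t = -1/2 u_xx, exact in Fourier space
    u = np.fft.ifft(np.exp(-0.5j * k**2 * dt) * np.fft.fft(u))

    # (b) nonlinearity + linear damping: i u_t = -|u|^2 u - i*delta*u, exact pointwise;
    # |u|^2 decays as exp(-2*delta*t), so the accumulated phase has a closed form.
    amp2 = np.abs(u)**2
    if delta > 0.0:
        phase = amp2 * (1.0 - np.exp(-2.0 * delta * dt)) / (2.0 * delta)
    else:
        phase = amp2 * dt
    return u * np.exp(-delta * dt) * np.exp(1j * phase)

# Illustrative use: a focusing solitary pulse on a periodic grid (assumed parameters).
L, n, dt, delta = 40.0, 1024, 1e-3, 0.05
x = np.linspace(-L/2, L/2, n, endpoint=False)
u = 1.5 / np.cosh(1.5 * x) * (1.0 + 0j)
for _ in range(2000):
    u = split_step(u, dt, L / n, delta)
print("norm after damping:", np.sqrt(np.sum(np.abs(u)**2) * (L / n)))
```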
Quantum mechanics: Suppose that there is a particle with orbital angular momentum $|L|$. But the particle also has spin quantity $|S|$. The question is, how do I reflect this in the Schrödinger equation? I do know what the Schrödinger equation becomes in each case - when a particle has a particular orbital angular momentum and when a particle has some spin - but not when both occur.

Comments:
Your question makes little sense in the context of quantum mechanics. Particles don't follow paths, specifically not circles, and spin is an intrinsic property, not one of motion. – A.O.Tell Oct 12 '12 at 9:19
@A.O.Tell Modified the question. – War Oct 12 '12 at 9:32
Just adding spin means you attach a tensor factor space containing the spin representation to the particle space. The Schrödinger equation doesn't change unless you add an interaction term that incorporates spin. Which term that is depends on your actual physical model. – A.O.Tell Oct 12 '12 at 9:36

2 Answers

Answer (accepted): The Schrödinger equation does not describe spin. If you need to describe spin as well, you should use the Pauli equation or the Dirac equation (for spin 1/2).

Comments:
so angular momentum can be reflected, but spin can't? – War Oct 12 '12 at 9:41
That's right, unless you use some unusual definition of the Schrödinger equation. – akhmeteli Oct 12 '12 at 9:47
What is understood by Schrödinger equation here and how to interpret should? – Nick Kidman Oct 12 '12 at 11:11
Schrödinger equation is understood as the second equation in en.wikipedia.org/wiki/Schr%C3%B6dinger_equation , marked "Time-dependent Schrödinger equation (single non-relativistic particle)". If you understand it as the first equation there (marked "Time-dependent Schrödinger equation (general)"), then you include, e.g., the Dirac equation there and what not. I cannot add anything to dictionary definitions of "should". – akhmeteli Oct 12 '12 at 11:44
This answer is incorrect. The term Schrödinger's equation refers to any equation of the form $i\hbar\frac{d}{dt}|\Psi\rangle =\hat{H}|\Psi\rangle$, or coordinate representations of it. For a single spinless nonrelativistic particle, this reduces to the form $i\hbar\frac{\partial}{\partial t}\Psi(\mathbf{x},t)=-\frac{\hbar^2}{2m}\nabla^2\Psi(\mathbf{x},t)+V(\mathbf{x},t)\Psi(\mathbf{x},t)$ you quote from Wikipedia. In other cases it can be quite different; one example of this is the Pauli equation, also known as the Schrödinger-Pauli equation. – Emilio Pisanty Oct 12 '12 at 16:59

Answer: I think we can talk about spin and spin interactions with the standard Schrödinger equation. Start with spin-orbit coupling or LS coupling. Next see the Zeeman effect, and especially the Paschen-Back effect. You need perturbation theory to pick up on spin effects given the standard Schrödinger model of the atom, as seen on Wikipedia: first-order perturbation theory with these fine-structure corrections yields the following formula for the hydrogen atom in the Paschen-Back limit: [2]
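As a concrete illustration of the tensor-product remark in the comments above (my own sketch, not part of the thread), one can attach a spin-1/2 factor to a small spinless Hamiltonian with Kronecker products and then add a Zeeman-like term that acts only on the spin factor. The 3-site hopping matrix and the field strength muB are arbitrary assumptions; without a spin-dependent term the spin factor just doubles every level, and with it each level splits.

```python
import numpy as np

# A toy spinless Hamiltonian on a 3-site chain (arbitrary units, assumed for illustration).
H_space = np.array([[ 0.0, -1.0,  0.0],
                    [-1.0,  0.0, -1.0],
                    [ 0.0, -1.0,  0.0]])

# Pauli matrix and identities for the spin-1/2 factor space.
sigma_z = np.array([[1.0, 0.0], [0.0, -1.0]])
I_spin = np.eye(2)
I_space = np.eye(3)

# Without a spin-dependent interaction the spin factor just rides along:
H_total = np.kron(H_space, I_spin)

# Adding a Zeeman-like term -muB*sigma_z couples to the spin and splits each level.
muB = 0.1   # assumed field strength
H_zeeman = H_total - muB * np.kron(I_space, sigma_z)

print("degenerate doublets:", np.round(np.linalg.eigvalsh(H_total), 3))
print("Zeeman-split levels: ", np.round(np.linalg.eigvalsh(H_zeeman), 3))
```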
Durham e-Theses

Localised conduction electrons in carbon nanotubes and related structures

Watson, Michael J. (2005) Localised conduction electrons in carbon nanotubes and related structures. Doctoral thesis, Durham University.

Single localized polaron (quasiparticle) states are considered in structures relating to carbon nanotubes. The Hamiltonian is derived in the tight-binding approximation, first on a hexagonal lattice and later on a general carbon nanotube with specifiable chirality, and shares close links with the Davydov model of excitations of a one-dimensional molecular chain. First-order interactions of the lattice degrees of freedom with the electron on-site and exchange terms are included. The system equations are shown, under certain approximations, to share a close relationship with the nonlinear Schrödinger equation - an equation that is known to possess localised solutions. The ground state of the system is investigated numerically and is found to depend crucially upon the strengths of the electron-phonon interactions.

Item Type: Thesis (Doctoral)
Award: Doctor of Philosophy
Thesis Date: 2005
Copyright: Copyright of this thesis is held by the author
Deposited On: 09 Sep 2011 09:54
Psychology Wiki

Computational chemistry

Molecules consist of nuclei and electrons, so the methods of quantum mechanics apply. Computational chemists often attempt to solve the non-relativistic Schrödinger equation, with relativistic corrections added, although some progress has been made in solving the fully relativistic Schrödinger equation. It is, in principle, possible to solve the Schrödinger equation, in either its time-dependent form or time-independent form as appropriate for the problem in hand, but this in practice is not possible except for very small systems. Therefore, a great number of approximate methods strive to achieve the best trade-off between accuracy and computational cost.

Present computational chemistry can routinely and very accurately calculate the properties of molecules that contain no more than 10-40 electrons. The treatment of larger molecules that contain a few dozen electrons is computationally tractable by approximate methods such as density functional theory (DFT). There is some dispute within the field whether the latter methods are sufficient to describe complex chemical reactions, such as those in biochemistry. Large molecules can be studied by semi-empirical approximate methods. Even larger molecules are treated with classical mechanics in methods called molecular mechanics.

In theoretical chemistry, chemists and physicists together develop algorithms and computer programs to predict atomic and molecular properties and reaction paths for chemical reactions. Computational chemists, in contrast, may simply apply existing computer programs and methodologies to specific chemical questions. There are two different aspects to computational chemistry:

• Computational studies can be carried out in order to find a starting point for a laboratory synthesis, or to assist in understanding experimental data, such as the position and source of spectroscopic peaks.

Several major areas may be distinguished within computational chemistry:

• The prediction of the molecular structure of molecules by the use of the simulation of forces to find stationary points on the energy hypersurface as the position of the nuclei is varied.
• Computational approaches to help in the efficient synthesis of compounds.
• Computational approaches to design molecules that interact in specific ways with other molecules (e.g. drug design).

Molecular structure

A given molecular formula can represent a number of molecular isomers. Each isomer is a local minimum on the energy surface (called the potential energy surface) created from the total energy (electronic energy plus repulsion energy between the nuclei) as a function of the coordinates of all the nuclei. A stationary point is where the derivative of that energy with respect to all displacements of the nuclei is zero. A local minimum is where all such displacements lead to an increase in energy. The local minimum that is lowest is called the global minimum and corresponds to the most stable isomer. If there is one particular coordinate change that leads to a decrease in the total energy in both directions, the stationary point is a transition structure and the coordinate is the reaction coordinate. This process of determining stationary points is called geometry optimisation.
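To make the idea of stationary points and geometry optimisation concrete, here is a small illustrative sketch (not from the article) that relaxes a point on a toy two-dimensional potential energy surface by steepest descent and then classifies the resulting stationary point from the eigenvalues of a numerical Hessian, in the spirit of the frequency analysis discussed in the next section. The model surface, step size and starting geometry are arbitrary assumptions.

```python
import numpy as np

# A toy two-dimensional "potential energy surface" (arbitrary model, not a real molecule).
def energy(q):
    x, y = q
    return (x**2 - 1.0)**2 + 2.0 * y**2          # double well along x, harmonic in y

def gradient(q, h=1e-5):
    # Numerical gradient by central differences.
    g = np.zeros(2)
    for i in range(2):
        dq = np.zeros(2); dq[i] = h
        g[i] = (energy(q + dq) - energy(q - dq)) / (2.0 * h)
    return g

def hessian(q, h=1e-4):
    # Numerical Hessian, used to classify the stationary point.
    H = np.zeros((2, 2))
    for i in range(2):
        dq = np.zeros(2); dq[i] = h
        H[:, i] = (gradient(q + dq) - gradient(q - dq)) / (2.0 * h)
    return H

# Steepest-descent geometry optimisation from an arbitrary starting geometry.
q = np.array([0.8, 0.5])
for _ in range(500):
    q -= 0.05 * gradient(q)

eigvals = np.linalg.eigvalsh(hessian(q))
kind = "local minimum" if np.all(eigvals > 0) else "transition structure or higher-order point"
print(f"stationary point {q.round(3)}, Hessian eigenvalues {eigvals.round(3)} -> {kind}")
```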
The determination of molecular structure by geometry optimisation became routine only when efficient methods for calculating the first derivatives of the energy with respect to all atomic coordinates became available. Evaluation of the related second derivatives allows the prediction of vibrational frequencies if harmonic motion is assumed. In some ways more importantly, it allows the characterisation of stationary points. The frequencies are related to the eigenvalues of the matrix of second derivatives (the Hessian matrix). If the eigenvalues are all positive, then the frequencies are all real and the stationary point is a local minimum. If one eigenvalue is negative (an imaginary frequency), the stationary point is a transition structure. If more than one eigenvalue is negative, the stationary point is a more complex one and is of little interest. If found, it is necessary to move the search away from it to continue looking for local minima and transition structures.

The total energy is determined by approximate solutions of the time-dependent Schrödinger equation, usually with no relativistic terms included, and making use of the Born-Oppenheimer approximation which, based on the much higher velocity of the electrons in comparison with the nuclei, allows the separation of electronic and nuclear motions, and simplifies the Schrödinger equation. This leads to evaluating the total energy as a sum of the electronic energy at fixed nuclei positions plus the repulsion energy of the nuclei. A notable exception is provided by certain approaches called direct quantum chemistry, which treat electrons and nuclei on a common footing. Density functional methods and semi-empirical methods are variants on the major theme. For very large systems the total energy is determined using molecular mechanics. The ways of determining the total energy to predict molecular structures are:

Ab initio methods

Main article: Ab initio quantum chemistry methods

The programs used in computational chemistry are based on many different quantum-chemical methods that solve the molecular Schrödinger equation associated with the molecular Hamiltonian. Methods that do not include any empirical or semi-empirical parameters in their equations - being derived directly from theoretical principles, with no inclusion of experimental data - are called ab initio methods. This does not imply that the solution is an exact one. They are all approximate quantum mechanical calculations. It means that a particular approximation is carefully defined and then solved as exactly as possible. If numerical iterative methods have to be employed, the aim is to iterate until full machine accuracy is obtained (the best that is possible with a finite word length on the computer).

Electron correlation

The simplest type of ab initio electronic structure calculation is the Hartree-Fock (HF) scheme, in which the Coulombic electron-electron repulsion is not specifically taken into account. Only its average effect is included in the calculation. As the basis set size is increased, the energy and wave function tend to a limit called the Hartree-Fock limit. Many types of calculations, known as post-Hartree-Fock methods, begin with a Hartree-Fock calculation and subsequently correct for electron-electron repulsion, referred to also as electronic correlation. As these methods are pushed to the limit, they approach the exact solution of the non-relativistic Schrödinger equation.
In order to obtain exact agreement with experiment, it is necessary to include relativistic and spin-orbit terms, both of which are only really important for heavy atoms. In all of these approaches, in addition to the choice of method, it is necessary to choose a basis set. This is a set of functions, usually centred on the different atoms in the molecule, which are used to expand the molecular orbitals with the LCAO ansatz. Ab initio methods need to define a level of theory (the method) and a basis set. The Hartree-Fock wave function is a single configuration or determinant. In some cases, particularly for bond-breaking processes, this is quite inadequate and several configurations need to be used. Here the coefficients of the configurations and the coefficients of the basis functions are optimised together. The total molecular energy can be evaluated as a function of the molecular geometry, in other words the potential energy surface.

Example: Is Si2H2 like acetylene (C2H2)?

A series of ab initio studies of Si2H2 shows clearly the power of ab initio computational chemistry. They go back over 20 years, and most of the main conclusions were reached by 1995. The methods used were mostly post-Hartree-Fock, particularly configuration interaction (CI) and coupled cluster (CC). Initially the question was whether Si2H2 had the same structure as ethyne (acetylene), C2H2. Slowly (because this started before geometry optimization was widespread), it became clear that linear Si2H2 was a transition structure between two equivalent trans-bent structures and that it was rather high in energy. The ground state was predicted to be a four-membered ring bent into a 'butterfly' structure with hydrogen atoms bridged between the two silicon atoms. Interest then moved to look at whether structures equivalent to vinylidene - Si=SiH2 - existed. This structure is predicted to be a local minimum, i.e. an isomer of Si2H2, lying higher in energy than the ground state but below the energy of the trans-bent isomer. Then, surprisingly, a new isomer was predicted by Brenda Colegrove in Henry F. Schaefer, III's group[1]. This prediction was so surprising that it needed extensive calculations to confirm it. It requires post-Hartree-Fock methods to obtain a local minimum for this structure. It does not exist on the Hartree-Fock energy hypersurface. The new isomer is a planar structure with one bridging hydrogen atom and one terminal hydrogen atom, cis to the bridging atom. Its energy is above the ground state but below that of the other isomers[2].

Similar results were later obtained for Ge2H2 [3] and SiGeH2 [4]. More interestingly, similar results were obtained for Al2H2 [5] (and then Ga2H2 [6] and AlGaH2 [7]) which have two electrons less than the Group 14 molecules. The only difference is that the four-membered ring ground state is planar and not bent. The cis-mono-bridged and vinylidene-like isomers are present. Experimental work on these molecules is not easy, but matrix isolation spectroscopy of the products of the reaction of hydrogen atoms and silicon and aluminium surfaces has found the ground state ring structures and the cis-mono-bridged structures for Si2H2 and Al2H2. Theoretical predictions of the vibrational frequencies were crucial in understanding the experimental observations of the spectra of a mixture of compounds.
This may appear to be an obscure area of chemistry, but the differences between carbon and silicon chemistry are always a lively question, as are the differences between group 13 and group 14 (mainly the B and C differences). The silicon and germanium compounds were the subject of a Journal of Chemical Education article[8].

Density functional methods

Main article: Density functional theory

Density functional theory (DFT) methods are often considered to be ab initio methods for determining the molecular electronic structure, even though many of the most common functionals use parameters derived from empirical data, or from more complex calculations. This means that they could also be called semi-empirical methods. It is best to treat them as a class on their own. In DFT, the total energy is expressed in terms of the total electron density rather than the wave function. In this type of calculation, there is an approximate Hamiltonian and an approximate expression for the total electron density. DFT methods can be very accurate for little computational cost. The drawback is that, unlike ab initio methods, there is no systematic way to improve the methods by improving the form of the functional.

Semi-empirical and empirical methods

Main article: Semi-empirical quantum chemistry methods

Semi-empirical quantum chemistry methods are based on the Hartree-Fock formalism, but make many approximations and obtain some parameters from empirical data. They are very important in computational chemistry for treating large molecules where the full Hartree-Fock method without the approximations is too expensive. The use of empirical parameters appears to allow some inclusion of correlation effects into the methods. Semi-empirical methods follow what are often called empirical methods, where the two-electron part of the Hamiltonian is not explicitly included. For π-electron systems, this was the Hückel method proposed by Erich Hückel, and for all valence electron systems, the Extended Hückel method proposed by Roald Hoffmann.

Molecular mechanics

Main article: Molecular mechanics

In many cases, large molecular systems can be modelled successfully while avoiding quantum mechanical calculations entirely. Molecular mechanics simulations, for example, use a single classical expression for the energy of a compound, for instance the harmonic oscillator (a minimal sketch of such an energy expression follows below). All constants appearing in the equations must be obtained beforehand from experimental data or ab initio calculations. The database of compounds used for parameterization (the resulting set of parameters and functions is called the force field) is crucial to the success of molecular mechanics calculations. A force field parameterized against a specific class of molecules, for instance proteins, would be expected to have relevance only when describing other molecules of the same class.

Interpreting molecular wave functions

The Atoms in Molecules model developed by Richard Bader effectively links the quantum mechanical picture of a molecule, as an electronic wavefunction, to chemically useful older models such as the theory of Lewis pairs and the valence bond model. Bader has demonstrated that these empirically useful models are connected with the topology of the quantum charge density. This method improves on the use of Mulliken charges.
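Here is the harmonic-term sketch referred to in the molecular mechanics paragraph above: a toy force-field-style energy for a water-like molecule built from two harmonic bond stretches and one harmonic angle bend. It illustrates only the general form of such an energy expression; the force constants, equilibrium values and geometry are assumed numbers, not taken from any published force field.

```python
import numpy as np

# Illustrative parameters for a water-like molecule (assumed values, not a real force field):
# harmonic bond stretches and a harmonic angle bend.
K_BOND, R0 = 450.0, 0.96                      # kcal/mol/A^2, Angstrom
K_ANGLE, THETA0 = 55.0, np.deg2rad(104.5)     # kcal/mol/rad^2, radians

def bond_energy(r1, r2):
    d = np.linalg.norm(r1 - r2)
    return K_BOND * (d - R0)**2

def angle_energy(r_h1, r_o, r_h2):
    v1, v2 = r_h1 - r_o, r_h2 - r_o
    cos_t = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    theta = np.arccos(np.clip(cos_t, -1.0, 1.0))
    return K_ANGLE * (theta - THETA0)**2

def mm_energy(o, h1, h2):
    """Total molecular-mechanics energy: two O-H stretches plus one H-O-H bend."""
    return bond_energy(o, h1) + bond_energy(o, h2) + angle_energy(h1, o, h2)

# A slightly distorted geometry (Angstrom); the strain energy is relative to equilibrium.
o  = np.array([0.00, 0.00, 0.0])
h1 = np.array([0.99, 0.00, 0.0])
h2 = np.array([-0.25, 0.95, 0.0])
print(f"strain energy = {mm_energy(o, h1, h2):.2f} kcal/mol")
```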
Computational chemical methods in solid state physics

Main article: Computational chemical methods in solid state physics

Chemical dynamics

Once the electronic and nuclear variables are separated (within the Born-Oppenheimer representation), in the time-dependent approach, the wave packet corresponding to the nuclear degrees of freedom is propagated via the time evolution operator associated with the time-dependent Schrödinger equation (for the full molecular Hamiltonian). In the complementary energy-dependent approach, the time-independent Schrödinger equation is solved using the scattering theory formalism. The potential representing the interatomic interaction is given by the potential energy surfaces. In general, the potential energy surfaces are coupled via the vibronic coupling terms.

Molecular dynamics (MD) examines (using Newton's laws of motion) the time-dependent behavior of systems, including vibrations or Brownian motion, using a classical mechanical description. MD combined with density functional theory leads to the Car-Parrinello method.

Software packages

A number of self-sufficient software packages include many quantum-chemical methods, and in some cases molecular mechanics methods. The following table illustrates the capabilities of the most versatile software packages that show an entry in two or more columns of the table. There are separate lists for specialized programs.

Package | Molecular Mechanics | Semi-Empirical | Hartree-Fock | Post-Hartree-Fock methods | Density Functional Theory

See also

Cited References

1. Colegrove, B. T., Schaefer, Henry F. III (1990). Disilyne (Si2H2) revisited. Journal of Physical Chemistry 94: 5593.
2. Grev, R. S., Schaefer, Henry F. III (1992). The remarkable monobridged structure of Si2H2. Journal of Chemical Physics 97: 7990.
3. Palágyi, Zoltán, Schaefer, Henry F. III, Kapuy, Ede (1993). Ge2H2: A molecule with a low-lying monobridged equilibrium geometry. Journal of the American Chemical Society 115: 6901-6903.
4. O'Leary, P., Thomas, J. R., Schaefer III, H. F., Duke, B. J. and B. O'Leary (1995). A study of the Silagermylyne (SiGeH2) molecule: A new monobridged structure. International Journal of Quantum Chemistry, Quantum Chemistry Symposium 29: 593-604.
5. Stephens, J. C., Bolton, E. E., Schaefer, H. F. III, and Andrews, L. (1997). Quantum mechanical frequencies and matrix assignments to Al2H2. Journal of Chemical Physics 107: 119-223.
6. Palágyi, Zoltán, Schaefer, Henry F. III, Kapuy, Ede (1993). Ga2H2: planar dibridged, vinylidene-like, monobridged and trans equilibrium geometries. Chemical Physics Letters 203: 195-200.
7. Thomas, R., O'Leary, P., DeLeeuw, B. J., Schaefer III, H. F., Duke, B. J., and O'Leary, B. (1993). The structurally-rich potential energy surface of the Alagallylyne (AlGaH2) molecule. Journal of Physical Chemistry 106: 7372-7379.
8. DeLeeuw, B. J., Grev, R. S. and Schaefer, Henry F. III (1992). A comparison and contrast of selected saturated and unsaturated hydrides of group 14 elements. Journal of Chemical Education 69: 441.

Other references

• T. Clark, A Handbook of Computational Chemistry, Wiley, New York (1985)
• C. J. Cramer, Essentials of Computational Chemistry, John Wiley & Sons (2002)
• R. Dronskowski, Computational Chemistry of Solid State Materials, Wiley-VCH (2005)
• F. Jensen, Introduction to Computational Chemistry, John Wiley & Sons (1999)
• D. Rogers, Computational Chemistry Using the PC, 3rd Edition, John Wiley & Sons (2003)
Ostlund, Modern Quantum Chemistry, McGraw-Hill (1982) • D. Young Computational Chemistry: A Practical Guide for Applying Techniques to Real World Problems, John Wiley & Sons (2001) • David Young's Introduction to Computational Chemistry External links Edit ar:كيمياء حاسوبية ca:Química computacional cs:Výpočetní chemie de:Computerchemie es:Química computacional id:Kimia komputasihe:כימיה חישובית hu:Kémiai számítástechnikath:เคมีการคำนวณ vi:Hóa học tính toán zh:计算化学 Around Wikia's network Random Wiki
I read this on Wikipedia: [...] That most tangible way of expressing the essence of quantum mechanics is that we live in a universe of quantized angular momentum and the Planck constant is the quantum. [...] And I'm wondering if anyone can explain this a little more. Is it saying that ALL action is made up of angular momentum which is quantised in units of Planck's constant? Or is it just saying that all matter is composed of particles which have quantised angular momentum? If the answer is the former, this would mean that a photon's movement through space is a result of angular momentum, which seems strange. Anyway, if you could help me out with a little explanation, that would be great.

Angular momentum is the most basic observable where QM differs from classical mechanics. In QM angular momentum takes only discrete values whereas (unless the particle is trapped in a finite region of space) momentum and position take continuous values in QM just as in classical mechanics. – user10001 Mar 5 '13 at 12:30

However, the real essence of QM is not in the fact that angular momentum takes discrete values but in the fact that observables generally don't commute (whereas in classical mechanics they always commute). – user10001 Mar 5 '13 at 12:39

@user10001 Can you please explain what you mean by "observables don't commute"? What is "commute"? Thanks in advance! – Cheeku Mar 5 '13 at 13:48

@Cheeku Observables in QM are matrices rather than functions, so they do not commute in general (i.e. AB ≠ BA). In more simple terms, suppose S is a physical system (say a particle) and let A and B be two (quantum) observables associated with S. Say A = position and B = momentum. Now to know the values of A and B you have two choices: i) first measure A and then measure B; ii) first measure B and then measure A. Classically both ways are equivalent and will give the same result, while for quantum systems these two ways of measurement are in general not equivalent and give different values of A and B. – user10001 Mar 5 '13 at 14:20

@user10001: surely angular momentum is not quantised for an unbound state any more than position and momentum are. – John Rennie Mar 5 '13 at 16:26

Accepted answer: Over the decades generations of teachers have attempted to come up with an intuitive way of introducing quantum mechanics, and all have failed since it's fundamentally non-intuitive. These days students are usually just presented with the Schrödinger equation and told to get on with it. By solving the Schrödinger equation you are deriving energy eigenfunctions, so your starting point would be that the basis of QM is that energy is quantised (well, for most bound systems anyway). The Wikipedia article is attempting to introduce QM as quantised angular momentum. I don't know if this is technically possible, though I would have guessed not, since I can't see how you'd derive the radial part of the hydrogen wavefunction without considering energy. Whether it's technically possible or not, it seems to me to be unhelpful and confusing for students new to QM. I would ignore that article and find some other better introduction. A formal introduction would probably start from the axioms of quantum mechanics, though these are far from intuitive. The Schrödinger equation follows from the axiom that time evolution acts linearly on states.

Thanks.
I'm going to find another intro like you suggested. This is all a little confusing :| – JoeRocc Mar 6 '13 at 13:49

This is a comment turned into an answer. Classically one can define an angular momentum of a straight track with respect to any axis, the classical angular momentum L = r × p. The real meaning though comes from rotational states about a central axis. In this case a potential exists which constrains the particle to revolve about the axis. Quantum mechanically one defines an angular momentum in potential problems where there exists a solution as a wave function, and the corresponding angular momentum operator acts on these functions. The result of this action will display the quantization of angular momentum, the value being a multiple of ħ, as shown in the table of the link. The not-very-clear argument you are quoting from the wiki article is based on the assumption that quantum mechanically there will be an angular momentum only if a potential exists binding a particle, which will then inevitably have a quantized angular momentum. In the microcosm there cannot be a rotation about a center without a potential problem, so it is true, but it is not, in my opinion, the easiest way to understand quantum mechanics!

The Wikipedia page has been modified since the question was posed and the cited text disappeared, which is a good thing as it does not make much sense. Now it can be interesting to note that the preceding sentence was: "Action is a general physical concept related to dynamics and is most easily recognized in the form of angular momentum." So I take it that the original wiki editor had in mind the path integral formulation, where the action (a quantity with the same units as angular momentum, and the Planck constant) is responsible for the phase relationship between alternate paths and thus governs their interference. From this perspective the action could maybe be argued to represent the essence of quantum mechanics. But it would be very misleading to formulate this idea by confusing action and angular momentum; see this post for a related discussion.
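The points raised in the comments above — that angular momentum takes discrete values and that quantum observables need not commute — can be checked numerically with explicit matrices. The sketch below is an illustrative addition, not part of the original exchange; it assumes j = 1 and units with ħ = 1, builds Jx, Jy, Jz from the ladder operator, and verifies both the eigenvalues of Jz and the commutator [Jx, Jy] = iħJz.

```python
# Illustrative sketch: angular momentum matrices for j = 1 (units with hbar = 1).
import numpy as np

j = 1
m = np.arange(j, -j - 1, -1)                      # m = +1, 0, -1
Jz = np.diag(m.astype(float))

# Ladder operator J+: <j, m+1 | J+ | j, m> = sqrt(j(j+1) - m(m+1))
Jp = np.zeros((3, 3))
for k in range(1, 3):                             # raise m = 0 -> +1 and m = -1 -> 0
    mm = m[k]
    Jp[k - 1, k] = np.sqrt(j * (j + 1) - mm * (mm + 1))

Jm = Jp.T                                         # lowering operator
Jx = 0.5 * (Jp + Jm)
Jy = -0.5j * (Jp - Jm)

print("eigenvalues of Jz:", np.linalg.eigvalsh(Jz))          # -1, 0, +1 (units of hbar)
comm = Jx @ Jy - Jy @ Jx
print("[Jx, Jy] equals i*Jz:", np.allclose(comm, 1j * Jz))   # non-commuting observables
```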
Measuring angular momentum in atomic systems

1. Perhaps rather an elementary question, but I can't find a clear answer in my textbooks: Would I be right in thinking that we never actually measure directly the angular momentum of atomic systems, but rather: using the results of QM calculations about structure, and knowledge of the selection rules, we *infer* it from spectral transitions?

3. jtbell (Staff Mentor): What would you consider to be a "direct measurement" of angular momentum? The Einstein–de Haas effect demonstrates that the microscopic angular momentum of electrons in a metal contributes to the object's total macroscopic angular momentum. Briefly (and probably oversimplified), you start with an object that's not rotating, then flip the spins of the electrons, and observe that the object starts to rotate macroscopically in order to maintain the same total angular momentum.

4. You've identified the origin of my question, as I could not think of how an experiment could observe angular momentum in atomic systems directly. Clearly, the Einstein–de Haas effect does demonstrate a way to do this - and spin angular momentum at that (which is what I'm pursuing). However, I'm still left wondering how angular momentum is identified from atomic spectroscopy. I doubt that early work on atomic spectra relied on the Einstein–de Haas effect. Can you, or anyone else, tell me how it's conventionally done?

5. ZapperZ (Staff Emeritus): Er... the angular momentum of an atom is the origin of magnetism in solids! One doesn't need any "spectroscopy" studies to get that.

6. OK - let me put it a bit more precisely: I'm asking about the orbital and spin angular momenta of electrons in, for example, low density gaseous states. Textbooks glibly mention electrons being in states |n,l,m,s>. But how, observationally, do we come to know what m(l) and m(s) are for the states between which we observe spectral lines? Suppose I excite some sodium vapour, and as Wikipedia states: "One notable atomic spectral line of sodium vapor is the so-called D-line, which may be observed directly as the sodium flame-test line (see Applications) and also the major light output of low-pressure sodium lamps (these produce an unnatural yellow, rather than the peach-colored glow of high pressure lamps). The D-line is one of the classified Fraunhofer lines observed in the visible spectrum of the sun's electromagnetic radiation. Sodium vapor in the upper layers of the sun creates a dark line in the emitted spectrum of electromagnetic radiation by absorbing visible light in a band of wavelengths around 589.5 nm. This wavelength corresponds to transitions in atomic sodium in which the valence-electron transitions from a 3p to 3s electronic state. Closer examination of the visible spectrum of atomic sodium reveals that the D-line actually consists of two lines called the D1 and D2 lines at 589.6 nm and 589.0 nm, respectively. This fine structure results from a spin-orbit interaction of the valence electron in the 3p electronic state. The spin-orbit interaction couples the spin angular momentum and orbital angular momentum of a 3p electron to form two states that are respectively notated as 3p(²P⁰₁/₂) and 3p(²P⁰₃/₂) in the LS coupling scheme. The 3s state of the electron gives rise to a single state which is notated as 3s(²S₁/₂) in the LS coupling scheme. The D1-line results from an electronic transition between the 3s(²S₁/₂) lower state and the 3p(²P⁰₁/₂) upper state.
The D2-line results from an electronic transition between the 3s(²S₁/₂) lower state and the 3p(²P⁰₃/₂) upper state. Even closer examination of the visible spectrum of atomic sodium would reveal that the D-line actually consists of a lot more than two lines. These lines are associated with hyperfine structure of the 3p upper states and 3s lower states. Many different transitions involving visible light near 589.5 nm may occur between the different upper and lower hyperfine levels.[8][9]" (see original in Wiki to see the term symbols displayed correctly). Now, how precisely do we come to be able to state that a transition is between any of the above two states - i.e. to identify the states' various quantum numbers including the angular momenta? As posed in my original question - the only way I can see this being achieved is if one first *calculates* the structure of the spectrum and thus the associated n, l, m, s values, and then one assigns the observed spectral lines to those theoretically identified states. So one never actually observes the m(l) and m(s) values, but as mentioned in the first post, one infers them. Or is there some other way to do this?

7. jtbell (Staff Mentor): One way (there are probably others) to associate spin and orbital angular momentum quantum numbers of initial and final states with particular spectral lines is via the Zeeman effect. When you apply an external magnetic field, the energy levels of the different spin states shift and/or split by amounts that depend on the angular momentum quantum numbers, and on the strength of the magnetic field.

8. Yes, what you say is correct, but I think I'm failing to make explicit the point of my question. It seems to me that all the answers I've received come from hindsight, as is the case with textbooks. But how do we know *from the outset* what the quantum numbers are (which lines correspond to transitions from the lowest values), and what the units of spin and orbital angular momentum are initially, unless we have an atomic model of some sort to start with. For example: If you look at Ch. 1, Vol. 1 of P. W. Atkins' "Molecular Quantum Mechanics", he outlines how Balmer, Rydberg and Ritz worked out some regularities in spectral lines which led Bohr to propose a model for hydrogen, based on a number of assumptions, including that: "The stationary states are to be determined by the condition that the ratio of the total energy of the electron to its frequency of rotation shall be an integral multiple of h/2. For circular orbits this is equivalent to the restriction of the angular momentum of the electron to integral multiples of h/(2π)" ...and the calculation based on (all) the postulates yields the electron's energies in the hydrogen atom as: E(n) = −μe⁴/(8ε₀²h²n²) (μ being a reduced mass and ε₀ the vacuum permittivity) ...where n = 1, 2, 3, ... is the first quantum number. And the result agrees well with experiment (as far as early observations went). So, it appears that even at the outset, the unit of angular momentum is fed into the model, and not itself observed. The first quantum number is identified by a model, and I suspect that the *ranges* of possible values of l and m(l) drop out of the spherical harmonics as solutions to the Schrödinger equation - and that these provide the original basis for *interpretation* of the observations, rather than direct measurement of orbital angular momentum. And m(s) emerges from the doublet structure of spectral lines but still refers to the *calculated* unit h/(2π), again rather than being measured directly.
I think I'm convincing myself that my original point was true, but it would be good to know if I'm wrong. Thanks for the stimulus of your contributions.

9. jtbell (Staff Mentor): Bohr's atomic model has been superseded for over eighty years by modern quantum mechanics. The quantum numbers for orbital angular momentum arise directly from the solution of the Schrödinger equation. For spin, I think you have to go further to the relativistic Dirac equation, and assume that the magnitude of the spin angular momentum has a certain value; but after that, the mathematics of addition of quantum-mechanical angular momentum determines everything else. (Someone with more expertise than I in atomic physics is welcome to correct me on this.)
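As a small illustration of the model-based inference discussed in this thread (not part of the original posts), the sketch below plugs standard constants into the quoted Bohr/Schrödinger result E(n) = −μe⁴/(8ε₀²h²n²) and computes a transition wavelength, the kind of calculated prediction that is then matched against observed spectral lines.

```python
# Illustrative sketch: hydrogen energy levels E(n) = -mu e^4 / (8 eps0^2 h^2 n^2)
# and the wavelength of the n = 3 -> 2 (Balmer-alpha) transition.
m_e, m_p = 9.10938e-31, 1.67262e-27                # electron and proton masses (kg)
e, eps0 = 1.60218e-19, 8.85419e-12                 # elementary charge (C), vacuum permittivity (F/m)
h, c0 = 6.62607e-34, 2.99792e8                     # Planck constant (J s), speed of light (m/s)

mu = m_e * m_p / (m_e + m_p)                       # reduced mass of the electron-proton system

def E(n):
    """Bound-state energy in joules for principal quantum number n."""
    return -mu * e**4 / (8 * eps0**2 * h**2 * n**2)

for n in (1, 2, 3):
    print(f"E({n}) = {E(n) / e:.2f} eV")           # about -13.60, -3.40, -1.51 eV

lam = h * c0 / (E(3) - E(2))                       # photon wavelength of the 3 -> 2 transition
print(f"n = 3 -> n = 2 wavelength: {lam * 1e9:.1f} nm")  # ~656 nm (Balmer-alpha)
```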
Quantum Metaphysics Victor J. Stenger University of Hawaii Paper presented at the Conference on New Spiritualities, Westminster College, Oxford, England, March 1995. Published in Modern Spiritualities, Laurence Brown, Bernard C. Farr, and R, Joseph Hoffmann (eds.) Amherst NY: Prometheus Books, 1997. Also published in The Scientific Review of Alternative Medicine 1(1), 26-30, 1997.  In his talk at this conference, Antony Flew defined spirit as "incorporeal substance." As a physicist, I can relate to that. If such a thing as spirit exists, then I have no problem with it being incorporeal. It does not have to be made of matter as long as it has "substance." I interpret this to mean that although spirit may not be composed of quarks and electrons or other known constituents of matter, it still may be a meaningful concept, amenable to empirical testing or other rational analysis. One test for whether a concept has "substance" is to use Occam's razor to excise it from all discourse. If the essential content of discourse remains unchanged, then I would say the concept has no substance. Of course, like most scientific tests, this can only be used to falsify the concept, not verify it. The idea of spirit as a substantial component of the universe is of course an ancient one, fundamental to the traditional dualistic view most humans hold of the universe and themselves as part of that universe. In this view, planets, rock, trees, and the human body are made of matter, but matter is not everything. Beyond matter exists mind, soul, or spirit, an etherial substance that may even be more "real" than matter - the very quintessence of being. In the mid-nineteenth century, many scientists thought that the marvelous new discoveries of science, and the methods of science, could be applied to the world of the spirit as well as to the world of matter. For example, Sir Oliver Lodge, a physicist who had helped demonstrate the reality of electromagnetic waves, argued that if wireless telegraphy was possible, then so was wireless telepathy. Lodge, like most others of the period, believed that electromagnetic waves, including light, were vibrations of a frictionless medium, the aether, that pervaded the universe. It seemed plausible that this medium might also be responsible for the transmission thoughts, that it was the long-sought substance of mind and spirit. The electromagnetic field, like the gravitational field proposed centuries before by Newton, exhibited a holistic character that fit in well with spiritual ideas. Matter was particulate, occurring in lumps, and analyzed by the distasteful methods of reductionism in which objects are reduced to the sum of their parts. Fields, on the other hand, were continuous - holistic - occurring everywhere in space, connecting everything to everything else, and analyzable only in the whole. Even today, occultists confuse natural electromagnetic effects with "auras" surrounding living things. A popular con game at psychic fairs is the sale of "aura photographs" that are simply made with infrared-sensitive film. Kirlian photography is another example of a simple electromagnetic phenomenon, corona discharge, that is given imaginary spiritual significance. Although the atomic theory of matter was well developed by the late nineteenth century, it had not yet been convincingly verified at that time. Many chemists, and a few physicists like Lodge, still held open the possibility that matter might be continuous. 
The mathematics of fields had been successfully applied to solids and fluids, which appear continuous and wavy on the everyday scale. These scientists suggested that continuity, not atomism, constituted the prime unifying principle for describing the universe of both matter, light, and perhaps spirit. This comforting notion was shattered as the twentieth century got underway. First, the aether was found not to exist. Second, the atomic theory was confirmed. Third, light was found to be a component of matter, composed of particles we now call photons. And so, discreteness, rather than continuity, became the unifying principle of physics, with the universe composed solely of particulate matter. Quantum mechanics was developed to describe material phenomena in all their various, discrete forms. However, the situation was not quite so tidy as this short and simplified review may imply. The phenomena that originally led people to postulate its wave nature of light did not go away. Those observations were correct. Furthermore, other forms of matter were shown to also exhibit wave properties. Electrons were found to diffract through small openings in exactly the same way as light. The fact that particles sometimes behaved as waves and waves as particles was called the wave-particle duality. Although matter was sufficient to encompass all known physical phenomena, the apparent two-fold nature of matter gave die hard dualists some comfort. Some associated waves with mind. But waves and particles were not two separate elementary substances but characteristics of the same substance. Whether a physical entity was a wave or a particle seemed to depend on what you measured. Measure its position, and you concluded that the entity is a material body. Measure its wavelength, and you concluded that the entity is some type of continuous field. Furthermore, you can imagine deciding which quantity to measure at the last instant, long after the entity had been emitted from its source, which might be a distant galaxy. Some have inferred from this puzzle that the very nature of the universe is not objective, but depends on the consciousness of the observer. This latest wrinkle on ancient idealism implies that the universe only exists within some cosmic, quantum field of mind, with the human mind part of that field and existing throughout all space and time. Quantum phenomena seem to be very mysterious, and where mysteries are imagined, the supernatural cannot be far behind. However, despite these misgivings, quantum mechanics developed as a quantitative physical theory that has proven itself capable of making calculations and predictions to a high level of accuracy. After seventy years of exhaustive testing, no observation has been found to be inconsistent with quantum mechanics as a formal, mathematical theory. Quantum mechanics dealt early with the problem of the wave nature of matter by introducing a mathematical quantity called the wave function. Schrödinger's equation was used to calculate how the wave function evolved with time; the absolute square of the wave function gave the probability that a body would be found at a particular position. In 1927, Einstein initiated a debate on quantum mechanics with Niels Bohr that continues today, long after their deaths, as others have taken up the arguments one side or the other. Initially Einstein objected to the picture, retained today in most textbooks, in which the wave function instantaneously "collapses" upon measurement. 
He called this a "spooky action at a distance" because it implied that signals must travel at infinite speeds across the wave front to tell the wave function to go to zero in the places where nothing is detected. To modern dualists, the holistic quantum wave function, with its instantaneous collapse upon the act of observation, has provided a new model for the notion of spirit. They have been wittingly and unwittingly encouraged by various statements made by physicists, some of considerable distinction. Eugene Wigner is widely quoted in the new literature of quantum mysticism. He once said: "The laws of quantum mechanics itself cannot be formulated . . . without recourse to the concept of consciousness" (Wigner 1961). A similar statement by John Archibald Wheeler is also often used, to his dismay, in justifying a connection between the quantum and consciousness: "No elementary quantum phenomenon is a phenomenon until it is a registered phenomenon. . . . In some strange sense, this is a participatory universe" (Wheeler 1982). In their book The Conscious Universe, astrophysicist Menas Kafatos and philosopher Robert Nadeau interpret the wave function as ultimate reality itself: ". . . Being, in its physical analogue at least, [has] been 'revealed' in the wave function. . . . . any sense we have of profound unity with the cosmos . . . could be presumed to correlate with the action of the deterministic wave function. . ." (Kafatos 1990). Physicist Amit Goswami sees a "self-aware universe," with quantum mechanics providing support for claims of paranormal phenomena. He says: ". . . psychic phenomena, such as distant viewing and out-of-body experiences, are examples of the nonlocal operation of consciousness . . . Quantum mechanics undergirds such a theory by providing crucial support for the case of nonlocality of consciousness" (Goswami 1993). This view was also promoted by the late novelist Arthur Koestler, who said: ". . . the apparent absurdities of quantum physics . . . make the apparent absurdities of parapsychology a little less preposterous and more digestible." In the United States today, alternative healing is all the rage. Traditional folk healing techniques are touted as holistic, in contrast to the reductionistic methods of modern Western medicine. Again, quantum mechanics provides a source of inspiration. Two recent best sellers by Dr. Deepak Chopra contain the word "quantum" in their titles: Quantum Healing: Exploring the Frontiers of Mind/Body Medicine (Chopra 1989) and Ageless Body, Timeless Mind: The Quantum Alternative to Growing Old (Chopra 1993). Johns Hopkins psychiatrist Patricia Newton explains the mechanism: "(Traditional healers) are able to tap that other realm of negative entropy - that superquantum velocity and frequency of electromagnetic energy and bring them as conduits down to our level. It's not magic. It's not mumbo jumbo. You will see the dawn of the 21st century, the new medical quantum physics really distributing these energies and what they are doing" (Newton 1993). Despite the claims made in many books, neither psychic phenomena (Stenger 1990) nor the vast array of alternate healing methods (Butler 1992) are supported by controlled, replicable laboratory studies. They cannot be used as evidence for mind over matter. Nor can quantum mechanics be used to make these claims more credible.
As we will now see, the mysteries and apparent paradoxes of quantum mechanics arise only when we try to cast the theory in words instead of equations, applying the language of everyday human experience to a physical realm where that experience may not be relevant. The words used to describe quantum mechanics in conventional physics textbooks were gleaned from the writings of Bohr, Werner Heisenberg, and Max Born, the primary authors of what is called the Copenhagen interpretation of quantum mechanics. In Copenhagen, the wave function is simply a mathematical object used to calculate probabilities. The results of measurements are not predetermined, but occur randomly according to the calculated probabilities. The measuring apparatus must be treated classically and is separate from the quantum system under study. No mechanism is provided for wave function collapse, and in fact collapse is not predicted by the Schrödinger equation. Louis de Broglie, who first suggested that particles like electrons have wave properties, proposed in 1927 the first of the class of what is now called hidden variables theories of quantum mechanics. He hypothesized that the wave function is a real field associated with a particle. However, Bohr and his supporters talked most of the community, including de Broglie (but not Einstein or Schrödinger), out of hidden variables and they lay dormant until being resurrected by David Bohm in the 1950s. Bohm, who became the major scientific figure in the quantum mysticism movement, had shown that all the results obtained with the Schrödinger equation can be obtained by familiar classical equations of motion, provided that an additional quantum potential is added to the equations to account for quantum effects (Bohm 1952). However, Bohm's theory, as it was proposed, gave no new empirical predictions; neither he nor his followers have yet produced a mechanism for generating a priori the quantum potential. The hidden variables approach is based on the notion, which Einstein always believed, that quantum mechanics is fine as far as it goes, as a statistical theory, but that some deterministic sub-quantum theory that lies behind physical events remains to be uncovered. Einstein's famous quotation that "God does not play dice" referred to this notion, although he thought Bohm's version was "too cheap" (Born 1971). It should be noted that hidden variables theories are not properly labelled as "interpretations" of quantum mechanics since they imply the existence of a deeper theory, not yet discovered. In the 1960s, John Bell proved an important theorem about hidden variables theories. He showed that any deterministic hidden variables theory capable of giving all the statistical results of standard quantum mechanics must allow for superluminal connections, in violation of Einstein's assertion that no signals can move faster than light (Bell 1964). In the jargon of the trade, deterministic hidden variables theories are nonlocal. In popularized language, they are holistic, allowing for simultaneous connections between all points in space. Bell proposed a definitive experimental test that has now been repeated many times with ever-increasing precision (Aspect 1982). In all cases, the results are fully consistent with quantum mechanics, requiring deterministic hidden variables, if they exist, to be nonlocal.
Instead of giving up on hidden variables because of their apparent conflict with relativity, proponents have taken Bell's theorem to imply hidden variables are even more profound, providing for the holistic universe of the mystic's fondest desires. The problem of nonlocality is dismissed by claiming that no communication of signals faster than light takes place. This conclusion can be proven to be a general property of quantum theory (Eberhard 1989), and will be true for Bohm's theory as long as Bohm's theory is consistent with quantum mechanics. But, as we have seen, Bohm's theory by itself has no unique, testable consequences. We can use Occam's razor to excise it from our discourse, and nothing substantial is changed. The notion of hidden variables has no use unless superluminal connections are observed. This has not yet happened, and so hidden variables remain a non-parsimonious alternative to conventional quantum mechanics. Another interpretation of quantum mechanics that has caught mystics' inner and outer eyes is the many worlds interpretation of Hugh Everett (1957). Everett was able to develop a formalism that solved some of the problems associated with the conventional Copenhagen view. In particular, he included the measuring apparatus in the system being analyzed, unlike Copenhagen where it must be treated as a separate, classical system. In many worlds, the wave function of the universe does not collapse upon a measurement. Instead, the universe splits into parallel universes in which all possible events occur. In Everett's view, these parallel universes are deemed to be "equally real." The idea that the universe is continually splitting into parallel universes whenever a measurement or observation is made strikes many people as a rather extreme solution to the interpretation problems of quantum mechanics. Nevertheless, as long as the parallel universes cannot interact with one another, we can never disprove the concept. If we reject it, we must do so on aesthetic or parsimonious grounds. More recently, a number of theorists have found ways to recast Everett's ideas in a more economical, commonsensical way. This new interpretation, which some say represents only a small extension of Bohr's thinking, is called consistent histories (Omnès 1994). In the consistent histories view, as in Copenhagen and many worlds, the wave function allows you to calculate the probabilities that the universe will take various paths. Unlike many worlds, these paths are not deemed to be "equally real." Instead, the path taken in our universe is chosen randomly, as the toss of a coin. The indeterminism of Copenhagen is retained but, unlike Copenhagen, the wave function "decoheres" rather than collapses upon the act of measurement. Theoretical work has provided for a logically consistent histories theory that agrees with all known data without the introduction of holistic, nonlocal, or mystical elements. In this theory, the only consistent paths (or histories) are those for which probabilities add as they do classically. The quantum-to-classical transition occurs by the mechanism of decoherence induced by measuring instruments or the environment. The idea of decoherence is quite simple. Quantum effects are characterized by phenomena, such as interference and diffraction, that are understood to be coherent properties of the wave function. These occur because the universe is granular, with matter existing in lumps separated by empty space.
Only where lumps of matter exist, either in the form of a measuring instrument or environmental body, can particle paths be logically defined. At these points, the particles scatter and decohere and classical paths are produced. Classical mechanics follows as the limit of quantum mechanics in a fine-grained universe. In our experience, ordinary light is coherent in air because the probability of a visible photon colliding with an air molecule over the distances involved is small. Gamma ray photons, on the other hand, appear to travel classical paths because they have high probability to scatter, and decohere, over the same distances. By being non-deterministic, consistent histories avoids the problem of nonlocality associated with hidden variables. Some still argue that the wave function is nonlocal, but if it is not a "real" field but a mathematical convenience, who cares? In any case, no signals move faster than the speed of light. Still some commentators argue that any non-deterministic quantum mechanics, be it Copenhagen or consistent histories, is still incomplete. What "causes" the universe to take the path it does, they ask? Deterministic, nonlocal hidden variables are one answer. But, we have seen that they are necessarily nonlocal and we have no empirical evidence for any superluminal or sub-quantum processes. Another even more poorly justified answer is that the path selection is made by consciousness itself. In the quantum mind interpretation of quantum mechanics, the path taken by the universe, whether you care to describe it in terms of wave function collapse or universe-splitting, is actualized by the action of mind (Squires 1990, Stapp 1993, Stapp 1994). Now here the theories become impossibly vague and untestable, so I can only indicate some of the language. In some sense, the wave function of the universe is an etheric cosmic mind spread throughout the universe that acts to collapse itself in some unknown way. The human mind (spirit, soul) is, of course, holistically linked to the cosmic mind and so exists in all space and time. Once again we have an example of what Paul Kurtz calls the "transcendental temptation." And so, quantum mind rescues the dualists from the damage caused by the destruction of the electromagnetic aether. But like so many similar proposals, the theory of quantum mind will get nowhere until it makes some prediction that can be tested empirically. In the meantime, it must be rejected as non-parsimonious, especially since we have in our hands a perfectly economical and logically-consistent theory that agrees with all the data and requires no additional component in the universe beyond matter. The author is grateful for the hospitality provided by the Rutherford Appleton Laboratory in the United Kingdom where this paper was written. Victor J. Stenger is professor of physics and astronomy at the University of Hawaii and the author of Not By Design: The Origin of the Universe (Prometheus Books, 1988) and Physics and Psychics: The Search for a World Beyond the Senses (Prometheus Books, 1990). This paper is based on The Unconscious Quantum: Metaphysics in Modern Physics and Cosmology (Prometheus Books, 1995). Aspect, Alain, Phillipe Grangier, and Roger Gerard 1982. "Experimental Realization of the Einstein-Podolsky-Rosen Gedankenexperiment: A New Violation of Bell's Inequalities." Physical Review Letters 49, p. 91.  Bell, J. S. 1964. Physics 1, p. 195.  Bohm, David 1952.
"A Suggested Interpretation of Quantum Theory in Terms of 'Hidden Variables,' I and II." Physical Review 85, p. 166.  Born, M., ed. 1971. The Born-Einstein Letters. London: Macmillan.  Butler, Kurt, 1992. A Consumer's Guide to Alternative Medicine: A Close Look at Homeopathy, Acupuncture, Faith-Healing, and Other Unconventional Treatments. Buffalo NY: Prometheus Books. Chopra, Deepak. 1989. Quantum Healing: Exploring the Frontiers of Mind/Body Medicine. New York: Bantam.  Eberhard, Phillippe H. and Ronald R. Ross 1989. Found. Phys. Lett. 2, p. 127. Goswami, Amit 1993. The Self-Aware Universe: How Consciousness Creates the Material World. New York: G.P. Putnam's Sons. p. 136.  Everett III, Hugh 1957. Rev. Mod. Phys. 29, p. 454.  Kafatos, Menas and Robert Nadeau 1990. The Conscious Universe: Part and Whole in Modern Physical Theory. New York, Springer-Verlag, p. 124.  Newton, Patricia 1993. Talk before the 98th Annual Meeting of the National Medical Association, San Antonio, Texas. Quotation provided by Bernard Ortiz de Montellano (private communication).  Omnès, Roland J. 1994. The Interpretation of Quantum Mechanics. Princeton: Princeton University Press.  Squires Euan 1990, Conscious Mind in the Physical World. New York: Adam Hilger.  Stapp, Henry P. 1993. Mind, Matter, and Quantum Mechanics. New York: Springer Verlag.  Stapp, Henry P. 1994. Phys. Rev. A 50, p. 18.  Wheeler John Archibald 1982. In Elvee, Richard Q. (ed.) Mind in Nature, San Francisco: Harper and Row, p. 17.  Wigner, E.P. 1961. "The Probability of the Existence of a Self-Reproducing Unit." In Polanyi, M. The Logic of Personal Knowledge. Glencoe, IL: Free Press., p. 232.
Semicircular potential well

In quantum mechanics, the case of a particle in a one-dimensional ring is similar to the particle in a box. The particle follows the path of a semicircle from $0$ to $\pi$ where it cannot escape, because the potential from $\pi$ to $2\pi$ is infinite. Instead there is total reflection, meaning the particle bounces back and forth between $0$ and $\pi$. The Schrödinger equation for a free particle which is restricted to a semicircle (technically, whose configuration space is the circle $S^1$) is

$$-\frac{\hbar^2}{2m}\nabla^2\psi = E\psi \quad (1)$$

Wave function

Using cylindrical coordinates on the one-dimensional semicircle, the wave function depends only on the angular coordinate, and so

$$\nabla^2 = \frac{1}{s^2}\frac{\partial^2}{\partial\phi^2} \quad (2)$$

Substituting the Laplacian in cylindrical coordinates, the wave function is therefore expressed as

$$-\frac{\hbar^2}{2ms^2}\frac{d^2\psi}{d\phi^2} = E\psi \quad (3)$$

The moment of inertia for a semicircle, best expressed in cylindrical coordinates, is $I \stackrel{\mathrm{def}}{=} \iiint_V r^2\,\rho(r,\phi,z)\,r\,dr\,d\phi\,dz$. Solving the integral, one finds that the moment of inertia of a semicircle is $I = ms^2$, exactly the same as for a hoop of the same radius. The Schrödinger equation can now be expressed as

$$-\frac{\hbar^2}{2I}\frac{d^2\psi}{d\phi^2} = E\psi,$$

which is easily solvable. Since the particle cannot escape the region from $0$ to $\pi$, the general solution to this differential equation is

$$\psi(\phi) = A\cos(m\phi) + B\sin(m\phi) \quad (4)$$

Defining $m = \sqrt{2IE/\hbar^2}$, we can calculate the energy as $E = \frac{m^2\hbar^2}{2I}$. We then apply the boundary conditions, where $\psi$ and $\frac{d\psi}{d\phi}$ are continuous and the wave function is normalizable:

$$\int_0^{\pi} \left|\psi(\phi)\right|^2\,d\phi = 1 \quad (5)$$

Like the infinite square well, the first boundary condition demands that the wave function equals 0 at both $\phi = 0$ and $\phi = \pi$:

$$\psi(0) = \psi(\pi) = 0 \quad (6)$$

Since $\psi(0) = 0$, the coefficient $A$ must equal 0 because $\cos(0) = 1$. The wave function also equals 0 at $\phi = \pi$, so we must apply this boundary condition. Discarding the trivial solution where $B = 0$, the wave function satisfies $\psi(\pi) = B\sin(m\pi) = 0$ only when $m$ is an integer, since $\sin(n\pi) = 0$. This boundary condition quantizes the energy, with $E = \frac{m^2\hbar^2}{2I}$ where $m$ is any positive integer. The condition $m = 0$ is ruled out because $\psi = 0$ everywhere, meaning that the particle is not in the potential at all. Negative integers are also ruled out, since they reproduce the same states up to an overall sign. We then normalize the wave function, yielding $B = \sqrt{2/\pi}$. The normalized wave function is

$$\psi(\phi) = \sqrt{\frac{2}{\pi}}\sin(m\phi) \quad (7)$$

The ground state energy of the system is $E = \frac{\hbar^2}{2I}$. Like the particle in a box, there exist nodes in the excited states of the system where both $\psi(\phi)$ and $|\psi(\phi)|^2$ are 0, which means that the probability of finding the particle at these nodes is 0.

Since the wave function depends only on the azimuthal angle $\phi$, the measurable quantities of the system are the angular position and angular momentum, expressed with the operators $\phi$ and $L_z$ respectively. Using cylindrical coordinates, the operators $\phi$ and $L_z$ are expressed as $\phi$ and $-i\hbar\frac{d}{d\phi}$ respectively, where these observables play a role similar to position and momentum for the particle in a box. The commutation and uncertainty relations for angular position and angular momentum are given as follows:

$$[\phi, L_z]\,\psi(\phi) = i\hbar\,\psi(\phi) \quad (8)$$

$$(\Delta\phi)(\Delta L_z) \geq \frac{\hbar}{2}, \quad\text{where}\quad \Delta_{\psi}\phi = \sqrt{\langle\phi^2\rangle_{\psi} - \langle\phi\rangle_{\psi}^2} \;\text{ and }\; \Delta_{\psi}L_z = \sqrt{\langle L_z^2\rangle_{\psi} - \langle L_z\rangle_{\psi}^2} \quad (9)$$

Boundary conditions

As with all quantum mechanics problems, if the boundary conditions are changed so does the wave function. If a particle is confined to the motion of an entire ring ranging from $0$ to $2\pi$, the particle is subject only to a periodic boundary condition (see particle in a ring). If a particle is confined to the motion from $-\frac{\pi}{2}$ to $\frac{\pi}{2}$, the issue of even and odd parity becomes important. The wave equations for such a potential are given as:

$$\psi_o(\phi) = \sqrt{\frac{2}{\pi}}\cos(m\phi) \quad (10)$$

$$\psi_e(\phi) = \sqrt{\frac{2}{\pi}}\sin(m\phi) \quad (11)$$

where $\psi_o(\phi)$ and $\psi_e(\phi)$ are for odd and even $m$ respectively. Similarly, if the semicircular potential well is a finite well, the solution resembles that of the finite potential well, where the angular operators $\phi$ and $L_z$ replace the linear operators $x$ and $p$.
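A quick numerical cross-check of the result above (an illustrative addition, not part of the original article): diagonalizing a finite-difference Hamiltonian on $0 < \phi < \pi$ with $\psi(0) = \psi(\pi) = 0$, in units where $\hbar = I = 1$ so that $E_m = m^2/2$, reproduces the quantized spectrum.

```python
# Illustrative check: finite-difference spectrum of -(1/2) d^2/dphi^2 on (0, pi)
# with Dirichlet boundary conditions, in units hbar = I = 1 so that E_m = m^2 / 2.
import numpy as np

N = 800                                   # interior grid points
phi = np.linspace(0.0, np.pi, N + 2)      # grid including the endpoints phi = 0 and pi
d = phi[1] - phi[0]

# Hamiltonian acting on the interior points only (psi = 0 at the walls)
main = np.full(N, 1.0 / d**2)             # diagonal of -(1/2) d^2/dphi^2
off = np.full(N - 1, -0.5 / d**2)         # off-diagonal elements
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E = np.linalg.eigvalsh(H)[:4]
print("numerical:", np.round(E, 4))       # approximately 0.5, 2.0, 4.5, 8.0
print("analytic :", [m**2 / 2 for m in (1, 2, 3, 4)])
```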
There is no doubt that quantum theory has been remarkably successful and has passed every experimental test it has been subject to. But the interpretation of quantum theory - in particular the meaning of the wavefunction, the role of observers, and the question of what happens to the state of a system during a measurement - have remained a topic of debate and discussion among theoretical physicists and philosophers of science. Gottfried writes that prior to quantum mechanics, physics was a cumulative pursuit in which new concepts such as thermodynamics and electrodynamics could be related to pre-existing notions such as space and time. Even the special and general theories of relativity can be understood in these terms, albeit with major conceptual innovations. Quantum mechanics, however, is different. "In retrospect," writes Gottfried, "the successful developments of physics followed a clear conceptual path beginning in the Principia, but in 1925 [when Heisenberg discovered "matrix" mechanics] this path entered a no-man's-land from which it has not yet emerged." The problem is that although the Schrödinger equation describes the time evolution of the wavefunction, the actual meaning of the wavefunction must be added as an extra axiom to quantum theory. In the orthodox statistical interpretation of quantum mechanics, the wavefunction contains the maximal knowledge that is available about the state of a system, and this wavefunction determines the probabilities that various results will be obtained when measurements are made on the dynamical variables of the system. Finally, it is not possible to assign values to these variables before the measurement is made. Gottfried asks and then answers a rhetorical question: "Could Maxwell have figured out what the wavefunction means had he been handed Schrödinger's equation? It would seem that Maxwell would have needed help from the wonder rabbis of Copenhagen and Göttingen." Gottfried contrasts quantum theory with general relativity: in the latter "you do not need him [Einstein] whispering in your ear." Gottfried then imagines that he is Maxwell and armed only with the Schrödinger equation, the knowledge that it correctly describes all phenomena at the atomic scale (and in non-relativistic cases), and the value of the Planck constant, he tries to derive the familiar statistical interpretation of quantum mechanics. He makes some progress and discovers that, in the classical limit, the Schrödinger equation does not describe a single system "but a population of replicas of such a system moving along a set of trajectories." Later Maxwell is told about the results of the Stern-Gerlach experiment - that is, that the magnetic moment or "spin" of an atom can only have discrete values - from which he derives something very close to the uncertainty principle. In the end Maxwell is able to derive the statistical interpretation of quantum mechanics for discrete degrees of freedom, such as spin, but not for continuous degrees of freedom, such as position or momentum. "[However, the] portions of the quantum mechanical formalism that are being used in the arguments come in through the front door, and not so surreptitiously that even the author is confused about what is assumed and what is derived," he writes, citing his own book on quantum mechanics. 
Gottfried also claims that decoherence - put simply, the process by which a quantum system, which can be in two or more states at the same time, produces a classical probability distribution - occurs naturally and that "there is no need for an external environment not included in the Schrödinger equation, nor a pyramid of devices which make laboratory demonstration of coherence effective impossible." Gottfried wrote the article as a "belated" response to an article entitled "Against 'measurement'" by the late John Bell that was published in Physics World in 1990. "I suspect that the extension of the argument to degrees of freedom with a continuous spectrum is not difficult," Gottfried told PhysicsWeb, "but I have not figured out how to do that."
Density functional theory From Wikipedia, the free encyclopedia Jump to: navigation, search Overview of method[edit] Derivation and formalism[edit] As usual in many-body electronic structure calculations, the nuclei of the treated molecules or clusters are seen as fixed (the Born–Oppenheimer approximation), generating a static external potential V in which the electrons are moving. A stationary electronic state is then described by a wavefunction \Psi(\vec r_1,\dots,\vec r_N) satisfying the many-electron time-independent Schrödinger equation \hat H \Psi = \left[{\hat T}+{\hat V}+{\hat U}\right]\Psi = \left[\sum_i^N \left(-\frac{\hbar^2}{2m_i}\nabla_i^2\right) + \sum_i^N V(\vec r_i) + \sum_{i<j}^N U(\vec r_i, \vec r_j)\right] \Psi = E \Psi where, for the \ N -electron system, \hat H is the Hamiltonian, \ E is the total energy, \hat T is the kinetic energy, \hat V is the potential energy from the external field due to positively charged nuclei, and \hat U is the electron-electron interaction energy. The operators \hat T and \hat U are called universal operators as they are the same for any \ N -electron system, while \hat V is system dependent. This complicated many-particle equation is not separable into simpler single-particle equations because of the interaction term \hat U . There are many sophisticated methods for solving the many-body Schrödinger equation based on the expansion of the wavefunction in Slater determinants. While the simplest one is the Hartree–Fock method, more sophisticated approaches are usually categorized as post-Hartree–Fock methods. However, the problem with these methods is the huge computational effort, which makes it virtually impossible to apply them efficiently to larger, more complex systems. Here DFT provides an appealing alternative, being much more versatile as it provides a way to systematically map the many-body problem, with \hat U , onto a single-body problem without \hat U . In DFT the key variable is the particle density n(\vec r), which for a normalized \,\!\Psi is given by n(\vec r) = N \int{\rm d}^3r_2 \cdots \int{\rm d}^3r_N \Psi^*(\vec r,\vec r_2,\dots,\vec r_N) \Psi(\vec r,\vec r_2,\dots,\vec r_N). This relation can be reversed, i.e., for a given ground-state density n_0(\vec r) it is possible, in principle, to calculate the corresponding ground-state wavefunction \Psi_0(\vec r_1,\dots,\vec r_N). In other words, \,\!\Psi is a unique functional of \,\!n_0,[9] \,\!\Psi_0 = \Psi[n_0] and consequently the ground-state expectation value of an observable \,\hat O is also a functional of \,\!n_0 O[n_0] = \left\langle \Psi[n_0] \left| \hat O \right| \Psi[n_0] \right\rangle. In particular, the ground-state energy is a functional of \,\!n_0 E_0 = E[n_0] = \left\langle \Psi[n_0] \left| \hat T + \hat V + \hat U \right| \Psi[n_0] \right\rangle where the contribution of the external potential \left\langle \Psi[n_0] \left|\hat V \right| \Psi[n_0] \right\rangle can be written explicitly in terms of the ground-state density \,\!n_0 V[n_0] = \int V(\vec r) n_0(\vec r){\rm d}^3r. More generally, the contribution of the external potential \left\langle \Psi \left|\hat V \right| \Psi \right\rangle can be written explicitly in terms of the density \,\!n, V[n] = \int V(\vec r) n(\vec r){\rm d}^3r. The functionals \,\!T[n] and \,\!U[n] are called universal functionals, while \,\!V[n] is called a non-universal functional, as it depends on the system under study. 
Having specified a system, i.e., having specified \hat V, one then has to minimize the functional E[n] = T[n]+ U[n] + \int V(\vec r) n(\vec r){\rm d}^3r with respect to n(\vec r), assuming one has got reliable expressions for \,\!T[n] and \,\!U[n]. A successful minimization of the energy functional will yield the ground-state density \,\!n_0 and thus all other ground-state observables. The variational problems of minimizing the energy functional \,\!E[n] can be solved by applying the Lagrangian method of undetermined multipliers.[12] First, one considers an energy functional that doesn't explicitly have an electron-electron interaction energy term, E_s[n] = \left\langle \Psi_s[n] \left| \hat T + \hat V_s \right| \Psi_s[n] \right\rangle where \hat T denotes the kinetic energy operator and \hat V_s is an external effective potential in which the particles are moving, so that n_s(\vec r)\ \stackrel{\mathrm{def}}{=}\ n(\vec r). Thus, one can solve the so-called Kohn–Sham equations of this auxiliary non-interacting system, \left[-\frac{\hbar^2}{2m}\nabla^2+V_s(\vec r)\right] \phi_i(\vec r) = \epsilon_i \phi_i(\vec r) which yields the orbitals \,\!\phi_i that reproduce the density n(\vec r) of the original many-body system n(\vec r )\ \stackrel{\mathrm{def}}{=}\ n_s(\vec r)= \sum_i^N \left|\phi_i(\vec r)\right|^2. The effective single-particle potential can be written in more detail as V_s(\vec r) = V(\vec r) + \int \frac{e^2n_s(\vec r\,')}{|\vec r-\vec r\,'|} {\rm d}^3r' + V_{\rm XC}[n_s(\vec r)] where the second term denotes the so-called Hartree term describing the electron-electron Coulomb repulsion, while the last term \,\!V_{\rm XC} is called the exchange-correlation potential. Here, \,\!V_{\rm XC} includes all the many-particle interactions. Since the Hartree term and \,\!V_{\rm XC} depend on n(\vec r ), which depends on the \,\!\phi_i, which in turn depend on \,\!V_s, the problem of solving the Kohn–Sham equation has to be done in a self-consistent (i.e., iterative) way. Usually one starts with an initial guess for n(\vec r), then calculates the corresponding \,\!V_s and solves the Kohn–Sham equations for the \,\!\phi_i. From these one calculates a new density and starts again. This procedure is then repeated until convergence is reached. A non-iterative approximate formulation called Harris functional DFT is an alternative approach to this. NOTE1: The one-to-one correspondence between electron density and single-particle potential is not so smooth. It contains kinds of non-analytic structure. E_s[n] contains kinds of singularities, cuts and branches. This may indicate a limitation of our hope for representing exchange-correlation functional in a simple analytic form. NOTE2: It is possible to extend the DFT idea to the case of Green function G instead of the density n. It is called as Luttinger-Ward functional (or kinds of similar functionals), written as E[G]. However,G is determined not as its minimum, but as its extremum. Thus we may have some theoretical and practical difficulties. NOTE3: There is no one-to-one correspondence between one-body density matrix n({\vec r},{\vec r}') and the one-body potential V({\vec r},{\vec r}'). (Remember that all the eigenvalues of n({\vec r},{\vec r}') is unity). In other words, it ends up with a theory similar as the Hartree-Fock (or hybrid) theory. Approximations (exchange-correlation functionals)[edit] The major problem with DFT is that the exact functionals for exchange and correlation are not known except for the free electron gas. 
However, approximations exist which permit the calculation of certain physical quantities quite accurately. In physics the most widely used approximation is the local-density approximation (LDA), where the functional depends only on the density at the coordinate where the functional is evaluated: E_{\rm XC}^{\rm LDA}[n]=\int\epsilon_{\rm XC}(n)n (\vec{r}) {\rm d}^3r. The local spin-density approximation (LSDA) is a straightforward generalization of the LDA to include electron spin: E_{\rm XC}^{\rm LSDA}[n_\uparrow,n_\downarrow]=\int\epsilon_{\rm XC}(n_\uparrow,n_\downarrow)n (\vec{r}){\rm d}^3r. Highly accurate formulae for the exchange-correlation energy density \epsilon_{\rm XC}(n_\uparrow,n_\downarrow) have been constructed from quantum Monte Carlo simulations of jellium.[13] Generalized gradient approximations[14][15][16] (GGA) are still local but also take into account the gradient of the density at the same coordinate: E_{XC}^{\rm GGA}[n_\uparrow,n_\downarrow]=\int\epsilon_{XC}(n_\uparrow,n_\downarrow,\vec{\nabla}n_\uparrow,\vec{\nabla}n_\downarrow) Using the latter (GGA) very good results for molecular geometries and ground-state energies have been achieved. Potentially more accurate than the GGA functionals are the meta-GGA functionals, a natural development after the GGA (generalized gradient approximation). Meta-GGA DFT functional in its original form includes the second derivative of the electron density (the Laplacian) whereas GGA includes only the density and its first derivative in the exchange-correlation potential. Functionals of this type are, for example, TPSS and the Minnesota Functionals. These functionals include a further term in the expansion, depending on the density, the gradient of the density and the Laplacian (second derivative) of the density. Difficulties in expressing the exchange part of the energy can be relieved by including a component of the exact exchange energy calculated from Hartree–Fock theory. Functionals of this type are known as hybrid functionals. Generalizations to include magnetic fields[edit] The DFT formalism described above breaks down, to various degrees, in the presence of a vector potential, i.e. a magnetic field. In such a situation, the one-to-one mapping between the ground-state electron density and wavefunction is lost. Generalizations to include the effects of magnetic fields have led to two different theories: current density functional theory (CDFT) and magnetic field density functional theory (BDFT). In both these theories, the functional used for the exchange and correlation must be generalized to include more than just the electron density. In current density functional theory, developed by Vignale and Rasolt,[11] the functionals become dependent on both the electron density and the paramagnetic current density. In magnetic field density functional theory, developed by Salsbury, Grayce and Harris,[17] the functionals depend on the electron density and the magnetic field, and the functional form can depend on the form of the magnetic field. In both of these theories it has been difficult to develop functionals beyond their equivalent to LDA, which are also readily implementable computationally. Recently an extension by Pan and Sahni [18] extended the Hohenberg-Kohn theorem for non constant magnetic fields using the density and the current density as fundamental variables. 
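The self-consistent Kohn–Sham cycle described above — guess a density, build the effective potential V_s, solve the single-particle equations, recompute the density, and iterate until convergence — can be sketched in a few lines. The toy code below is not from the source: it assumes a one-dimensional grid, a harmonic external potential, two electrons in one doubly occupied orbital, a soft-Coulomb Hartree term, and it sets the exchange-correlation potential to zero, so it illustrates only the structure of the self-consistency loop, not a real exchange-correlation functional.

```python
# Minimal 1D sketch of a self-consistent field cycle in the Kohn-Sham spirit.
# Assumptions (not from the source): two electrons in one doubly occupied orbital,
# harmonic external potential, soft-Coulomb Hartree term, V_xc = 0, atomic units.
import numpy as np

N, L = 201, 10.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

# Kinetic energy operator -(1/2) d^2/dx^2 by finite differences
T = (-0.5 / dx**2) * (np.diag(np.ones(N - 1), -1) - 2 * np.eye(N) + np.diag(np.ones(N - 1), 1))

v_ext = 0.5 * x**2                       # external potential V(r)
n = np.exp(-x**2)                        # initial guess for the density
n *= 2.0 / (n.sum() * dx)                # normalize to 2 electrons

def hartree(n):
    """Hartree potential from a soft-Coulomb interaction 1 / sqrt((x - x')^2 + 1)."""
    return np.array([np.sum(n / np.sqrt((xi - x)**2 + 1.0)) * dx for xi in x])

for it in range(200):
    v_s = v_ext + hartree(n)             # effective single-particle potential (V_xc = 0 here)
    eps, phi = np.linalg.eigh(T + np.diag(v_s))
    phi0 = phi[:, 0] / np.sqrt(dx)       # lowest orbital, normalized on the grid
    n_new = 2.0 * phi0**2                # density of the doubly occupied orbital
    if np.max(np.abs(n_new - n)) < 1e-6:
        break
    n = 0.7 * n + 0.3 * n_new            # linear mixing for stable convergence

print(f"lowest orbital energy after {it + 1} iterations: {eps[0]:.4f} Hartree")
```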
In general, density functional theory finds increasingly broad application in the chemical and material sciences for the interpretation and prediction of complex system behavior at an atomic scale. Specifically, DFT computational methods are applied for the study of systems exhibiting high sensitivity to synthesis and processing parameters. In such systems, experimental studies are often encumbered by inconsistent results and non-equilibrium conditions. Examples of contemporary DFT applications include studying the effects of dopants on phase transformation behavior in oxides, magnetic behaviour in dilute magnetic semiconductor materials and the study of magnetic and electronic behavior in ferroelectrics and dilute magnetic semiconductors.[19][20] In practice, Kohn–Sham theory can be applied in several distinct ways depending on what is being investigated. In solid state calculations, the local density approximations are still commonly used along with plane wave basis sets, as an electron gas approach is more appropriate for electrons delocalised through an infinite solid. In molecular calculations, however, more sophisticated functionals are needed, and a huge variety of exchange-correlation functionals have been developed for chemical applications. Some of these are inconsistent with the uniform electron gas approximation, however, they must reduce to LDA in the electron gas limit. Among physicists, probably the most widely used functional is the revised Perdew–Burke–Ernzerhof exchange model (a direct generalized-gradient parametrization of the free electron gas with no free parameters); however, this is not sufficiently calorimetrically accurate for gas-phase molecular calculations. In the chemistry community, one popular functional is known as BLYP (from the name Becke for the exchange part and Lee, Yang and Parr for the correlation part). Even more widely used is B3LYP which is a hybrid functional in which the exchange energy, in this case from Becke's exchange functional, is combined with the exact energy from Hartree–Fock theory. Along with the component exchange and correlation funсtionals, three parameters define the hybrid functional, specifying how much of the exact exchange is mixed in. The adjustable parameters in hybrid functionals are generally fitted to a 'training set' of molecules. Unfortunately, although the results obtained with these functionals are usually sufficiently accurate for most applications, there is no systematic way of improving them (in contrast to some of the traditional wavefunction-based methods like configuration interaction or coupled cluster theory). Hence in the current DFT approach it is not possible to estimate the error of the calculations without comparing them to other methods or experiments. Thomas–Fermi model[edit] The predecessor to density functional theory was the Thomas–Fermi model, developed independently by both Thomas and Fermi in 1927. They used a statistical model to approximate the distribution of electrons in an atom. The mathematical basis postulated that electrons are distributed uniformly in phase space with two electrons in every h^{3} of volume.[21] For each element of coordinate space volume d^{3}r we can fill out a sphere of momentum space up to the Fermi momentum p_f [22] \frac43\pi p_f^3(\vec{r}). 
Equating the number of electrons in coordinate space to that in phase space gives

n(\vec{r}) = \frac{8\pi}{3h^3}\, p_f^3(\vec{r}).

Solving for p_f and substituting into the classical kinetic-energy formula then leads directly to a kinetic energy represented as a functional of the electron density: the kinetic energy per electron scales as

t_{\rm TF}[n] = \frac{p_f^2}{2m_e} \propto \frac{\left(n^{1/3}(\vec{r})\right)^2}{2m_e} \propto n^{2/3}(\vec{r}),

so that

T_{\rm TF}[n] = C_F \int n(\vec{r})\, n^{2/3}(\vec{r})\, {\rm d}^3r = C_F \int n^{5/3}(\vec{r})\, {\rm d}^3r, \qquad C_F = \frac{3h^2}{10 m_e}\left(\frac{3}{8\pi}\right)^{2/3}.

As such, they were able to calculate the energy of an atom using this kinetic-energy functional combined with the classical expressions for the nucleus–electron and electron–electron interactions (which can both also be represented in terms of the electron density). Although this was an important first step, the Thomas–Fermi equation's accuracy is limited because the resulting kinetic-energy functional is only approximate, and because the method does not attempt to represent the exchange energy of an atom as a consequence of the Pauli principle. An exchange-energy functional was added by Dirac in 1928. However, the Thomas–Fermi–Dirac theory remained rather inaccurate for most applications. The largest source of error was in the representation of the kinetic energy, followed by the errors in the exchange energy, and due to the complete neglect of electron correlation. Teller (1962) showed that Thomas–Fermi theory cannot describe molecular bonding. This can be overcome by improving the kinetic-energy functional, for example by adding the Weizsäcker (1935) correction:[23][24]

T_W[n] = \frac{\hbar^2}{8m} \int \frac{|\nabla n(\vec{r})|^2}{n(\vec{r})}\, {\rm d}^3r.

(A short numerical illustration of T_{\rm TF} and T_W is sketched at the end of this section.)

Hohenberg–Kohn theorems[edit]

1. If two systems of electrons, one trapped in a potential v_1(\vec{r}) and the other in v_2(\vec{r}), have the same ground-state density n(\vec{r}), then necessarily v_1(\vec{r}) - v_2(\vec{r}) = {\rm const}.

Corollary: the ground-state density uniquely determines the potential and thus all properties of the system, including the many-body wavefunction. In particular, the "HK" functional, defined as F[n] = T[n] + U[n], is a universal functional of the density (not depending explicitly on the external potential).

2. For any positive integer N and potential v(\vec{r}), a density functional F[n] exists such that E_{(v,N)}[n] = F[n] + \int v(\vec{r})\, n(\vec{r})\, {\rm d}^3r obtains its minimal value at the ground-state density of N electrons in the potential v(\vec{r}). The minimal value of E_{(v,N)}[n] is then the ground-state energy of this system.

Pseudo-potentials[edit]

The many-electron Schrödinger equation can be very much simplified if electrons are divided in two groups: valence electrons and inner core electrons. The electrons in the inner shells are strongly bound and do not play a significant role in the chemical binding of atoms; they also partially screen the nucleus, thus forming with the nucleus an almost inert core. Binding properties are almost completely due to the valence electrons, especially in metals and semiconductors. This separation suggests that inner electrons can be ignored in a large number of cases, thereby reducing the atom to an ionic core that interacts with the valence electrons. The use of an effective interaction, a pseudopotential, that approximates the potential felt by the valence electrons was first proposed by Fermi in 1934 and Hellmann in 1935. In spite of the simplification pseudo-potentials introduce in calculations, they remained forgotten until the late 1950s.
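The Thomas–Fermi and von Weizsäcker functionals above lend themselves to a short numerical illustration. The sketch below (mine, not from the article) evaluates T_TF[n] and T_W[n] in Hartree atomic units (hbar = m_e = 1, for which C_F = (3/10)(3π²)^(2/3)), using a hydrogen-like 1s density purely as a toy example.

import numpy as np

C_F = 0.3 * (3.0 * np.pi**2) ** (2.0 / 3.0)   # Thomas-Fermi constant in atomic units, ~2.871

Z = 1.0
r = np.linspace(1e-6, 40.0, 400001)
dr = r[1] - r[0]
n = (Z**3 / np.pi) * np.exp(-2.0 * Z * r)      # 1s density, integrates to one electron
dn_dr = np.gradient(n, dr)                     # radial derivative of the density

shell = 4.0 * np.pi * r**2 * dr                # volume of each spherical shell
T_TF = np.sum(C_F * n ** (5.0 / 3.0) * shell)  # T_TF[n] = C_F * integral of n^(5/3)
T_W = np.sum(dn_dr**2 / (8.0 * n) * shell)     # T_W[n]  = (1/8) * integral of |grad n|^2 / n

print(T_TF)   # roughly 0.29 hartree for Z = 1
print(T_W)    # roughly 0.5 hartree: for a 1s density T_W reproduces the exact kinetic energy

The toy density and grid are arbitrary illustrative choices; the point is simply that both functionals are direct integrals over the density and its gradient, with no wavefunction required.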
Ab initio Pseudo-potentials

A crucial step toward more realistic pseudo-potentials was given by Topp and Hopfield and more recently Cronin, who suggested that the pseudo-potential should be adjusted such that it describes the valence charge density accurately. Based on that idea, modern pseudo-potentials are obtained by inverting the free-atom Schrödinger equation for a given reference electronic configuration and forcing the pseudo wavefunctions to coincide with the true valence wavefunctions beyond a certain distance r_l. The pseudo wavefunctions are also forced to have the same norm as the true valence wavefunctions, i.e.

R_l^{\rm PP}(r) = R_{nl}^{\rm AE}(r) \quad {\rm for}\ r > r_l,

\int_0^{r_l} |R_l^{\rm PP}(r)|^2\, r^2\, {\rm d}r = \int_0^{r_l} |R_{nl}^{\rm AE}(r)|^2\, r^2\, {\rm d}r,

where R_l(r) is the radial part of the wavefunction with angular momentum l, and PP and AE denote, respectively, the pseudo wavefunction and the true (all-electron) wavefunction. The index n in the true wavefunctions denotes the valence level. The distance r_l beyond which the true and the pseudo wavefunctions are equal is also l-dependent.

Software supporting DFT[edit]

DFT is supported by many quantum chemistry and solid-state physics software packages, often along with other methods.

See also[edit]

References[edit]

1. ^ Assadi, M. H. N.; et al. (2013). "Theoretical study on copper's energetics and magnetism in TiO2 polymorphs". Journal of Applied Physics 113 (23): 233913. arXiv:1304.1854. Bibcode:2013JAP...113w3913A. doi:10.1063/1.4811539.
2. ^ Van Mourik, Tanja; Gdanitz, Robert J. (2002). "A critical note on density functional theory studies on rare-gas dimers". Journal of Chemical Physics 116 (22): 9620–9623. Bibcode:2002JChPh.116.9620V. doi:10.1063/1.1476010.
3. ^ Vondrášek, Jiří; Bendová, Lada; Klusák, Vojtěch; Hobza, Pavel (2005). "Unexpectedly strong energy stabilization inside the hydrophobic core of small protein rubredoxin mediated by aromatic residues: correlated ab initio quantum chemical calculations". Journal of the American Chemical Society 127 (8): 2615–2619. doi:10.1021/ja044607h. PMID 15725017.
4. ^ Grimme, Stefan (2006). "Semiempirical hybrid density functional with perturbative second-order correlation". Journal of Chemical Physics 124 (3): 034108. Bibcode:2006JChPh.124c4108G. doi:10.1063/1.2148954. PMID 16438568.
5. ^ Zimmerli, Urs; Parrinello, Michele; Koumoutsakos, Petros (2004). "Dispersion corrections to density functionals for water aromatic interactions". Journal of Chemical Physics 120 (6): 2693–2699. Bibcode:2004JChPh.120.2693Z. doi:10.1063/1.1637034. PMID 15268413.
7. ^ Von Lilienfeld, O. Anatole; Tavernelli, Ivano; Rothlisberger, Ursula; Sebastiani, Daniel (2004). "Optimization of effective atom centered potentials for London dispersion forces in density functional theory". Physical Review Letters 93 (15): 153004. Bibcode:2004PhRvL..93o3004V. doi:10.1103/PhysRevLett.93.153004. PMID 15524874.
8. ^ Tkatchenko, Alexandre; Scheffler, Matthias (2009). "Accurate Molecular Van Der Waals Interactions from Ground-State Electron Density and Free-Atom Reference Data". Physical Review Letters 102 (7): 073005. Bibcode:2009PhRvL.102g3005T. doi:10.1103/PhysRevLett.102.073005. PMID 19257665.
9. ^ a b Hohenberg, Pierre; Kohn, Walter (1964). "Inhomogeneous electron gas". Physical Review 136 (3B): B864–B871. Bibcode:1964PhRv..136..864H. doi:10.1103/PhysRev.136.B864.
10. ^ Levy, Mel (1979).
"Universal variational functionals of electron densities, first-order density matrices, and natural spin-orbitals and solution of the v-representability problem". Proceedings of the National Academy of Sciences (United States National Academy of Sciences) 76 (12): 6062–6065. Bibcode:1979PNAS...76.6062L. doi:10.1073/pnas.76.12.6062.  11. ^ a b Vignale, G.; Mark Rasolt (1987). "Density-functional theory in strong magnetic fields". Physical Review Letters (American Physical Society) 59 (20): 2360–2363. Bibcode:1987PhRvL..59.2360V. doi:10.1103/PhysRevLett.59.2360. PMID 10035523.  12. ^ Kohn, W.; Sham, L. J. (1965). "Self-consistent equations including exchange and correlation effects". Physical Review 140 (4A): A1133–A1138. Bibcode:1965PhRv..140.1133K. doi:10.1103/PhysRev.140.A1133.  13. ^ John P. Perdew, Adrienn Ruzsinszky, Jianmin Tao, Viktor N. Staroverov, Gustavo Scuseria and Gábor I. Csonka (2005). "Prescriptions for the design and selection of density functional approximations: More constraint satisfaction with fewer fits". Journal of Chemical Physics 123 (6): 062201. Bibcode:2005JChPh.123f2201P. doi:10.1063/1.1904565. PMID 16122287.  14. ^ Perdew, John P; Chevary, J A; Vosko, S H; Jackson, Koblar, A; Pederson, Mark R; Singh, D J; Fiolhais, Carlos (1992). "Atoms, molecules, solids, and surfaces: Applications of the generalized gradient approximation for exchange and correlation". Physical Review B 46 (11): 6671. Bibcode:1992PhRvB..46.6671P. doi:10.1103/physrevb.46.6671.  15. ^ Becke, Axel D (1988). "Density-functional exchange-energy approximation with correct asymptotic behavior". Physical Review A 38 (6): 3098. Bibcode:1988PhRvA..38.3098B. doi:10.1103/physreva.38.3098.  16. ^ Langreth, David C; Mehl, M J (1983). "Beyond the local-density approximation in calculations of ground-state electronic properties". Physical Review B 28 (4): 1809. Bibcode:1983PhRvB..28.1809L. doi:10.1103/physrevb.28.1809.  17. ^ Grayce, Christopher; Robert Harris (1994). "Magnetic-field density-functional theory". Physical Review A 50 (4): 3089–3095. Bibcode:1994PhRvA..50.3089G. doi:10.1103/PhysRevA.50.3089. PMID 9911249.  18. ^ Viraht, Xiao-Yin (2012). "Hohenberg-Kohn theorem including electron spin". Physical Review A 86. Bibcode:1994PhRvA.86.042502. doi:10.1103/physreva.86.042502.  19. ^ Segall, M.D.; Lindan, P.J (2002). "First-principles simulation: ideas, illustrations and the CASTEP code". Journal of Physics: Condensed Matter 14 (11): 2717. Bibcode:2002JPCM...14.2717S. doi:10.1088/0953-8984/14/11/301.  20. ^ "Ab initio study of phase stability in doped TiO2". Computational Mechanics 50 (2): 185–194. 2012. doi:10.1007/s00466-012-0728-4.  21. ^ (Parr & Yang 1989, p. 47) 22. ^ March, N. H. (1992). Electron Density Theory of Atoms and Molecules. Academic Press. p. 24. ISBN 0-12-470525-1.  23. ^ Weizsäcker, C. F. v. (1935). "Zur Theorie der Kernmassen". Zeitschrift für Physik 96 (7–8): 431–58. Bibcode:1935ZPhy...96..431W. doi:10.1007/BF01337700.  24. ^ (Parr & Yang 1989, p. 127) Key papers[edit] External links[edit]
Brazilian Journal of Physics, Print version ISSN 0103-9733, Braz. J. Phys. vol. 29, n. 3, São Paulo, Sept. 1999

Variational description of the 3-body Coulomb problem through a correlated Eckart-Gaussian wavefunction

A. Flores-Riveros and J. F. Rivas-Silva
Instituto de Física, Benemérita Universidad Autónoma de Puebla, Apartado Postal J-48, 72570 Puebla, Pue., Mexico

Received 26 May, 1999

The quantum mechanical problem posed by the internal motion of three particles subject to Coulomb interactions is variationally solved by means of an Eckart-Gaussian (EG) ansatz that exhibits an exponential behavior with respect to the radial coordinates {r_1, r_2}, and a harmonic Gaussian-type dependence on the interparticle distance r_{12}, thereby providing explicit correlation. The proposed wavefunction is of the form (e^{-a_1 r_1 - b_1 r_2} + e^{-b_2 r_1 - a_2 r_2})\, r_{12}^l\, e^{-g(r_{12}-u_0)^2}, through which ground state energies are calculated for a few two-electron atoms (considering finite nuclear mass effects) and molecular ions corresponding to electronic and mesonic systems. The physical interpretation and advantages of the EG wavefunction are discussed in terms of the relative masses of the particles in the analyzed systems. A useful application of the variational method is presented where the underlying structure of the 3-body wavefunction combines an atomic- and a molecular-like description of the system. The obtained energies agree with the exact results within 10^{-4} to 10^{-2} Hartrees.

I  Introduction

Three particles interacting via Coulomb forces represent a fundamental problem in quantum mechanics whose approximate solution provides some insight into the more complex analysis associated with few-body problems. Three-body Coulomb systems comprise a variety of diatomic molecular ions, e.g. hydrogenic and their isotopic species like H2+ [1], HD+, HT+ and DT+ [2], as well as exotic systems of interest in muon catalyzed fusion such as ppm+, ddm+, dtm+ and ttm+ [3]. Coulomb systems involving three particles relate also to the analysis of matter-antimatter coexistence, as that rendered by the experimental observation of antiprotonic helium (p̄He+) [4] and the formation of positronium ions Ps- (e-e-e+) through collisions of positrons with atomic hydrogen [5] and other experimental techniques [6]. The study of two-electron atoms including the effect of finite nuclear mass to investigate bound [7] and resonant structure [8] involves the quantum mechanical analysis of three particles undergoing electrostatic interactions. H2+, being the simplest molecular species, has been the subject of numerous studies to illustrate the separation of electronic and nuclear motion as prescribed by the Born-Oppenheimer approximation [9], and it has been analyzed under adiabatic [10] and nonadiabatic treatments, both within CI (Configuration Interaction) schemes [11] and by means of correlated wavefunctions variationally optimized [12]. In connection with the latter, attempting accurate descriptions of ground and excited state properties for 3-particle systems involves multiparameter set optimizations that often represent a challenging numerical task.
Therefore, a suitable choice of the trial wavefunction and an efficient handling of the numerical optimization are important aspects to consider. The earliest attempts to describe two-electron systems in a nonadiabatic fashion, within the infinitely heavy nucleus approximation, led to trial wavefunctions of the form

\Psi = N\, e^{-a(r_1 + r_2)}\, P(r_1, r_2, r_{12}),

where N is a normalization constant and a is a variational parameter. Correlation is thus introduced via a polynomial function P(r_1, r_2, r_{12}) that depends on the interelectronic distance r_{12} = |\vec{r}_1 - \vec{r}_2| as a third coordinate, in addition to the electron-nucleus distances r_1 and r_2. Ever since, Hylleraas coordinates,

s = r_1 + r_2, \quad t = r_1 - r_2, \quad u = r_{12},

have been widely used to expand wavefunctions variationally optimized in basis sets [13] that depend on these variables. Three-particle systems for electronic (including finite-size corrections) [7a,7b] and mesomolecular species [14] have been described through correlated Slater-geminals,

\Psi = \sum_k C_k\, e^{-a_k r_1 - b_k r_2 - g_k r_{12}},

where the coefficients C_k are found by solving the secular equation (Rayleigh-Ritz variational method) and the exponents a_k, b_k and g_k are variationally optimized within selected samples of parameter sets. For each expansion the procedure involves optimization of six parameters to be varied on pseudorandom sequences. Wavefunctions expanded in generalized Hylleraas basis sets [15],

\Psi = \sum_k C_k\, r_1^{n_k} r_2^{m_k} r_{12}^{l_k}\, e^{-a r_1 - b r_2 - c r_{12}},

have also been utilized for the variational description of 3-body Coulomb systems, where optimization techniques similar to those of correlated Slater-geminals are followed. With the above wavefunctions relatively low expansions are sufficient to accomplish reasonably accurate energies to describe ground and low excited states of electronic molecular ions. By contrast, obtaining the energy spectrum for mesomolecular systems (e.g. ddm+, dtm+ and ttm+), where m is the binding particle, becomes a remarkably more difficult task. This is essentially due to the presence of the large muon mass, outweighing that of electrons by more than two orders of magnitude (m_m = 207 m_e), which makes these systems strongly nonadiabatic. Indeed, mesomolecular species, which play a key role in muon catalyzed fusion processes, consist of two isotopic hydrogen nuclei and a muon, all three tightly bound via Coulomb interactions, where high vibrational energies overcome the electrostatic repulsion, driving the nuclei to such short distances from each other (in fact, within the range of the strong interaction) that fusion eventually occurs. From a theoretical point of view, these molecular ions were originally analyzed through wavefunctions expressed in adiabatic expansions [16], and later nonadiabatically approached by means of correlated basis sets [15]. In either treatment, considerably large expansions are necessary to reach convergence within an accuracy of fractions of millielectronvolts. It would thus be desirable to construct a wavefunction giving a high accuracy with a reduced expansion length. As a first step to this end, we here propose an ansatz that combines an atomic- and a molecular-like character,

\Psi_{\rm EG} = \left(e^{-a_1 r_1 - b_1 r_2} + e^{-b_2 r_1 - a_2 r_2}\right) r_{12}^l\, e^{-g(r_{12}-u_0)^2}, \qquad (6)

where a_1 = pa, b_1 = qb, a_2 = qa, b_2 = pb, and a, b, g and u_0 are variational parameters, whereas p and q are asymmetry factors [Eq. (7), given in the original as an image] that depend on the masses of particles 1 and 2, whose distances to particle 3 are denoted by r_1 and r_2, respectively. Hence, for homonuclear systems (m_1 = m_2; p = q = 1) the above wavefunction is symmetric under exchange of these coordinates, i.e.
\Psi_{\rm EG}(r_1, r_2) = \Psi_{\rm EG}(r_2, r_1), whereas for heteronuclear systems (m_1 > m_2) it is asymmetric: \Psi_{\rm EG}(r_1, r_2) \neq \Psi_{\rm EG}(r_2, r_1). (Note that the operation r_1 \leftrightarrow r_2 is not to be applied on the labels of masses m_1 and m_2, because this would lead to a symmetric combination for either homonuclear or heteronuclear systems.) The atomic-like character of \Psi_{\rm EG} is clearly associated with the symmetric (or asymmetric) combination of exponential functions, whereas the Gaussian factor r_{12}^l\, e^{-g(r_{12} - u_0)^2} denotes a harmonic oscillator-type function that describes vibrational motion on the interparticle coordinate r_{12}, around an equilibrium distance u_0. The first part of the wavefunction,

e^{-a_1 r_1 - b_1 r_2} + e^{-b_2 r_1 - a_2 r_2},

corresponds to a generalized Eckart function [17], which was the earliest variational uncorrelated ansatz (symmetric combination) utilized to describe two-electron atoms, assuming an infinitely heavy nuclear mass; through separate screening factors on each coordinate a more flexible description of the atom is attained, as compared to the simpler function e^{-a(r_1 + r_2)}. In order to assess the accuracy of the here proposed Eckart-Gaussian wavefunction, \Psi_{\rm EG}, we set out to make a systematic comparison with that obtained through a generalized Hylleraas function,

\Psi_{\rm GH} = \left(e^{-a_1 r_1 - b_1 r_2} + e^{-b_2 r_1 - a_2 r_2}\right) r_1^n r_2^m r_{12}^l\, e^{-c\, r_{12}}, \qquad (10)

which is the most similar to the former that has been utilized in a correlated description of 3-body Coulomb systems. (The set {a_1, b_1, a_2, b_2} relates to variational parameters a and b via asymmetry mass-dependent factors in the same fashion as given for the EG function.) In this report we compare variational ground state energies as obtained for the Eckart-Gaussian and generalized Hylleraas ansätze, Eqs. (6) and (10), respectively, for a variety of atomic and molecular species comprising three charged particles subject to electrostatic interactions; a small numerical sketch of how the EG ansatz can be evaluated is included below, after the Hamiltonian is introduced.

II  Theory

We consider the nonrelativistic 3-body Coulomb Hamiltonian (in atomic units),

H = -\frac{1}{2 m_{13}} \nabla_1^2 - \frac{1}{2 m_{23}} \nabla_2^2 - \frac{1}{m_3}\, \vec{\nabla}_1 \cdot \vec{\nabla}_2 + \frac{z_1 z_3}{r_1} + \frac{z_2 z_3}{r_2} + \frac{z_1 z_2}{r_{12}}, \qquad (11)

where the m_{ij}'s refer to reduced masses of particles i and j (it is assumed that m_1 \geq m_2), m_3 denotes the mass of particle 3 and z_1, z_2, z_3 are the charges of particles 1, 2 and 3, respectively. The masses involved in the analyzed systems are given in atomic units: m_e = 1.0, m_m = 206.7686, m_p = 1836.1515, m_d = 3670.481, m_t = 5496.899 and m_a = 7294.295, corresponding to the electron, muon, proton, deuteron, triton and alpha (helium nucleus) particles, respectively. The mass-polarization term gives the finite nuclear mass correction (proportional to the scalar product of \vec{\nabla}_1 and \vec{\nabla}_2), which for an atomic Hamiltonian is absent in the infinitely heavy nucleus approximation (m_3 \to \infty). When expressed in Hylleraas coordinates the above Hamiltonian takes the form of Eq. (12) [given in the original as an image], where \hat{r}_1, \hat{r}_2 and \hat{r}_{12} denote the unit vectors for the distances between particles 1-3, 2-3 and 1-2, respectively.
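As a concrete handle on the EG ansatz of Eq. (6), the following sketch (mine, not from the paper) simply evaluates \Psi_{\rm EG} at given interparticle distances. The exponents a_1, b_1, a_2, b_2 (i.e. the underlying a, b together with the mass-dependent factors p and q), the power l and the Gaussian parameters g, u_0 are treated as plain numerical inputs, with illustrative values only.

import numpy as np

def psi_EG(r1, r2, r12, a1, b1, a2, b2, l, g, u0):
    # Eckart-Gaussian trial function of Eq. (6)
    eckart = np.exp(-a1 * r1 - b1 * r2) + np.exp(-b2 * r1 - a2 * r2)
    gaussian = r12**l * np.exp(-g * (r12 - u0) ** 2)
    return eckart * gaussian

# homonuclear example (p = q = 1, so a1 = a2 and b1 = b2): the function is then
# symmetric under exchange of r1 and r2, as stated in the text
a, b, l, g, u0 = 1.2, 0.8, 1, 0.3, 2.0
print(psi_EG(1.0, 1.5, 2.2, a, b, a, b, l, g, u0))
print(psi_EG(1.5, 1.0, 2.2, a, b, a, b, l, g, u0))   # identical by symmetry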
The expectation value of the Hamiltonian with respect to the EG and GH trial functions,

E = \frac{\langle\Psi|H|\Psi\rangle}{\langle\Psi|\Psi\rangle}, \qquad (13)

leads to integrals of the form of Eq. (14) [given in the original as an image], where the differential volume is

dV = 8\pi^2\, r_1 r_2 r_{12}\, dr_1\, dr_2\, dr_{12}. \qquad (15)

The above integral can be expressed in terms of Hylleraas coordinates s, t and u,

0 \leq s \leq \infty; \quad 0 \leq u \leq s; \quad -u \leq t \leq u, \qquad (16)

which relate to r_1, r_2 and r_{12} as

r_1 = \frac{s+t}{2}, \quad r_2 = \frac{s-t}{2}, \quad r_{12} = u. \qquad (17)

We thus obtain a transformed integral [Eqs. (18) and (19), given in the original as images]. It is straightforward to prove that an integral of the form G_{NML} [Eq. (20), image] can be written in closed form [Eq. (21), image] for B \neq 0, and [Eq. (22), image] for B = 0. G_{NML} (and therefore F_{NML}) can be evaluated analytically for C_2 = 0 (GH functions). Otherwise (EG functions), one deals with the improper integrals

I_N = \int_0^\infty x^N\, e^{-p_1 x - p_2 x^2}\, dx, \qquad (23)

which can be calculated through the recursion formula

2 p_2 I_{N+1} + p_1 I_N - N I_{N-1} = 0; \quad N \geq 1. \qquad (24)

(A short numerical check of this recursion is given below, following Table 1.) The ones of lowest order are given by

I_0 = \frac{1}{2}\sqrt{\frac{\pi}{p_2}}\; e^{p_1^2/(4 p_2)}\; {\rm erfc}\!\left(\frac{p_1}{2\sqrt{p_2}}\right), \qquad I_1 = \frac{1 - p_1 I_0}{2 p_2}, \qquad (25)

where erfc(z) denotes the complementary error function,

{\rm erfc}(z) = \frac{2}{\sqrt{\pi}} \int_z^\infty e^{-t^2}\, dt, \qquad (26)

which can be calculated for any argument with standard numerical methods.

III  Results and Discussion

Energies for the EG and GH trial wavefunctions were varied with respect to the four nonlinear parameters {a, b, g, u_0} and the three nonlinear parameters {a, b, c}, respectively, by means of an algorithm based on a numerical quasi-Newton method to minimize a multivariable function. Optimization for either case took a negligibly short processing time. Converged optimal values were attained within a gradient magnitude in the range 10^{-6}-10^{-5}, for which analytical expressions for the energy gradients were also calculated. For the sake of simplicity, powers {n, m, l} for the corresponding coordinates r_1, r_2 and r_{12}, in both trial functions, were chosen so as to fulfill the condition n + m + l \leq 2. Throughout all calculated systems, the optimal powers, giving the lowest variational energies for the two wavefunctions commensurate with this condition, were found to consistently span the set n = m = 0 and l either equal to 0, 1 or 2. Thus, the actual ansätze that we here investigate are of the form

\Psi_{\rm EG} = \left(e^{-a_1 r_1 - b_1 r_2} + e^{-b_2 r_1 - a_2 r_2}\right) r_{12}^l\, e^{-g(r_{12} - u_0)^2} \qquad (27)

(as stated in the abstract of this report) and

\Psi_{\rm GH} = \left(e^{-a_1 r_1 - b_1 r_2} + e^{-b_2 r_1 - a_2 r_2}\right) r_{12}^l\, e^{-c\, r_{12}}. \qquad (28)

In Table 1 variationally optimized ground state energies are given for 3-body Coulomb systems classified, for interpretive purposes, in three classes: two-electron atoms, molecular electronic ions and mesomolecular species.

Table 1: In descending order, the three frames contain data for two-electron atoms, molecular electronic ions and mesomolecular species, respectively. The optimal energy (E) and the energy difference (E - E_ex) corresponding to the EG and GH trial functions are given on the first and second row, respectively, for each entry. The ratio between the binding-particle mass (m_3) and that of the heavier of particles 1 and 2 (m_1), i.e. m_3/m_1, is listed, as is the asymmetry ratio (m_3/m_1)/(m_1+m_2), which relates to the asymmetric mass-dependent factors p and q (see definition in text) as (m_3/m_1)/(m_1+m_2) = p - 1 = 1 - q. Exact results (E_ex): a[7b], b[7c], c[7a], d[20], e[14b]. Energy for two-electron atoms and electronic molecules is given in Hartrees (atomic units: \hbar = e = m_e = 1), whereas for mesomolecular systems it is given in natural muonic units (\hbar = e = m_{13} = 1, i.e. energy is calculated by normalizing the masses m_{13}, m_{23} and m_3 to the first of these in the Hamiltonian).

Ground state energies for 3-body Coulomb systems optimized with EG and GH trial functions. [Table given in the original as an image.]

As seen in this table, the EG function consistently yields lower variational energies than those obtained with function GH, except for the muonic molecule ppm+. It is interesting to note that for the two-electron atoms here analyzed the molecular description gives a better result. In this case, the vibrational part of function EG describes the electronic motion within the Coulomb field created by the binding particle, which is a positively charged nucleus. In a way, this is the opposite view of the Born-Oppenheimer approximation, where the fast motion of the light particles among themselves proceeds in the presence of a heavy body, although in a nonadiabatic fashion since no distance is ever fixed to solve the Schrödinger equation. However, the optimal energies obtained for the two functions are very similar, thus indicating that the intrinsically atomic part of the EG wavefunction provides an essential contribution, where by comparison the vibrational ansatz barely improves the overall description. Surprisingly, this trend shows up even for the lightest atomic ion, Ps- (e-e-e+), where the three particles possess the same mass and the ground state is very diffuse, albeit in this case we obtain the most alike optimal energies for the two functions, i.e. we deal with the most atomic-like molecule of all. As also shown in the same table, ground state energies for electronic molecular ions, as obtained with function EG, are the most accurate of all homonuclear systems here analyzed (including the exotic molecular species Mu2+ (m+m+e-)), where the slow motion of the heavy particles in the presence of an electron is adequately described via the molecular ansatz. In contrast to the previous case, the atomic part of the wavefunction plays a less significant role in improving the global description of these molecular species, as inferred from the energies optimized through function GH, which remain on average 0.03 Hartrees off the exact values, as compared to the substantially more accurate energies that the EG wavefunction yields (being 3 × 10^{-4} to 10^{-3} Hartrees above the exact nonrelativistic values). In fact, for molecules with increasingly heavier nuclei a higher accuracy in the energies is accomplished. Accordingly, the description of vibrational internuclear motion correspondingly becomes a more important feature. Kolos et al. [12a] report a ground state energy of -0.58305 Hartrees for H2+ through a 32-term GH expansion, which compared to that obtained with our one-term EG function, -0.59643 Hartrees, clearly indicates the latter's higher variational accuracy. Mesomolecular species present the most challenging systems upon variational analysis. These molecules are formed under extreme conditions of chemical confinement, comprising three massive particles tightly bound, where atomic and molecular characters are strongly intermingled. Their accurate description thus calls for an approach where both features are explicitly accounted for. The optimal energies obtained through functions EG and GH, as seen in Table 1, clearly show that a purely atomic description, despite the built-in correlation in the wavefunction, does not lead to a higher variational accuracy.
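As a supplement to Section II (my own check, not part of the paper), the three-term recursion for the improper integrals I_N quoted there can be verified directly against numerical quadrature, using the closed forms of I_0 and I_1 in terms of erfc given above; the parameter values are arbitrary.

import numpy as np
from scipy.integrate import quad
from scipy.special import erfc

p1, p2 = 1.3, 0.7

def I_quad(N):
    # brute-force quadrature of I_N = int_0^inf x^N exp(-p1*x - p2*x^2) dx
    return quad(lambda x: x**N * np.exp(-p1 * x - p2 * x**2), 0.0, np.inf)[0]

# closed forms for the lowest orders, Eq. (25)
I0 = 0.5 * np.sqrt(np.pi / p2) * np.exp(p1**2 / (4 * p2)) * erfc(p1 / (2 * np.sqrt(p2)))
I1 = (1.0 - p1 * I0) / (2.0 * p2)

# climb the recursion (24): I_{N+1} = (N*I_{N-1} - p1*I_N) / (2*p2)
I = [I0, I1]
for N in range(1, 8):
    I.append((N * I[N - 1] - p1 * I[N]) / (2.0 * p2))

for N, val in enumerate(I):
    print(N, val, I_quad(N))   # the two columns agree to quadrature accuracy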
With the exception of ppm+, the EG function yields a lower energy for homonuclear as well as heteronuclear molecules, especially so for the heaviest species ddm+, dtm+ and ttm+. Note that for these three the energies obtained with both trial functions differ to a greater extent than those optimized for the two-electron atoms. This is consistent with our conjecture: the vibrational ansatz plays a more important role in highly confined Coulomb systems, like mesomolecules. It is interesting to note that the one case where the GH function gives a marginally lower energy, ppm+, corresponds to the mesonic molecule where nuclei and binding particle have the least unequal masses (m3/m1 = 0.11). For heteronuclear systems the present findings show a lesser degree of accuracy, the EG function being nevertheless variationally superior. The description of three different particles becomes a more difficult task, where one no longer has the advantage of using a symmetric wavefunction. These systems are usually described through unsymmetrized wavefunctions, i.e. we could have chosen trial functions of the form 529i26.gif (518 bytes) 529i27.gif (448 bytes) instead of the here proposed EG and GH wavefunctions, respectively. However, at the present variational level where only one-term functions are analyzed, the above ansätze were found-in the course of test calculations-to give remarkably poor energies upon optimization. The asymmetry we introduce in the wavefunctions is more advantageous since it preserves the same structure for symmetric and asymmetric combinations, and depends on the particle masses of the specific heteronuclear system under study. The latter is thus physically described in a more meaningful way since the degree of asymmetry is dictated by its particular characteristics. The optimal parameters for the EG and GH wavefunctions, associated with true minima corresponding to vanishing energy gradient (|529i32.gif (56 bytes)E| £ 10-5), are given in Table 2. In general, the pair of linear exponents for each function, {a,b} and {a,b}, respectively, are very similar, which is to be expected since they correspond to the same Eckart-type function for either variational ansatz. For most of two-electron atoms optimal parameter c is negative and thus factor e-cr12 yields a sizable contribution, which physically reflects the fact of having two fermions with paired spins (thus spin-uncorrelated) that spatially become highly correlated in the 1s-orbital. This feature is apparent even for positronium, Ps-, though in a less pronounced fashion since the corresponding optimal parameter c is in this case positive and factor e-cr12 is thus smaller in magnitude. This is consistent with a highly diffuse ground state that this atom is known to possess, i.e. the electrons are far less confined in the 1s-orbital within the positron Coulomb field than in the presence of a heavy nucleus. EG function's optimal parameter g is fairly small for two-electron atoms where through a minimal contribution of the Gaussian factor e-g r122 (g << 1 Þ e-g r122 ~ 1), the ansatz allows for a prevalence of atomic character, leading thereof to their most suitable representation. By contrast, this factor becomes substantially more important for the electronic molecular ions where a large optimal parameter g points to an enhanced vibrational and localized character, essential for the appropriate description of nuclear motion in these systems. 
For mesomolecular species, optimal parameter g becomes smaller by comparison, though not as small as in the case of two-electron atoms, where the EG function yields atomic and molecular character to roughly the same extent. This is in line with the interpretation above discussed regarding the physical features of mesonic molecules. Table 2: First and second row on each entry contain the optimal power of r12 (l) and optimized nonlinear parameters for YEG and YGH functions, respectively. Optimal parameters for EG and GH trial functions as obtained for 3-body Coulomb systems. 529t2.gif (16490 bytes) In order to further test the accuracy of the here proposed EG trial function, we have performed a series of calculations for all systems considered in Table 1 by using 4- and 10-term Hylleraas trial functions, expressed in coordinates s, t and u, which represent classical ansätze for the variational description of two-electron atom ground states, proposed by Hylleraas in his early work [18]. We point out that the calculations here performed were obtained by using the full 3-body Coulomb Hamiltonian (see Eq. (11)), and not within the infinitely heavy nucleus approximation (adopted in Hylleraas' approach). In Table 3 it is shown the accuracy of our EG function for two-electron atoms and molecular ions in comparison to that of the 4- and 10-term Hylleraas functions, 529i28.gif (1311 bytes) for homonuclear systems, and 529i29.gif (1262 bytes) for heteronuclear species. The expansions for the former are symmetric under exchange of coordinates r1 and r2, i.e. s ® s, t ® -t and u ® u, thus, s, u and t2 are invariant under this operation and YS (s,t,u) = YS (s,-t,u), whereas those for the latter are correspondingly asymmetric, i.e. terms containing t reverse sign and therefore YA(s,t,u) ¹ YA(s,-t,u). These particular expansions were chosen because, firstly, the 4-term function contains the same number of variational parameters (h, c1, C2 and c3) as our one-term EG trial function, which is spanned by four nonlinear parameters, and their variational ability can thus be compared on fair grounds (notice also that both trial functions contain the correlation factor u = r12). Secondly, the inclusion of 10 terms should, to a reasonable extent, provide information on the variational ability of a given basis set for a minimal expansion. The variational energies here given were fully optimized and correspond to an energy gradient magnitude averaging ~ 10-6 throughout, for which analytical expressions of the gradient were used. Table 3: Optimized energies for atomic and molecular species. Energies E4 and E10 correspond to those obtained with YS4,A4 and YS10,A10 Hylleraas trial functions, whereas EEG refers to that of EG trial function's (as given in Table 1). Energy differences (E-Eex) are given on the second row for each entry. Ground state energies for 3-body Coulomb systems optimized with 4- and 10-term Hylleraas functions as well as EG trial function. 529t3.gif (14251 bytes) From these results it is clear that the atomic systems are unfavorably described by our one-term EG function in comparison to the Hylleraas expansions (although it es marginally more accurate than function YS4 for Mu-, H-, D- and T-), which is not surprising since the EG function's vibrational part (e-g(r12-u0)2) becomes inadequate and unphysical when associated with the lightest particles of the system (r12 corresponds to interelectronic distance), in the presence of a third massive body (nucleus). 
By contrast, the feature that for two-electron atoms is unphysical becomes precisely what is desirable in molecular ions since the vibrational part of function EG is in this case associated with the nuclear motion (r12 corresponds to internuclear distance), which is also in line with the fundamental concept on which the Born-Oppenheimer approximation is based: The relatively slow motion of the nuclei proceeds favorably in the presence of a light binding particle (electron or muon). Results in Table 3 unquestionably show that the electronic molecules and mesomolecular species here analyzed are far better described by the one-term EG function than by any of the Hylleraas expansions. In this report, rather than describing two-electron atoms, we are aimed at improving a variational description for electronic and muonic molecular species, most especially the latter, which are known to require large correlated expansions to achieve a reasonable accuracy. We believe that the calculations here presented indicate that, for the latter systems, EG expansions are likely to attain a high variational accuracy upon relatively low expansions, at least lower than those that basis sets expressed in Hylleraas or GH-type trial functions would probably need. IV  Concluding Remarks A trial wavefunction combining atomic and molecular character via an Eckart-type function times a correlated Gaussian ansatz has been proposed for the variational description of 3-body Coulomb systems. We have demonstrated the accuracy of this function by performing a systematic comparison with that obtained through a generalized Hylleraas basis consisting of one term as well as 4-term and 10-term Hylleraas functions. When applied on different atomic and molecular species, their variational precision is shown to be dependent on the relative masses between the binding particle and the the other two connected via interparticle distance r12, where in general the EG wavefunction is found to be more accurate for molecular systems. The full variational ability of these functions must of course be established through a convergence analysis upon systematic increase of basis set expansions, performed e.g. via pseudorandom sequences spanned on variationally optimized intervals for the nonlinear parameters. Such intervals can be generated through random tempering formulas for selected low-order EG expansions. The latter deliver core functions over which a systematic increase of basis sets is performed, while keeping the nonlinear parameters fixed. This procedure is quite feasible and has previously been applied in the optimization of Slater-geminals, as mentioned in the introductory section. One of the present authors has recently utilized a similar procedure to optimize generalized Hylleraas-Gaussian basis sets applied to the variational description of two-electron atoms (within the infinitely heavy nucleus approximation) [19]. A convergence analysis for Eckart-Gaussian basis sets to describe bound structure of mesomolecular systems will be the subject of future investigation. Financial support provided by Sistema Nacional de Investigadores (SNI) and partial provision of funds by Consejo Nacional de Ciencia y Tecnología (CONACYT) under project 1349 P-E, are gratefully acknowledged. 1. (a) S. Cohen, J. R. Hiskes and R. J. Riddell Jr., Phys. Rev. 119, 1025 (1960);         [ Links ](b) C. L. Beckel, M. Shafi and J. M. Peek, J. Chem. Phys. 59, 5288 (1973); (c) M. Shafi and C. L. Beckel, J. Chem. Phys. 59, 5294 (1973);         [ Links ](d) N. J. 
Kirchner, A. O'Keefe, J. R. Gilbert and M. T. Bowers, Phys. Rev. Lett. 52, 26 (1984).         [ Links ] 2. (a) D. M. Bishop, Phys. Rev. Lett. 37, 484 (1976);         [ Links ](b) D. M. Bishop and R. W. Wetmore, Mol. Phys. 26, 145 (1973).         [ Links ] 3. (a) W. H. Breunlich, P. Kammel, J. S. Cohen and M. Leon, Ann. Rev. Nucl. Part. Sci. 39, 311 (1989);         [ Links ](b) J. Rafelski and H. E. Rafelski, Particle World 2, 21 (1991); (c) L. Ponomarev, Contemp. Phys. 31, 219 (1991);         [ Links ](d) P. Froelich, Adv. Phys. 41, 405 (1992). 4. M. Iwasaki et al., Phys. Rev. Lett. 67, 1246 (1991).         [ Links ] 5. W. Sperber et al., Phys. Rev. Lett. 68, 3690 (1992).         [ Links ] 6. A. P. Mills Jr., Phys. Rev. Lett. 46, 717 (1981).         [ Links ] 7. (a) A. M. Frolov and D. M. Bishop, Phys. Rev. A 45, 6236 (1992);         [ Links ](b) A. M. Frolov and V. H. Smith Jr., Phys. Rev. A 49, 3580 (1994);         [ Links ](c) M. I. Haftel and V. B. Mandelzweig, Phys. Rev. A 49, 3344 (1994).         [ Links ] 8. Y. K. Ho, Phys. Rev. A 19, 2347 (1979).         [ Links ] 9. F. R. Pilar, Elementary Quantum Chemistry (2nd edition, McGraw-Hill, 1990).         [ Links ] 10. (a) D. R. Bates, K. Ledsham and A. L. Stewart, Phil. Trans. Roy. Soc. (London) A246, 215 (1954);         [ Links ](b) H. Wind, J. Chem. Phys. 43, 2956 (1965); (c) C. L. Beckel, B. D. Hansen III and J. M. Peek, J. Chem. Phys. 53, 3681 (1970).         [ Links ] 11. (a) G. Blanke and H. Kleindienst, Int. J. Quant. Chem. 51, 3 (1994);         [ Links ](b) D. M. Bishop and L. M. Cheung, Int. J. Quant. Chem. 15, 517 (1979).         [ Links ] 12. (a) W. Kolos, C. C. J. Roothaan and R. A. Sack, Rev. Mod. Phys. 32, 178 (1960);         [ Links ](b) A. Fröman and J. L. Kinsey, Phys. Rev. 123, 2077 (1961); (c) D. A. Kohl and E. J. Shipsey, J. Chem. Phys. 84, 2707 (1986).         [ Links ] 13. (a) C. L. Pekeris, Phys. Rev. 112, 1649 (1958);         [ Links ](b) C. L. Pekeris, Phys. Rev. 115, 1216 (1959);         [ Links ](c) T. Kinoshita, Phys. Rev. 105, 1490 (1957).         [ Links ] 14. (a) P. Petelenz and V. H. Smith Jr., Phys. Rev. A 36, 4078 (1987);         [ Links ](b) S. A. Alexander and H. J. Monkhorst, Phys. Rev. A 38, 26 (1988).         [ Links ] 15. (a) K. Szalewicz, H. J. Monkhorst, W. Kolos and A. Scrinzi, Phys. Rev. A 36, 5494 (1987);         [ Links ](b) P. Froelich and A. Flores-Riveros, Phys. Rev. Lett. 70, 1595 (1993). 16. I. Vinitsky and L. I. Ponomarev, Sov. J. Part. Nucl. 13, 557 (1982).         [ Links ] 17. C. Eckart, Phys. Rev. 36, 878 (1930).         [ Links ] 18. (a) E. A. Hylleraas, Z. Physik 48, 469 (1928);         [ Links ](b) E. A. Hylleraas, Z. Physik 54, 347 (1929);         [ Links ](c) E. A. Hylleraas, Z. Physik 60, 624 (1930). 19. A. Flores-Riveros, Int. J. Quant. Chem. 66, 287 (1998).         [ Links ] 20. K. P. Huber and G. Herzberg, Constants of Diatomic Molecules (Van Nostrand-Reinhold, New York, 1979).         [ Links ]
I'm reading the Wikipedia page for the Dirac equation:

The Dirac equation is superficially similar to the Schrödinger equation for a free massive particle:

A) $-\frac{\hbar^2}{2m}\nabla^2\phi = i\hbar\frac{\partial}{\partial t}\phi.$

The left side represents the square of the momentum operator divided by twice the mass, which is the non-relativistic kinetic energy. Because relativity treats space and time as a whole, a relativistic generalization of this equation requires that space and time derivatives must enter symmetrically, as they do in the Maxwell equations that govern the behavior of light: the equations must be differentially of the same order in space and time. In relativity, the momentum and the energy are the space and time parts of a space-time vector, the 4-momentum, and they are related by the relativistically invariant relation

B) $\frac{E^2}{c^2} - p^2 = m^2c^2$

which says that the length of this vector is proportional to the rest mass m. Substituting the operator equivalents of the energy and momentum from the Schrödinger theory, we get an equation describing the propagation of waves, constructed from relativistically invariant objects,

C) $\left(\nabla^2 - \frac{1}{c^2}\frac{\partial^2}{\partial t^2}\right)\phi = \frac{m^2c^2}{\hbar^2}\phi$

I am not sure how equations A and B lead to equation C. It seems to be related to substituting the special-relativity relation into the quantum mechanical operators, but I keep failing to get the result.

2 Answers

Accepted answer: First, C) isn't the Dirac equation; it's the Klein-Gordon equation. Now, to your main question. A) comes from the classical equation for a free massive particle, $\dfrac{p^2}{2m} = E$, by making the operator substitutions (acting on $\phi$): $p^2 \rightarrow - \hbar^2 \nabla^2$ and $E \rightarrow i \hbar \dfrac{\partial}{\partial t}$. C) comes from B) by further recognizing that $E^2 \rightarrow -\hbar^2 \dfrac{\partial^2}{\partial t^2}$.

Second answer:

$$E^2 = p^2c_0^2 + m^2c_0^4$$
$$E^2\Psi=-\hbar^2\frac{\partial^2\Psi}{\partial t^2}$$
$$\left(p^2c_0^2+m^2c_0^4\right)\Psi=-\hbar^2\frac{\partial^2\Psi}{\partial t^2}$$
$$-c_0^2\hbar^2\nabla^2\Psi+m^2c_0^4\Psi=-\hbar^2\frac{\partial^2\Psi}{\partial t^2}$$
$$\frac{m^2c_0^2}{\hbar^2}\Psi=\left(\nabla^2+\frac{\partial^2}{\partial \left(ic_0t\right)^2}\right)\Psi$$
$$\left(\frac{mc_0}{\hbar}\right)^2\Psi=\eta^{\mu\nu}\partial_{\mu}\partial_{\nu}\Psi$$

where the signature of the metric tensor is taken to be $\left(-c_0^2,1,1,1\right)$. And there you have it, the fatally flawed Klein-Gordon equation, which cannot accommodate potentials nor impose the norm-squared of the wavefunction to be non-negative; the equation that Schrödinger discarded, Dirac repaired, and the Higgs field was satisfied with.
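As a supplementary check (not part of the original exchange), one can verify symbolically that a free plane wave with the relativistic dispersion relation of B) satisfies equation C); the sketch below uses Python with SymPy in one spatial dimension.

# A plane wave phi = exp(i(k x - w t)) satisfies equation C) exactly when the
# frequency obeys the relativistic energy-momentum relation B) with E = hbar*w, p = hbar*k.
import sympy as sp

x, t = sp.symbols('x t', real=True)
k, m, c, hbar = sp.symbols('k m c hbar', positive=True)

# frequency fixed by E^2 = p^2 c^2 + m^2 c^4
w = sp.sqrt(k**2 * c**2 + m**2 * c**4 / hbar**2)
phi = sp.exp(sp.I * (k * x - w * t))

# residual of equation C):  (d^2/dx^2 - (1/c^2) d^2/dt^2) phi - (m c / hbar)^2 phi
residual = sp.diff(phi, x, 2) - sp.diff(phi, t, 2) / c**2 - (m * c / hbar)**2 * phi
print(sp.simplify(residual))   # -> 0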
Friday, 20 February 2015

Physical Quantum Mechanics 9: Big Lie More Convincing

In this sequence we argue that a second order real-valued form of Schrödinger's equation:

• $\ddot\psi + H^2\psi = 0$       (1)

may be preferable to the standard first order complex-valued form:

• $i\dot\psi + H\psi = 0$,            (2)

where $H$ is a Hamiltonian depending on a space variable $x$, the dot signifies differentiation with respect to time $t$, and $\psi = \psi(x,t)$ is a wave function. This is because (1) can be given a physical interpretation as a force balance, while the interpretation of (2) has baffled physicists since it was introduced by Schrödinger in 1926.

Formally, (2) appears as the "square-root" of (1), and it is not strange that if (1) has a physical meaning, then (2), as a "square-root", may lack one. (A small numerical illustration of the relation between the two forms is given below.)

The non-physical aspect of (2) shows up as follows: first formulated for the hydrogen atom, with one electron and $x$ a 3d space variable, the equation is extended to an atom with $N>1$ electrons by expanding the space variable to $3N$ dimensions, with an independent 3d space variable for each electron. The standard Schrödinger wave function for an atom with $N$ electrons thus depends on $3N$ space variables, which makes direct physical interpretation impossible, and the only interpretation that physicists could come up with was in terms of a probability distribution, without physical meaning. This made Schrödinger very unhappy, as well as Einstein.

But the newly born, so promising, modern physics could not be allowed to die in its infancy, and so, following the strong leadership of Born-Bohr-Heisenberg, the non-physical aspect of the standard Schrödinger equation was turned from catastrophe into a virtue, as an expression of a deep mystical uncertain stochastic nature of atomistic physics beyond any form of human comprehension, yet discovered by clever physicists as something very new and modern and very Big.

In this process, the non-physical aspect of (2) was helpful: if (2) already for a hydrogen atom, with one 3d space variable, was deeply mystical as a "square-root" without physical interpretation, the expansion to non-physical multi-d $3N$ space variables was just an expansion of the mystery, and as such could only be more functional, following a well-known device: the great masses (of physicists) will be more easily convinced by a Big Lie than a small one.

To the non-physical aspect of (2) could then be added non-computability, as an equation in $3N$ space dimensions requiring an impossible $googol = 10^{100}$ flops already for small $N$. But it did not matter that (2) was uncomputable, since (2) anyway was unphysical and as such of no scientific interest and value, although very Big.

On the other hand, sticking to physics with (1) as a physical force balance, an atom with $N>1$ electrons may naturally be described as a system of $N$ wave functions, each one depending on a 3d space variable, which can be given a direct physical meaning, including extensions to radiation, and is computable as a system in 3d.

One may compare with another Big Lie, that of dangerous global warming by back radiation, evidenced by a pyrgeometer, from human emission of CO2, which is threatening to send Western civilization back to the stone age. Physicists in charge of the basic physics of global climate, including radiative heat transfer in the atmosphere, do not tell the truth to politicians and the people. One Big Lie thus appears to be compatible with another Big Lie and even demand it.
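A minimal numerical sketch of the formal relation between (1) and (2) (my own illustration, not from the post): if the initial time derivative is chosen as $\dot\psi(0) = iH\psi(0)$, then integrating the second-order form (1) reproduces the solution $\psi(t) = e^{iHt}\psi(0)$ of the first-order form (2). Here $H$ is just a small stand-in Hermitian matrix; Python with NumPy/SciPy.

import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = (A + A.conj().T) / 2                      # Hermitian stand-in "Hamiltonian"
psi0 = rng.standard_normal(n) + 1j * rng.standard_normal(n)
psi0 /= np.linalg.norm(psi0)

# second-order form (1): psi'' + H^2 psi = 0, written as a first-order system
def rhs(t, y):
    psi, dpsi = y[:n], y[n:]
    return np.concatenate([dpsi, -H @ (H @ psi)])

y0 = np.concatenate([psi0, 1j * (H @ psi0)])  # initial "velocity" chosen to match (2)
T = 5.0
sol = solve_ivp(rhs, (0.0, T), y0, rtol=1e-10, atol=1e-12)

psi_second_order = sol.y[:n, -1]
psi_first_order = expm(1j * H * T) @ psi0     # exact solution of (2)
print(np.max(np.abs(psi_second_order - psi_first_order)))   # small: set by integrator tolerance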
The reckoning in the history of science to be written will be harsh, even if, as of now, nobody seems to care. Another thing is that questioning a Big Lie may not be a small thing and may carry a big cost. But if Humpty Dumpty falls, then the Fall may be great.
Quantum mechanics

This is a background article. For the psychological implications see Quantum psychology. For a non-technical introduction to the topic, please see Introduction to quantum mechanics.

Fig. 1: The wavefunctions of an electron in a hydrogen atom possessing definite energy (increasing downward: n = 1, 2, 3, ...) and angular momentum (increasing across: s, p, d, ...). Brighter areas correspond to higher probability density for a position measurement. Wavefunctions like these are directly comparable to Chladni's figures of acoustic modes of vibration in classical physics and are indeed modes of oscillation as well: they possess a sharp energy and thus a sharp frequency. The angular momentum and energy are quantized, and only take on discrete values like those shown (as is the case for resonant frequencies in acoustics).

Quantum mechanics is a fundamental branch of theoretical physics with wide applications in experimental physics that replaces classical mechanics and classical electromagnetism at the atomic and subatomic levels. It is a more fundamental theory than Newtonian mechanics and classical electromagnetism, in the sense that it provides accurate and precise descriptions for many phenomena that these "classical" theories simply cannot explain on the atomic and subatomic level. Along with general relativity, quantum mechanics is one of the pillars of modern physics.

The word "quantum" (Latin for "how much") refers in quantum mechanics to a discrete unit that quantum theory assigns to certain physical quantities, such as the energy of an atom at rest (see Figure 1, at right). The discovery that waves could be measured in particle-like small packets of energy called quanta led to the branch of physics that deals with atomic and subatomic systems, which we today call quantum mechanics. It is the underlying mathematical framework of many fields of physics and chemistry, including condensed matter physics, solid-state physics, atomic physics, molecular physics, computational chemistry, quantum chemistry, particle physics, and nuclear physics.

The foundations of quantum mechanics were established during the first half of the twentieth century by Werner Heisenberg, Max Planck, Louis de Broglie, Niels Bohr, Erwin Schrödinger, Max Born, John von Neumann, Paul Dirac, Wolfgang Pauli and others. Some fundamental aspects of the theory are still actively studied. Albert Einstein contributed to the early development of quantum theory, notably through his explanation of the photoelectric effect, but he never accepted its later probabilistic interpretation, famously objecting that "God does not play dice".

It is necessary to use quantum mechanics to understand the behavior of systems at atomic length scales and smaller.
For example, if Newtonian mechanics governed the workings of an atom, electrons would rapidly travel towards and collide with the nucleus. However, in the natural world electrons remain in stable orbitals around the nucleus, apparently defying classical electromagnetism. Quantum mechanics was initially developed to explain the atom, especially the spectra of light emitted by different atomic species. The quantum theory of the atom developed as an explanation for the electron's staying in its orbital, which could not be explained by Newton's laws of motion or by classical electromagnetism.

In the formalism of quantum mechanics, the state of a system at a given time is described by a complex wave function (sometimes referred to as orbitals in the case of atomic electrons), and more generally, by elements of a complex vector space. This abstract mathematical object allows for the calculation of probabilities of outcomes of concrete experiments. For example, it allows one to compute the probability of finding an electron in a particular region around the nucleus at a particular time. Contrary to classical mechanics, one can never make simultaneous predictions of conjugate variables, such as position and momentum, with arbitrary accuracy. For instance, electrons may be considered to be located somewhere within a region of space, but with their exact positions being unknown. Contours of constant probability, often referred to as "clouds", may be drawn around the nucleus of an atom to conceptualize where the electron might be located with the most probability. It should be stressed that the electron itself is not spread out over such cloud regions. It is either in a particular region of space, or it is not. Heisenberg's uncertainty principle quantifies the inability to precisely locate the particle.

The other exemplar that led to quantum mechanics was the study of electromagnetic waves such as light. When it was found in 1900 by Max Planck that the energy of waves could be described as consisting of small packets or quanta, Albert Einstein exploited this idea to show that an electromagnetic wave such as light could be described by a particle called the photon with a discrete energy dependent on its frequency. This led to a theory of unity between subatomic particles and electromagnetic waves, called wave-particle duality, in which particles and waves were neither one nor the other, but had certain properties of both. While quantum mechanics describes the world of the very small, it also is needed to explain certain "macroscopic quantum systems" such as superconductors and superfluids.

Broadly speaking, quantum mechanics incorporates four classes of phenomena that classical physics cannot account for: (i) the quantization (discretization) of certain physical quantities, (ii) wave-particle duality, (iii) the uncertainty principle, and (iv) quantum entanglement. Each of these phenomena will be described in greater detail in subsequent sections.

History

Main article: History of quantum mechanics

The energy of a single quantum is related to its frequency by the Planck relation \epsilon = h \nu.

Relativity and quantum mechanics

Main article: Relativity and quantum mechanics

The modern world of physics is notably founded on two tested and demonstrably sound theories, general relativity and quantum mechanics, which nevertheless appear to contradict one another. The defining postulates of both Einstein's theory of relativity and quantum theory are indisputably supported by rigorous and repeated empirical evidence.
However, while they do not directly contradict each other theoretically (at least with regard to primary claims), they are resistant to being incorporated within one cohesive model. Einstein himself is well known for rejecting some of the claims of quantum mechanics. While clearly inventive in his field, he did not accept the more exotic corollaries of quantum mechanics, such as the assertion that a single subatomic particle can occupy numerous areas of space at one time, and the even more exotic postulate that if one member of "twin" particles is spun, its companion particle will rotate with identical speed in an exactly opposite rotation, regardless of the distance between them. Attempts at a unified theoryEdit Inconsistencies arise when one tries to join the quantum laws with general relativity, a more elaborate description of spacetime which incorporates gravitation. Resolving these inconsistencies has been a major goal of twentieth- and twenty-first-century physics. Many prominent physicists, including Stephen Hawking, have labored in the attempt to discover a "Grand Unification Theory" that combines not only different models of subatomic physics, but also defines the universe's four forces--the strong force, weak force, electromagnetism, and gravity--as being different variations of a single force or phenomenon. Quantum mechanics and classical physicsEdit Despite the proposal of many novel ideas, the unification of quantum mechanics—which reigns in the domain of the very small—and general relativity—a superb description of the very large—remains a tantalizing future possibility. (See quantum gravity, string theory.) Because the exotic behaviors of matter posited by quantum mechanics and relativity theory only become apparent when dealing with extremely fast-moving or extremely tiny particles, the laws of classical "Newtonian" physics remain extremely accurate in predicting the behavior of virtually every object that a human being will encounter without the aid of a particle accelerator. Because the effects of quantum mechanics become less noticeable as the amount of matter increases, the point at which an aggregation of particles can be more simply generalised by classical physics than by quantum mechanics, and still accurately follow reality, is known as the classical limit. There are numerous mathematically equivalent formulations of quantum mechanics. One of the oldest and most commonly used formulations is the transformation theory invented by Cambridge theoretical physicist Paul Dirac, which unifies and generalizes the two earliest formulations of quantum mechanics, matrix mechanics (invented by Werner Heisenberg)[1] and wave mechanics (invented by Erwin Schrödinger). Generally, quantum mechanics does not assign definite values to observables. Instead, it makes predictions about probability distributions; that is, the probability of obtaining each of the possible outcomes from measuring an observable. Naturally, these probabilities will depend on the quantum state at the instant of the measurement. There are, however, certain states that are associated with a definite value of a particular observable. These are known as "eigenstates" of the observable ("eigen" meaning "own" in German). In the everyday world, it is natural and intuitive to think of everything being in an eigenstate of every observable. Everything appears to have a definite position, a definite momentum, and a definite time of occurrence. 
However, quantum mechanics does not pinpoint the exact values for the position or momentum of a certain particle in a given space in a finite time, but, rather, it only provides a range of probabilities of where that particle might be. Therefore, it became necessary to use different words for (a) the state of something having an uncertainty relation and (b) a state that has a definite value. The latter is called the "eigenstate" of the property being measured. For example, consider a free particle. In quantum mechanics, there is wave-particle duality so the properties of the particle can be described as a wave. Therefore, its quantum state can be represented as a wave, of arbitrary shape and extending over all of space, called a wavefunction. The position and momentum of the particle are observables. The Uncertainty Principle of quantum mechanics states that both the position and the momentum cannot simultaneously be known with infinite precision at the same time. However, one can measure just the position alone of a moving free particle creating an eigenstate of position with a wavefunction that is very large at a particular position x, and zero everywhere else. If one performs a position measurement on such a wavefunction, the result x will be obtained with 100% probability. In other words, the position of the free particle will be known. This is called an eigenstate of position. If the particle is in an eigenstate of position then its momentum is completely unknown. An eigenstate of momentum, on the other hand, has the form of a plane wave. It can be shown that the wavelength is equal to h/p, where h is Planck's constant and p is the momentum of the eigenstate. If the particle is in an eigenstate of momentum then its position is completely blurred out. Usually, a system will not be in an eigenstate of whatever observable we are interested in. However, if one measures the observable, the wavefunction will instantaneously be an eigenstate of that observable. This process is known as wavefunction collapse. It involves expanding the system under study to include the measurement device, so that a detailed quantum calculation would no longer be feasible and a classical description must be used. If one knows the wavefunction at the instant before the measurement, one will be able to compute the probability of collapsing into each of the possible eigenstates. For example, the free particle in the previous example will usually have a wavefunction that is a wave packet centered around some mean position x0, neither an eigenstate of position nor of momentum. When one measures the position of the particle, it is impossible to predict with certainty the result that we will obtain. It is probable, but not certain, that it will be near x0, where the amplitude of the wavefunction is large. After the measurement is performed, having obtained some result x, the wavefunction collapses into a position eigenstate centered at x. Wave functions can change as time progresses. An equation known as the Schrödinger equation describes how wave functions change in time, a role similar to Newton's second law in classical mechanics. The Schrödinger equation, applied to the aforementioned example of the free particle, predicts that the center of a wave packet will move through space at a constant velocity, like a classical particle with no forces acting on it. However, the wave packet will also spread out as time progresses, which means that the position becomes more uncertain. 
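The wave-packet picture just described can be made quantitative with a short numerical check. The following sketch (added here for illustration, in Python with NumPy; the grid size, packet width and mean momentum are arbitrary choices, and natural units with ħ = 1 are assumed) builds a Gaussian wave packet, reads off the position spread from |ψ(x)|² and the momentum spread from the Fourier transform, and confirms that the product respects σx·σp ≥ ħ/2, with equality for a Gaussian.

```python
import numpy as np

hbar = 1.0                          # natural units for this sketch
N, L = 4096, 200.0                  # grid size and box length (assumed values)
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]

x0, p0, sigma = 5.0, 2.0, 1.5       # packet centre, mean momentum, width (assumed)
psi = np.exp(-(x - x0) ** 2 / (4 * sigma ** 2) + 1j * p0 * x / hbar)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)          # normalize

# Position spread from |psi(x)|^2
prob_x = np.abs(psi) ** 2
mean_x = np.sum(x * prob_x) * dx
sig_x = np.sqrt(np.sum((x - mean_x) ** 2 * prob_x) * dx)

# Momentum spread from the Fourier transform of psi
p = 2 * np.pi * hbar * np.fft.fftfreq(N, d=dx)
dp = p[1] - p[0]
prob_p = np.abs(np.fft.fft(psi)) ** 2
prob_p /= np.sum(prob_p) * dp                          # normalize on the p grid
mean_p = np.sum(p * prob_p) * dp
sig_p = np.sqrt(np.sum((p - mean_p) ** 2 * prob_p) * dp)

print(f"sigma_x * sigma_p = {sig_x * sig_p:.4f}   (lower bound hbar/2 = {hbar / 2})")
```

Letting the same packet evolve freely would show σx growing with time while σp stays fixed, which is exactly the spreading described above.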
This also has the effect of turning position eigenstates (which can be thought of as infinitely sharp wave packets) into broadened wave packets that are no longer position eigenstates. Some wave functions produce probability distributions that are constant in time. Many systems that are treated dynamically in classical mechanics are described by such "static" wave functions. For example, a single electron in an unexcited atom is pictured classically as a particle moving in a circular trajectory around the atomic nucleus, whereas in quantum mechanics it is described by a static, spherically symmetric wavefunction surrounding the nucleus (Fig. 1). (Note that only the lowest angular momentum states, labeled s, are spherically symmetric). The time evolution of wave functions is deterministic in the sense that, given a wavefunction at an initial time, it makes a definite prediction of what the wavefunction will be at any later time. During a measurement, the change of the wavefunction into another one is not deterministic, but rather unpredictable, i.e., random. It should be noted, however, that in quantum mechanics, "random" has come to mean "random for all practical purposes," and not "absolutely random." Those new to quantum mechanics often confuse quantum mechanical theory's inability to predict exactly how nature will behave with the conclusion that nature is actually random. The probabilistic nature of quantum mechanics thus stems from the act of measurement. This is one of the most difficult aspects of quantum systems to understand. It was the central topic in the famous Bohr-Einstein debates, in which the two scientists attempted to clarify these fundamental principles by way of thought experiments. In the decades after the formulation of quantum mechanics, the question of what constitutes a "measurement" has been extensively studied. Interpretations of quantum mechanics have been formulated to do away with the concept of "wavefunction collapse"; see, for example, the relative state interpretation. The basic idea is that when a quantum system interacts with a measuring apparatus, their respective wavefunctions become entangled, so that the original quantum system ceases to exist as an independent entity. For details, see the article on measurement in quantum mechanics. Mathematical formulationEdit Main article: Mathematical formulation of quantum mechanics See also: Quantum logic In the mathematically rigorous formulation of quantum mechanics, developed by Paul Dirac and John von Neumann, the possible states of a quantum mechanical system are represented by unit vectors (called "state vectors") residing in a complex separable Hilbert space (variously called the "state space" or the "associated Hilbert space" of the system) well defined up to a complex number of norm 1 (the phase factor). In other words, the possible states are points in the projectivization of a Hilbert space. The exact nature of this Hilbert space is dependent on the system; for example, the state space for position and momentum states is the space of square-integrable functions, while the state space for the spin of a single proton is just the product of two complex planes. Each observable is represented by a densely defined Hermitian (or self-adjoint) linear operator acting on the state space. Each eigenstate of an observable corresponds to an eigenvector of the operator, and the associated eigenvalue corresponds to the value of the observable in that eigenstate. 
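The operator language can be made concrete with a small sketch (illustrative only; the 3×3 Hermitian matrix and the state vector below are arbitrary choices, not taken from the article). It shows that a Hermitian operator has real eigenvalues, and that the squared overlaps of a state with the eigenvectors form a probability distribution over the possible measurement outcomes, as the Born rule prescribes.

```python
import numpy as np

# An arbitrary Hermitian "observable" on a 3-dimensional state space
A = np.array([[2.0, 1.0 - 1.0j, 0.0],
              [1.0 + 1.0j, 3.0, 0.5j],
              [0.0, -0.5j, 1.0]])
assert np.allclose(A, A.conj().T)               # Hermitian: equal to its conjugate transpose

eigenvalues, eigenvectors = np.linalg.eigh(A)   # real eigenvalues, orthonormal eigenvectors

# An arbitrary normalized state vector
psi = np.array([1.0, 1.0j, -0.5])
psi = psi / np.linalg.norm(psi)

# Born rule: probability of outcome a_i is |<e_i|psi>|^2
probs = np.abs(eigenvectors.conj().T @ psi) ** 2
expectation = np.real(psi.conj() @ A @ psi)

print("possible outcomes :", np.round(eigenvalues, 4))
print("probabilities     :", np.round(probs, 4), " (sum =", round(probs.sum(), 6), ")")
print("expectation value :", round(expectation, 4), "=", round(np.sum(eigenvalues * probs), 4))
```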
If the operator's spectrum is discrete, the observable can only attain those discrete eigenvalues. The time evolution of a quantum state is described by the Schrödinger equation, in which the Hamiltonian, the operator corresponding to the total energy of the system, generates time evolution. The inner product between two state vectors is a complex number known as a probability amplitude. During a measurement, the probability that a system collapses from a given initial state to a particular eigenstate is given by the square of the absolute value of the probability amplitudes between the initial and final states. The possible results of a measurement are the eigenvalues of the operator - which explains the choice of Hermitian operators, for which all the eigenvalues are real. We can find the probability distribution of an observable in a given state by computing the spectral decomposition of the corresponding operator. Heisenberg's uncertainty principle is represented by the statement that the operators corresponding to certain observables do not commute. It turns out that analytic solutions of Schrödinger's equation are only available for a small number of model Hamiltonians, of which the quantum harmonic oscillator, the particle in a box, the hydrogen-molecular ion and the hydrogen atom are the most important representatives. Even the helium atom, which contains just one more electron than hydrogen, defies all attempts at a fully analytic treatment. There exist several techniques for generating approximate solutions. For instance, in the method known as perturbation theory one uses the analytic results for a simple quantum mechanical model to generate results for a more complicated model related to the simple model by, for example, the addition of a weak potential energy. Another method is the "semi-classical equation of motion" approach, which applies to systems for which quantum mechanics produces weak deviations from classical behavior. The deviations can be calculated based on the classical motion. This approach is important for the field of quantum chaos. An alternative formulation of quantum mechanics is Feynman's path integral formulation, in which a quantum-mechanical amplitude is considered as a sum over histories between initial and final states; this is the quantum-mechanical counterpart of action principles in classical mechanics. Interactions with other scientific theoriesEdit The fundamental rules of quantum mechanics are very broad. They state that the state space of a system is a Hilbert space and the observables are Hermitian operators acting on that space, but do not tell us which Hilbert space or which operators. These must be chosen appropriately in order to obtain a quantitative description of a quantum system. An important guide for making these choices is the correspondence principle, which states that the predictions of quantum mechanics reduce to those of classical physics when a system moves to higher energies or equivalently, larger quantum numbers. This "high energy" limit is known as the classical or correspondence limit. One can therefore start from an established classical model of a particular system, and attempt to guess the underlying quantum model that gives rise to the classical model in the correspondence limit. Question mark2 Unsolved problems in physics: In the correspondence limit of quantum mechanics: Is there a preferred interpretation of quantum mechanics? 
How does the quantum description of reality, which includes elements such as the superposition of states and wavefunction collapse, give rise to the reality we perceive?

Early attempts to merge quantum mechanics with special relativity involved the replacement of the Schrödinger equation with a covariant equation such as the Klein-Gordon equation or the Dirac equation. While these theories were successful in explaining many experimental results, they had certain unsatisfactory qualities stemming from their neglect of the relativistic creation and annihilation of particles. A fully relativistic quantum theory required the development of quantum field theory, which applies quantization to a field rather than a fixed set of particles. The first complete quantum field theory, quantum electrodynamics, provides a fully quantum description of the electromagnetic interaction.

The full apparatus of quantum field theory is often unnecessary for describing electrodynamic systems. A simpler approach, one employed since the inception of quantum mechanics, is to treat charged particles as quantum mechanical objects being acted on by a classical electromagnetic field. For example, the elementary quantum model of the hydrogen atom describes the electric field of the hydrogen atom using a classical Coulomb potential, V(r) = −e²/(4πε₀r). This "semi-classical" approach fails if quantum fluctuations in the electromagnetic field play an important role, such as in the emission of photons by charged particles.

Quantum field theories for the strong nuclear force and the weak nuclear force have been developed. The quantum field theory of the strong nuclear force is called quantum chromodynamics, and describes the interactions of the subnuclear particles: quarks and gluons. The weak nuclear force and the electromagnetic force were unified, in their quantized forms, into a single quantum field theory known as electroweak theory. It has proven difficult to construct quantum models of gravity, the remaining fundamental force. Semi-classical approximations are workable, and have led to predictions such as Hawking radiation. However, the formulation of a complete theory of quantum gravity is hindered by apparent incompatibilities between general relativity, the most accurate theory of gravity currently known, and some of the fundamental assumptions of quantum theory. The resolution of these incompatibilities is an area of active research, and theories such as string theory are among the possible candidates for a future theory of quantum gravity.

Quantum mechanics has had enormous success in explaining many of the features of our world. The individual behaviour of the subatomic particles that make up all forms of matter - electrons, protons, neutrons, photons and so forth - can often only be satisfactorily described using quantum mechanics. Quantum mechanics has strongly influenced string theory, a candidate for a theory of everything (see reductionism). It is also related to statistical mechanics. Quantum mechanics is important for understanding how individual atoms combine covalently to form chemicals or molecules. The application of quantum mechanics to chemistry is known as quantum chemistry. (Relativistic) quantum mechanics can in principle mathematically describe most of chemistry. Quantum mechanics can provide quantitative insight into ionic and covalent bonding processes by explicitly showing which molecules are energetically favorable to which others, and by approximately how much.
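As a small worked example of what the elementary hydrogen model mentioned above predicts, the sketch below (added here for illustration; it uses standard CODATA constants, ignores the reduced-mass correction, and the choice of transition is arbitrary) evaluates the bound-state energies Eₙ = −mₑe⁴ / (2(4πε₀)²ħ²n²) that follow from the Coulomb potential, reproducing the familiar −13.6 eV/n² levels and the red Balmer line near 656 nm.

```python
import numpy as np

# Physical constants (SI, CODATA values rounded)
hbar = 1.054571817e-34      # J s
m_e = 9.1093837015e-31      # kg
e = 1.602176634e-19         # C
eps0 = 8.8541878128e-12     # F / m

def hydrogen_level(n):
    """Bound-state energy of hydrogen for principal quantum number n, in eV."""
    E_joule = -m_e * e**4 / (2 * (4 * np.pi * eps0)**2 * hbar**2 * n**2)
    return E_joule / e

for n in range(1, 5):
    print(f"n = {n}:  E = {hydrogen_level(n):8.3f} eV")

# Photon emitted in the n = 3 -> n = 2 transition (the red Balmer-alpha line)
dE = (hydrogen_level(3) - hydrogen_level(2)) * e          # back to joules
wavelength = 2 * np.pi * hbar * 2.99792458e8 / dE         # lambda = h c / dE
print(f"3 -> 2 transition: {wavelength * 1e9:.1f} nm")
```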
Most of the calculations performed in computational chemistry rely on quantum mechanics. Much of modern technology operates at a scale where quantum effects are significant. Examples include the laser, the transistor, the electron microscope, and magnetic resonance imaging. The study of semiconductors led to the invention of the diode and the transistor, which are indispensable for modern electronics. Researchers are currently seeking robust methods of directly manipulating quantum states. Efforts are being made to develop quantum cryptography, which will allow guaranteed secure transmission of information. A more distant goal is the development of quantum computers, which are expected to perform certain computational tasks exponentially faster than classical computers. Another active research topic is quantum teleportation, which deals with techniques to transmit quantum states over arbitrary distances. In many devices, even the simple light switch, quantum tunneling is vital, as otherwise the electrons in the electric current could not penetrate the potential barrier made up, in the case of the light switch, of a layer of oxide. Philosophical consequencesEdit Main article: Interpretation of quantum mechanics Since its inception, the many counter-intuitive results of quantum mechanics have provoked strong philosophical debate and many interpretations. Even fundamental issues such as Max Born's basic rules concerning probability amplitudes and probability distributions took decades to be appreciated. Albert Einstein, himself one of the founders of quantum theory, disliked this loss of determinism in measurement (Hence his famous quote "God does not play dice with the universe."). He held that there should be a local hidden variable theory underlying quantum mechanics and consequently the present theory was incomplete. He produced a series of objections to the theory, the most famous of which has become known as the EPR paradox. John Bell showed that the EPR paradox led to experimentally testable differences between quantum mechanics and local theories. Experiments have been taken as confirming that quantum mechanics is correct and the real world must be described in terms of nonlocal theories. The writer C.S. Lewis viewed QM as incomplete, because notions of indeterminism did not agree with his philosophical beliefs.[2] Lewis, a professor of English, was of the opinion that the Heisenberg uncertainty principle was more of an epistemic limitation than an indication of ontological indeterminacy, and in this respect believed similarly to many advocates of hidden variables theories. The Bohr-Einstein debates provide a vibrant critique of the Copenhagen Interpretation from an epistemological point of view. The Everett many-worlds interpretation, formulated in 1956, holds that all the possibilities described by quantum theory simultaneously occur in a "multiverse" composed of mostly independent parallel universes. This is not accomplished by introducing some new axiom to quantum mechanics, but on the contrary by removing the axiom of the collapse of the wave packet: All the possible consistent states of the measured system and the measuring apparatus (including the observer) are present in a real physical (not just formally mathematical, as in other interpretations) quantum superposition. (Such a superposition of consistent state combinations of different systems is called an entangled state.) 
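The entangled state just mentioned can be written down explicitly. The following sketch (illustrative only; the Bell state |Φ⁺⟩ is a standard textbook example, not one singled out by the article) displays the combination that the EPR argument turned on: measurement outcomes on the two particles are perfectly correlated, yet neither particle by itself is in a definite state.

```python
import numpy as np

# Two-qubit entangled (Bell) state |Phi+> = (|00> + |11>) / sqrt(2)
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
phi_plus = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)

# Joint outcome probabilities in the |00>, |01>, |10>, |11> basis
probs = np.abs(phi_plus) ** 2
print("P(00), P(01), P(10), P(11) =", probs)     # [0.5, 0, 0, 0.5]: perfectly correlated outcomes

# Reduced state of particle A: trace out particle B
rho = np.outer(phi_plus, phi_plus.conj()).reshape(2, 2, 2, 2)
rho_A = np.einsum('ijkj->ik', rho)
print("reduced state of A:\n", rho_A)            # identity/2: on its own, A looks completely random
```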
While the multiverse is deterministic, we perceive non-deterministic behavior governed by probabilities, because we can observe only the universe, i.e. the consistent state contribution to the mentioned superposition, we inhabit. Everett's interpretation is perfectly consistent with John Bell's experiments and makes them intuitively understandable. However, according to the theory of quantum decoherence, the parallel universes will never be accessible for us, making them physically meaningless. This inaccessibility can be understood as follows: once a measurement is done, the measured system becomes entangled with both the physicist who measured it and a huge number of other particles, some of which are photons flying away towards the other end of the universe; in order to prove that the wave function did not collapse one would have to bring all these particles back and measure them again, together with the system that was measured originally. This is completely impractical, but even if one can theoretically do this, it would destroy any evidence that the original measurement took place (including the physicist's memory). See alsoEdit • P. A. M. Dirac, The Principles of Quantum Mechanics (1930) -- the beginning chapters provide a very clear and comprehensible introduction • David J. Griffiths, Introduction to Quantum Mechanics, Prentice Hall, 1995. ISBN 0-13-124405-1 -- A standard undergraduate level text written in an accessible style. • Richard P. Feynman, Robert B. Leighton and Matthew Sands (1965). The Feynman Lectures on Physics, Addison-Wesley. Richard Feynman's original lectures (given at Caltech in early 1962) can also be downloaded as an MP3 file from[2] • Richard P. Feynman, QED: The Strange Theory of Light and Matter -- a popular science book about quantum mechanics and quantum field theory that contains many enlightening insights that are interesting for the expert as well • Marvin Chester, Primer of Quantum Mechanics, 1987, John Wiley, N.Y. ISBN 0-486-42878-8 • Hagen Kleinert, Path Integrals in Quantum Mechanics, Statistics, Polymer Physics, and Financial Markets, 3th edition, World Scientific (Singapore, 2004)(also available online here) • Griffiths, David J. (2004). Introduction to Quantum Mechanics (2nd ed.), Prentice Hall. ISBN 0-13-805326-X. • Omnes, Roland (1999). Understanding Quantum Mechanics, Princeton University Press. ISBN 0-691-00435-8. • H. Weyl, The Theory of Groups and Quantum Mechanics, Dover Publications 1950. • Max Jammer, "The Conceptual Development of Quantum Mechanics" (McGraw Hill Book Co., 1966) • Gunther Ludwig, "Wave Mechanics" (Pergamon Press, 1968) ISBN 0-08-203204-1 • Albert Messiah, Quantum Mechanics (Vol. I), English translation from French by G. M. Temmer, fourth printing 1966, North Holland, John Wiley & Sons. • José Croca (2003). Towards a Nonlinear Quantum Physics. World Scientific. ISBN 981-238-210-0. 1. Especially since Werner Heisenberg was awarded the Nobel Prize in Physics in 1932 for the creation of quantum mechanics, the role of Max Born has been obfuscated. A 2005 biography of Born details his role as the creator of the matrix formulation of quantum mechanics. This was recognized in a paper by Heisenberg, in 1950, honoring Max Planck. See: Nancy Thorndike Greenspan, “The End of the Certain World: The Life and Science of Max Born (Basic Books, 2005), pp. 124 - 128, and 285 - 286. 2. [1] Further readingEdit • Akarsu, B. (2008). Students' conceptual understanding of quantum physics in college level classroom environments. 
Dissertation Abstracts International Section A: Humanities and Social Sciences.
• Alper, G. (1989). Quantum mechanics as subjectivity and projective stimulus. Journal of Contemporary Psychotherapy, 19(4), Win 1989, 315-324.
• Auletta, G. (2003). Some lessons of quantum mechanics for cognitive science: Intentionality and representation. Intellectica, 36-37, 293-317.
• Baker, R. G. V. (1999). On the quantum mechanics of optic flow and its application to driving in uncertain environments. Transportation Research Part F: Traffic Psychology and Behaviour, 2F(1), Mar 1999, 27-53.
• Band, W. (1931). Wave-particles as transmitted possibilities: Quantum postulates deduced from logical relativity. Physical Review, 37, 1164-1170.
• Chari, C. T. (1978). Critique of the theory of consciousness as the "hidden variable" of quantum mechanics. Journal of Indian Psychology, 1(2), 119-129.
• Bao, L. (2000). Dynamics of student modeling: A theory, algorithms, and application to quantum mechanics. Dissertation Abstracts International Section A: Humanities and Social Sciences.
Peter Barker
Myungshik Kim
Spin-probed matter-wave interferometry of levitated diamond nanoparticles

Quantum mechanics is widely regarded as our most effective theory to date. Its accuracy and the insight it offers us are unprecedented and stunning. However, there remain serious problems with QM, and amongst them is the question of where it should break down and give way to classical mechanics. When we measure a quantum state we transition from unitary evolution governed by the Schrödinger equation to a probabilistic final outcome. However, what constitutes this measurement is not properly defined, other than a rough idea of scale. Unless we adopt a 'Many Worlds' interpretation, in which there is no wavefunction collapse, we must make a subjective distinction between a quantum system and a measurement device capable of collapsing superpositions into definite states.

Collapse theories are one possible resolution to this problem, which is known as the measurement problem. By modifying the Schrödinger equation they promise a new, general mechanics: one which reduces to quantum mechanics in the limit of small masses, and to classical mechanics in the limit of larger objects. Though various attempts have been made to resolve the measurement problem over the years, collapse theories are remarkable in that they are testable.

My work focuses on ways of testing collapse theories using optomechanical systems. Currently I am working on a scheme to use a levitated nanosphere, trapped inside an optical cavity, to probe the signature effects of the postulated noise field causing collapse. If such a field exists, it will interact with the sphere, acting on it like a Brownian noise source. In turn, this action on the position of the sphere will affect the light entering and leaving the cavity, and it is in the profile of the exiting light that we hope to look for evidence of collapse.
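The qualitative signature being described, extra noise heating the sphere's motion, can be illustrated with a toy calculation. The sketch below (a deliberately crude model added here, not the group's actual calculation; every parameter value is made up) treats the trapped sphere as a damped harmonic oscillator driven by ordinary noise, with and without a hypothetical additional white-noise force, and compares the resulting steady-state position variance.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: damped harmonic oscillator driven by ordinary environmental noise
# plus a hypothetical extra "collapse" noise force. All values are arbitrary,
# chosen only so that the extra heating is easy to see.
omega = 2 * np.pi * 1.0      # trap frequency
gamma = 0.05                 # damping rate
dt, steps = 1e-3, 100_000
thermal = 1.0                # strength of ordinary noise
collapse = 0.7               # strength of the hypothesized extra noise

def steady_state_variance(extra):
    x, v, xs = 0.0, 0.0, []
    kick = np.sqrt(thermal**2 + extra**2) * np.sqrt(dt)   # independent noises add in quadrature
    for i in range(steps):
        v += (-omega**2 * x - gamma * v) * dt + kick * rng.normal()
        x += v * dt
        if i > steps // 2:                                # discard the transient
            xs.append(x)
    return np.var(xs)

print("position variance, no extra noise  :", steady_state_variance(0.0))
print("position variance, with extra noise:", steady_state_variance(collapse))
```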
Structural Biochemistry/Organic Chemistry/Reagents From Wikibooks, open books for an open world < Structural Biochemistry‎ | Organic Chemistry Jump to: navigation, search A reagent is an inorganic or small organic molecule that helps the reactant react in a chemical reaction. List of Reagents, Its Uses and Information[edit] 1) AIBN [azobis (isobutyronitrile)] is used for radical initiator. AIBN is a white acicular crystal,which is insolvable in water,solvable in organic solvent such as methyl alcohol,ethanol,acetone,ethyl ether and light petroleum etc.lce point of pure product is 105 degree Celsius. The product is decomposed flashily and it releases nitrogen gas in the position of melting. It decomposes slowly under ordinary temperature,which should be stored under 20 degreee Celsius. AIBN is maily used as polymerization initiator of monomer such as chloroethylene,vinyl acetate,acrylonitrile,etc. Also, it is used as blowing agent for PVC,polyalkene,polyurethane,polyvinyl alcohol,acrylonitrile/butadiene copolymer,chloroethylene copolymer,acrylonitrile/ butadiene/styrene copolymer,polyisocyanate,polyvinyl acetate,polyamide and polyester,etc. Moreover,it is also used in other organic synthesis. 2) AlCl3 (aluminum trichloride) is used for Lewis acid catalyst. It is a yellowish or grayish-white, crystalline powder with a sharp oodor. It is used as a chemical intermediate for Aluminum compounds, as a catalyst for cracking petroleum, in preserving wood, and in medications, disinfectants, cosmetics, photography and textiles. 3) BF3 (boron trifluoride) is used for Lewis acid catalyst of chemical reactions. It is a colorless gas with a pungent odor. It reacts readily to form coordination complexes with molecules having at least one pair of unshared electrons. 4) BH3 (borane) is used for hydroboration. Borane-lewis base complexes are often found in literature. Borane-tetrahydrofuran (BTHF) and borane-dimethyl sulfide (BMS, DMSB) are often used as a borane source. Both reagents are available in solution (e.g. 1 M in THF), and are therefore easier to handle than diborane. Volatility and flammability are always a drawback. BMS is more stable than BTHF but has an unpleasant odor. 5) Br2 (bromine) is used for radical bromination and dibromination. Bromine compounds are used as pesticides, dyestuffs, water purification compounds, and as a flame-retardants in plastics. 1,2-dibromoethane is used as an anti-knock agent to raise the octane number of gasoline and allow engines to run more smoothly. This application has declined as a result of environmental legislation. Potassium bromide is used as a source of bromide ions for the manufacture of silver bromide for photographic film. 6) CCl4 (carbon tetrachloride) is used for nonpolar inert solvent. It is a manufactured chemical that does not occur naturally. It is a clear liquid with a sweet smell that can be detected at low levels. It is also called carbon chloride, methane tetrachloride, perchloromethane, tetrachloroethane, or benziform. Carbon tetrachloride is most often found in the air as a colorless gas. It is not flammable and does not dissolve in water very easily. It was used in the production of refrigeration fluid and propellants for aerosol cans, as a pesticide, as a cleaning fluid and degreasing agent, in fire extinguishers, and in spot removers. Because of its harmful effects, these uses are now banned and it is only used in some industrial applications. 7) CHCl3 (chloroform) is used for polar, nonflammable solvent. 
It is also a highly volatile, clear, colourless, heavy, and highly refractive. 8) CH2Cl2 (dichloromethane) is used for polar, nonflammable solvent. Chloroform has a relatively narrow margin of safety and has been replaced by better inhalation anesthetics. In addition, it is believed to be toxic to the liver and kidneys and may cause liver cancer. Chloroform was once widely used as a solvent, but safety and environmental concerns have reduced this use as well. Nevertheless, chloroform has remained an important industrial chemical. 9) CH2I2 (diiodomethane) is used for Simmons-Smith cyclopropanation. It is a colorless liquid. It decomposes upon exposure to light liberating iodine, which colours samples brownish. 10) CH2N2 (diazomethane) is used for making methyl esters from acid and cyclopropanation. It is not only toxic but also potentially explosive. 11) DIBAL (diisobutylaluminum) is used for selective reduction of esters, amides, and nitriles to aldehydes. 12) Dicycolhexylborane is used for hydroboration of alkyne derivatives and anti-Markovnikov hydration. 13) Dioxane is used for good solvent for dissolving water and organic substrates. It is a colorless liquid with a faint sweet odor similar to that of diethyl ether. It is classified as an ether. 14) DMD (dimethyldioxirane) is used for epoxidation of alkenes. It is the most commonly used dioxirane in organic synthesis, and can be considered as a monomer of acetone peroxide. 15) DMF (dimethylformamide) is used for polar aprotic solvent. This colourless liquid is miscible with water and the majority of organic liquids. DMF is a common solvent for chemical reactions. 16) DMSO (dimethylsulfoxide) is use for polar aprotic solvent. This colorless liquid is an important polar aprotic solvent that dissolves both polar and nonpolar compounds and is miscible in a wide range of organic solvents as well as water. 17) Et2O (diethyl ether) is used for medium polarity solvent. It is a colorless, highly volatile flammable liquid with a characteristic odor. 18) FeBr3 (iron tribromide) is used for Lewis acid catalyst in the halogenation of aromatic compounds. 19) H2 (hydrogen) is used for hydrogenation and reduction of nitro. Hydrogen is the only element that can exist without neutrons. Hydrogen’s most abundant isotope has no neutrons. Hydrogen forms both positive and negative ions. It does this more readily than any other element. It is the most abundant element in the universe. Hydrogen is the only atom for which the Schrödinger equation has an exact solution. Moreover, it reacts explosively with the elements oxygen, chlorine and fluorine: O2, Cl2, F2. 20) H2O2 (hydrogen peroxide) is used for oxidative workup of hydroboration. It is used to help stop infection in cuts or scrapes, we can use it as a mouthwash when diluted with water, and it is also used to bleach hair. 21) Hg(OAc)2 (mercuric acetate) is used for oxymercuriation. Mercuric Acetate can affect in breating and by passing through skin. Also, mercuric acetate should be handled as a teratogen with extreme caution. Mercury poisoning can cause "shakes", irritability, sore gums. It increased saliva, personality change and permanent brain or kidney damage. Mercury accumulates in the body. 22) HgSO4 (mercuric sulfate) is used for Markovnikov hydration of alkynes. It is an odorless solid that forms white granules or crystalline powder. In water, it separates into an insoluble sulfate with a yellow color and sulfuric acid. 23) HIO4 (metaperiodic acid) is used for oxidative cleavage of 1,2-diols. 
In dilute aqueous solution, periodic acid exists as discrete hydronium and metaperiodate ions. 24) HMPA (hexamethylphosphoramide) is used for preventing aggregation (polar aprotic solvent). It is a phosphoramide having the formula [2N]3PO. 25) K2Cr2O7/H2SO4 (potassium dichromate) is used in oxidation of alcohols. (Jones Reagent) 26) LAH (lithium aluminum hydride) is used for very strong hydride source and reduces esters to alcohols. It is an inorganic compound with the chemical formula LiAlH4. 27) LiAl(Ot-Bu)3H [lithium tri(t-butoxy) aluminum hydride] is used for modified hydride source and reduces acid chlorides to aldehydes. 28) LDA (lithium diisopropylamide) is used for strong, hindered base. 29) Lindlar's catalyst is used for reducing alkynes to cis-alkenes. 30) mCPBA (m-chlroperbenzoic acid) is used for epoxidation of alkenes. 31) MnO2 (manganese dioxide) is used for selective oxidation of allylic alcohols. 32) MsCl [methanesulfonyl chloride(mesyl chloride)] is used for converting hydroxyl to a good LG. 33) NaBH4 (sodium borohydride) is used for mild source of hydride. It is an inorganic compound. 34) NaBH3CN (sodium cyanoborohydride) is used for reductive amination and hydride source stable to mild acid. 35) NaNO2 (sodium nitrite) is used for diazotization of amines. Sodium nitrite is a salt and an anti-oxidant that is used to cure meats like ham, bacon and hot dogs. Sodium nitrite serves a vital public health function: it blocks the growth of botulism-causing bacteria and prevents spoilage. It is also gives cured meats their characteristic color and flavor. Also, USDA-sponsored research indicates that sodium nitrite can help prevent the growth of Listeria monocytogenes, an environmental bacterium that can cause illness in some at-risk populations. 36) NBS (N-bromosuccinimide) is used for bromine surrogate. It is a brominating and oxidizing agent that is used as source for bromine in radical reactions. For example: allylic brominations and various electrophilic additions. The NBS bromination of substrates such as alcohols and amines, followed by elimination of HBr in the presence of a base, leads to the products of net oxidation in which no bromine has been incorporated. 37) n-BuLi is used for strong base. 38) NCS (N-chlorosuccinimide) is used for chlorine surrogate. 39) O3 (ozone) is used for oxidative cleavage of alkenes. It is a triatomic molecule, consisting of three oxygen atoms. It is an allotrope of oxygen that is much less stable than the diatomic allotrope, breaking down with a half life. 40) OsO4 (osmium tetroxide) is used for dihydroxylation of alkenes. 41) PCC (pyridinium chlorochromate) is used for selective oxidation of primary alcohols to aldehydes. 42) PPH 3 (triphenylphosphine) is used for making Wittig reagents. 43) SOCl2 (thionyl chloride) is used for converting alcohols to alkyl chlorides. 44) THF (tetrahydrofuran) is used for medium polarity solvent. 45) pTsCl [p-toluenesulfonyl chloride (tosyl chloride)] is used for converting hydroxyl to a good LG. 46) pTsOH [p-toluenesulfonic acid (tosic acid)] is used for oragnic-soluble source of strong acid. 47) Zn(Hg) (zinc amalgam) is used for Clemmensen reduction (with HCl). 48) Jones Reagent (CrO3, H2SO4, H2O) is a solution of chromium trioxide in diluted sulfuric acid that can be used safely for oxidations of organic substrates. 49) SOCL2 - forms alkyl chlorides from alcohols 50) Clemmensen Reduction (Zn(Hg), HCl) - removes a ketone and replaces it with hydrogens. 
51) Grignard reagents (R-Mg-X) - organometallic reagents used in the Grignard reaction, in which an alkyl- or aryl-magnesium halide adds to the carbonyl group of an aldehyde or ketone.
Strange Paths Physics, computation, philosophy of mind 2010-01-23T07:44:47Z Copyright 2009 WordPress xantox <![CDATA[Canon 1 a 2]]> 2009-01-18T02:39:25Z 2009-01-18T02:39:25Z Gallery In the enigmatic Canon 1 a 2 from J. S. Bach’s “Musical Offering” (1747) (also known as “crab canon” or “canon cancrizans”), the manuscript shows a single score, whose beginning joins with the end. This space is topologically equivalent to a bundle of the line segment over the circle, known as a Möbius strip. The simultaneous performance of the deeply related forward and backward paths gives appearance to two voices, whose symmetry determines a reversible evolution. A musical universe is built and then is “unplayed” back into silence.1 1. Animation created in POV-Ray by Jos Leys. Music performed by xantox with Post Flemish Harpsichord, upper manual. [] xantox <![CDATA[Atomic orbital]]> 2008-04-20T00:14:29Z 2008-04-20T00:14:29Z Gallery according to the Schrödinger equation (colors represent phase). In atomic matter, electrons orbiting the nucleus do not follow any determined classical path, but exist for each quantum state within an orbital, which can be visualized as a cloud of [...]]]> Click image to zoom1 1. © Dean E. Dauger [] xantox <![CDATA[Reversible computation]]> 2008-01-20T17:33:05Z 2008-01-20T17:33:05Z Computation A computation (from latin computare, “to count”, “to cut”), is the abstract representation of a physical process in terms of states, and transitions between states or events. The definition of possible states and events is formulated in a computation model, such as the Turing machine or the finite automaton. For example, a Turing machine state is the complete sequence of symbols on its tape plus the head’s position and internal symbol, and an event is the motion between two successive states, defined deterministically as a combination of read, write, move left and move right elementary motions. In order to perform a computation, a robust mapping is first established between a computation model and a physical system, meaning that states and events in the model are used to label states and events observed in the system, and that the choosen correspondence is sufficiently stable in respect to various kinds of perturbations. The system is then prepared in an initial state and is allowed to evolve through a path of events within the space of states, until it eventually reaches a state labeled as final. The discretized dynamics of the computational space may be represented with a directed graph, where nodes are possible states of the system and edges are events transforming a state into another. Cellular automata state transition graph for n=3 rule 249, L=15, seed 0, displaying trees rooted in attractor cycles. (© A. Wuensche, M. Lesser) Cellular automata state transition graph for n=18 rule 110 (© A. Wuensche, M. Lesser) Irreversible computational dynamics Click image to zoom1 Logical reversibility A function is said reversible (from latin revertere, ‘to turn back’) if, given its output, it is always possible to determine back its input, which is the case when there is a one-to-one relationship between input and output states. If the space of states is finite, such a function is a permutation. Logical reversibility implies conservation of information. When several input states are mapped onto the same output state, then the function is irreversible, since it is impossible by only knowing the final state to find back the initial state. 
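The one-to-one criterion just stated is easy to check mechanically. The short sketch below (an illustration added here, in Python; the gate list is a selection of standard examples, including the three-bit Toffoli gate that appears later in the text) enumerates each gate's truth table and tests whether distinct inputs always give distinct outputs.

```python
from itertools import product

def is_reversible(f, n_inputs):
    """A boolean function is logically reversible iff it is one-to-one on its inputs."""
    inputs = list(product((0, 1), repeat=n_inputs))
    outputs = [f(*bits) for bits in inputs]
    return len(set(outputs)) == len(inputs)

gates = {
    "NOT        (1 bit) ": (lambda a: (1 - a,), 1),
    "SET TO ONE (1 bit) ": (lambda a: (1,), 1),
    "AND        (2 bits)": (lambda a, b: (a & b,), 2),
    "NAND       (2 bits)": (lambda a, b: (1 - (a & b),), 2),
    "Toffoli    (3 bits)": (lambda a, b, c: (a, b, c ^ (a & b)), 3),
}

for name, (f, n) in gates.items():
    print(name, "reversible" if is_reversible(f, n) else "irreversible")
```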
In boolean algebra, NOT is reversible, while SET TO ONE is irreversible. Two-argument boolean functions like AND, OR, XOR are also irreversible, since they map 22 input states into 21 output states so that information is lost in the merging of paths, like shown in the following graph of a NAND computation, whose reverse evolution is no longer deterministic. Irreversible NAND computation The right side tries to depict the inverse mapping to the left side Physical reversibility Known laws of physics are reversible. This is the case both of classical mechanics, based on lagrangian/hamiltonian dynamics, and of standard quantum mechanics, where closed systems evolve by unitary transformations, which are bijective and invertible. As a consequence, when a physical system performs an irreversible computation, the computation model’s mapping indicates that the computing system cannot stay closed. More precisely, since an irreversible computation reduces the space of physical information-bearing states, then their entropy must decrease by increasing the entropy of the non-information bearing states, representing the thermal part of the system. In 1961 Landauer studied this thermodynamical argument, and proposed the following principle: if a physical system performs a logically irreversible classical computation, then it must increase the entropy of the environment with an absolute minimum of heat release of kT x ln(2) per lost bit (where k is Boltzmann’s constant and T the temperature, ie. about 3 x 10-21 joules at room temperature),2 which emphasizes two facts: • the logical irreversibility of a computation implies the physical irreversibility of the system performing it (”information is physical”); • logically reversible computations may be at least in principle intrinsically nondissipative (which bears a relationship with Carnot’s heat engine theorem, showing that the most efficient engines are reversible ones, and Clausius theorem, attributing zero entropy change to reversible processes). Reversible embedding of irreversible computations Landauer further noticed that any irreversible computation may be transformed into a reversible one by embedding it into a larger computation where no information is lost, eg. by replicating every output in the input (’sources’) and every input in the output (’sinks’). For example, the NAND irreversible function seen above may be embedded in the following bijection, also known as Toffoli gate3 (the original function is indicated in red): NAND embedding in a reversible Toffoli gate The additional bits of information, like Ariadne’s threads, ensure that any computational path may be reversed: they are the garbage of the forward path and the program of the backwards path. Instead of losing them in the environment, they are kept in the controlled computational space. Toffoli gates are universal reversible logic primitives, meaning that any reversible function may be constructed in terms of Toffoli gates. The Fredkin gate is another example of universal reversible logic primitive. It exchanges its two inputs depending on the state of a third control input, thus allowing to embed any computation into a conditional routing of paths carrying conserved signals. Some railroad switches are reversible Reversible computation models The billiard-ball model, invented by Fredkin and Toffoli,4 was one of the first computation models focusing on implementation with reversible physical components. 
Based on the laws of classical mechanics, it is equivalent to the formalism of kinetic theory of perfect gases. The presence of moving rigid spheres at specified points are defined as 1’s, their absence as 0’s. Interactions by means of right-angle collisions allow to construct various logic primitives, like for example the following 2-input, 3-output universal gate due to Feynman,5 who also proposed with Ressler a billiard-ball version of the Fredkin gate. Feynman gate (© CJ. Vieri, MIT) Feynman switch gate B detects A without affecting its path In practice, these computing spheres would be very unreliable, as instability arising from arbitrarily small perturbations would quickly generate chaotic deviations, producing an output saturated with errors. The errors may be corrected (for example, by adding potentials to stabilize the paths), however the error correction process is itself irreversible and dissipative - since it has to erase the erroneous information. Hence, error correction appears to be the only aspect of computation defining a lower bound to energy dissipation. A stabler approach is that of the Brownian computation model,6 where thermal noise is on the contrary allowed to freely interact with a computing system near equilibrium. Potential energy barriers define the paths of a computational space, where the system walks randomly until it eventually reaches a final state. RNA polymerase, the enzyme involved in DNA transcription, is an example of a brownian logically reversible tape-copying machine. The DNA replication process also follows a similar mechanism, but adds a logically irreversible error-correcting step. Lecerf-Bennett reversal The embedding method is however insufficient to build a physically reversible universal computer, since the growing amount of information needing to be replicated for each event would saturate any finite memory. Then, computation would come to an end - unless the memory would be irreversibly erased, but then dissipation would have been merely postponed, and not avoided. This seemed to rule out the possibility of useful reversible computing machines, until a remarkable solution was found by Bennett7 (earlier work by Lecerf8 anticipated its formal method), showing that it is possible at least in principle to perform an unlimited amount of computation without any energy dissipation. The reversible system shall compute the embedding function twice: the first time “forwards” to obtain and save the computation result, and the second time “backwards”, as a mirror-image computation of the inverse function, de-computing the first step and returning the closed system to its initial state. M. C. Escher, Swans (1956) M. C. Escher, Swans (1956). All M.C. Escher works (c) 2007 The M.C. Escher Company - the Netherlands. All rights reserved. Used by permission. Click image to zoom Logical irreversibility and Maxwell’s demon In 1867 Maxwell devised a thought experiment involving a finite microscopic “demon” capable of observing the motion of individual molecules. This demon guards a small hole separating two containers, filled with gas at the same temperature. When a molecule approaches, the demon checks its speed and then opens or closes a shutter, so as slower molecules always go in one container (cooling it), and faster molecules go in the other (heating it), in apparent violation of the 2nd law of thermodynamics. 
A first important step toward the solution of this controversial paradox was taken in 1929 by Szilard9 who, after avoiding dualistic traps by substituting the intelligent demon with a simple machine, suggested that proper accounting of entropy is restored in the process of measuring the molecule position. This explanation became the standard one until 1981, when Bennett showed6 that the fundamentally dissipative step is surprisingly not the measurement (which can be done reversibly) but the logically irreversible erasure of demon’s memory, to make room for new measurements. Reversibility in quantum computation Quantum computation takes advantage of the physical effects of superposition and entanglement, leading to a qualitatively new computation paradigm.10 In quantum mechanical computation models, all events occur by unitary transformations, so that all quantum gates are reversible. Quantum systems are less susceptible to certains kinds of errors affecting classical computations, since their discrete spectrum prevents trajectories from becoming chaotic, so that, for example, a quantum “billiard ball model” is more reliable than its classical counterpart. However, quantum systems are also affected by new sources of error, as a consequence of interactions with the environment, such as the loss of quantum coherence. It is possible to correct generic quantum errors up to a limit,11 so as to reconstruct an error-free quantum state, at the price of performing an irreversible quantum erasure of the erroneous quantum information. I thank Charles H. Bennett for stimulating comments on the draft. 1. A. Wuensche, M. Lesser, “The global dynamics of cellular automata“, Ref Vol. I of the Santa Fe Institute Studies in the Sciences of Complexity. Addison-Wesley (1992) [Images of cellular automata state transition graphs]. [] 2. R. Landauer, “Irreversibility and heat generation in the computing process“, IBM Journal of Res. and Dev., 5:3, 183 (1961) [Logical irreversibility, Landauer’s principle]. [] 3. T. Toffoli, “Reversible computing“, Tech. Memo MIT/LCS/TM-151, Mit Lab. for Comp. Sci. (1980) [Toffoli gate, reversible automata]. [] 4. E. Fredkin, T. Toffoli, “Conservative logic“, International Journal of Theoretical Physics, 21:3-4, 219-253 (1982) [Billiard ball model]. [] 5. R. P. Feynman, “Feynman lectures on computation (1984-1986)”, Perseus Books (2000) [] 6. C. H. Bennett, “The thermodynamics of computation - a review“, International Journal of Theoretical Physics, 21:12, 905-940 (1982) [Brownian computation model; logical irreversibility and Maxwell’s demon]. [] 7. C. H. Bennett, “Logical reversibility of computation“, IBM Journal of Res. and Dev., 17:6 525 (1973). [In this paper, related to the problem of the connection between computing and heat generation explored by Landauer, Bennett devised the “save result and reverse” method and proved that any irreversible computation may be simulated reversibly]. [] 8. Y. Lecerf, “Machines de Turing réversibles“, english translation by M. Frank, “Reversible Turing Machines“, Comptes Rendus Hebdomadaires des Séances de L’académie des Sciences 257:2597-2600 (1963). [In this mathematical paper, unrelated to issues of physical reversibility, Lecerf sought to design a reversible Turing machine. 
It is the first work proposing the method of saving the computation history and then decomputing it away, though it had initially little impact and was ‘discovered’ only much after Bennett’s results, perhaps because it was not published in english and Lecerf himself did not emphasize it. It has a minor flaw, ie the inverse of a read-write-shift quintuple is a quintuple of different sort, namely shift-read-write]. [] 9. L. Szilard, “Über die Entropieverminderung in einem thermodynamischen System bei Eingriffen intelligenter Wesen“, Journal Zeitschrift für Physik, 53, 840-856 (1929); english translation “On the decrease of entropy in a thermodynamic system by the intervention of intelligent beings” in Behavioral Science, 9:4, 301-310 (1964). [] 10. D. Deutsch, “Quantum Theory, the Church-Turing Principle, and the Universal Quantum Computer“, Proc. Roy. Soc. Lond., A400, 97–117 (1985). [Foundation of the quantum model of computation, universal quantum Turing machine] [] 11. A. R. Calderbank, P. W. Shor, “Good quantum error-correcting codes exist“, Phys. Rev. A 54, 1098-1105 (1996). [] xantox <![CDATA[Marangoni flow]]> 2008-01-06T00:45:34Z 2008-01-06T00:45:34Z Gallery Liquid surfaces are pulled by the intermolecular forces, which are unbalanced on the boundary, producing surface tension. When liquid layers with different surface tension get in contact, these forces cause a flow, also known as Marangoni effect,1 which is also the origin of the beautiful patterns found in the ancient japanese art of Suminagashi (”floating ink”). In this image, a film of oleic acid surfactant (with surface tension 32.5 mN/m) quickly spreads spontaneously about 2.5 mm over a layer of glycerol (with surface tension 63.4 mN/m). Both Marangoni and capillary stresses cause variations in the film thickness, leading to dendritic flow patterns. The contour lines are interference fringes. Branching Dynamics in Surfactant Driven Flow Click image to zoom2 1. C. Marangoni, “Über die Ausbreitung der Tropfen einer Flüssigkeit auf der Oberfläche einer anderen”, Ann. Phys. Leipzig, 143:337-354 (1871). [] 2. © B. J. Fischer, A. A. Darhuber, S. M. Troian, Department of Chemical Engineering, Princeton University [] xantox <![CDATA[Water Clouds]]> 2007-09-17T10:28:18Z 2007-09-17T10:28:18Z Gallery Terrestrial clouds are the result of extraordinarily complex interactions between water and air, with several feedback mechanisms combining the effects of fluid dynamics and thermodynamics.1 © 2004 Sarah Robinson & Jean Hertzberg, University of Colorado Click image to zoom2 The kind of convective clouds known as cumulus are produced by the vertical winds occurring in regions of warm moist air, per Archimedes principle. This rapid lifting results in adiabatic expansion and cooling, and consequent accretion of water droplets. The irregular distribution of droplets scatters sunlight geometrically in all directions, producing a bright white appearance like in snow, decaying into gray shades as per their optical thickness. Each cloud is short-lived, lasting approximately 15 minutes in average. 1. H. R. Pruppacher, J. D. Klett, “Microphysics of clouds and precipitation“, Springer (1997); R. A. Houze, “Cloud Dynamics“, Academic Press (1994) [] 2. © 2004 Sarah Robinson, Flow Visualization Course, University of Colorado [] xantox <![CDATA[Genomes inside genomes]]> 2007-09-05T17:17:07Z 2007-09-05T17:17:07Z News Scientists at the University of Rochester and the J. 
Craig Venter Institute have discovered a copy of the entire genome of Wolbachia, a bacterial parasite, residing inside the genome of its completely different host species Drosophila Ananassae, the fruitfly. To isolate the fly’s genome from the parasite’s, the flies were fed with a simple antibiotic, killing the Wolbachia, but Wolbachia genes were still there. The scientists found that the genes were residing directly inside the second chromosome of the insect, and that some of these genes are even transcribed in uninfected flies, so that copies of the gene sequence are made in cells that could be used to make Wolbachia proteins. © University of Rochester xantox <![CDATA[Bouncing liquid jets]]> 2007-07-19T19:30:15Z 2007-07-19T19:30:15Z News Physicists from the University of Texas at Austin found that “a liquid jet can bounce off a bath of the same liquid if the bath is moving horizontally with respect to the jet. Previous observations of jets rebounding off a bath (e.g. Kaye effect) have been reported only for non-Newtonian fluids, while we observe bouncing jets in a variety of Newtonian fluids, including mineral oil poured by hand. A thin layer of air separates the bouncing jet from the bath, and the relative motion replenishes the film of air. Jets with one or two bounces are stable for a range of viscosity, jet flow rate and velocity, and bath velocity. The bouncing phenomenon exhibits hysteresis and multiple steady states”.1 Bouncing liquid jets 1. M. Thrasher, S. Jung, Y. Kwong Pang, C. Chuu, H. L. Swinney, “The Bouncing Jet: A Newtonian Liquid Rebounding off a Free Surface“, arXiv:0707.1721v1 [physics.flu-dyn] (2007). [] xantox <![CDATA[Teleportation without shared entanglement]]> 2007-07-13T23:07:04Z 2007-07-13T23:07:04Z News xantox <![CDATA[Classical Molecules]]> 2007-07-09T06:28:00Z 2007-07-09T06:28:00Z Gallery Animation showing the interaction of four charges of equal mass1, two positive and two negative, in the approximation of classical electromagnetism. The particles interact via the Coulomb force, mediated by the electric field represented in yellow. A repulsive “Pauli force” of quantum mechanical origin, which becomes very large at a critical distance of about the radius of the spheres shown in the animation, keeps the charges from collapsing into the same point. Additionally, the motion of the particles is damped by a term proportional to their velocity, allowing them to “settle down” into stable (or meta-stable) states. When the charges are allowed to evolve from the initial state, the first thing that happens (very quickly, since the Coulomb attraction between unbalanced charges is very large) is that they pair off into dipoles. Thereafter, there is still a (much weaker) interaction between neighboring dipoles (van der Waals force). Although in principle it can be either repulsive or attractive, there is a torque that rotates the dipoles so that it is attractive, eventually bringing the two dipoles together in a bound state. This mechanism binds the molecules of some substances into a solid. 1. © 2004 MIT TEAL/Studio Physics Project, John Belcher [] xantox <![CDATA[Axions not confirmed by PVLAS]]> 2007-07-09T04:03:21Z 2007-07-09T04:03:21Z News From PhysicsWeb News: “The existence of a hypothetical particle called the axion has been put into further doubt now that the team that first claimed its discovery has failed to reproduce their results. 
Physicists working on the PVLAS experiment in Italy say that the tiny rotation in the polarization of laser light that they reported last year does not support the existence of axions, but rather is an artifact related to how the experiment had been performed”1. 1. E. Zavattini et al., “New PVLAS results and limits on magnetically induced optical rotation and ellipticity in vacuum“, arXiv:0706.3419v1 (2007) []
Question: I don't really know much about quantum mechanics, but would like to know one simple fact. The state function $\Psi(r, t)$ is such that its magnitude gives the probability density of the position of the particle, and the magnitude of its Fourier transform gives the probability density of its momentum. Is there any rule that these state functions are smooth, i.e. possess derivatives of all orders everywhere?

Comments:
I'm not sure your statement about the Fourier transform is quite correct. Fourier-transforming the wavefunction in terms of position will indeed give the momentum wavefunction, but whether this can be done on the probability distribution ($|\psi|^2$), I do not know. Hopefully someone more mathematically adept can enlighten me. – Noldorin Nov 18 '10 at 15:35
@Noldorin: I meant it on the wave function itself, not on the magnitude/probability distribution. Thanks for the clarification in the question. – Rajesh D Nov 18 '10 at 15:36
Ok, sure. That makes more sense now. :) (And in your question, I'm also presuming you define $S(r, t) = |\Psi(r, t)|^2$.) – Noldorin Nov 18 '10 at 15:37
Can you change the title to something meaningful like "Is it guaranteed that the wavefunction is well behaved everywhere?"? – Pratik Deoghare Nov 18 '10 at 16:11
Related: Is the world $C^\infty$? – Tobias Kienzler Mar 1 '11 at 9:09

2 Answers

Accepted answer: The only general requirement on the state function for a single, spinless, quantum particle (quanton) in a physically realistic state is that the state function be square integrable, i.e., the integral of its absolute value squared over all space be finite. Non-square-integrable state functions are used for many purposes, but they are all idealizations that do not, individually, represent realistic states. If the state function is also to belong to the domain of definition of the Hamiltonian, then, in non-relativistic QM, the state function must be spatially differentiable to second order as well. State functions which are square integrable but not second-order differentiable do not satisfy the Schroedinger equation. But their time evolution is still determined by continuity considerations, since the second-order differentiable state functions are everywhere dense in the state space, i.e., Hilbert space.

Is there any derivative operator in QM? – Rajesh D Nov 18 '10 at 16:19
Momentum is represented by the derivative operator, up to a factor. – Raskolnikov Nov 18 '10 at 17:20
Derivative of what? – Rajesh D Nov 18 '10 at 21:19
This is a slightly convoluted answer. I'm actually not sure what point you're trying to get across, I'm afraid. – Noldorin Nov 19 '10 at 19:40

Second answer: Some of the conditions for wavefunctions $\Psi(x)$, for all elements $x$ of a subset of $\mathbb{R}^{d}$ (in the hyperphysics link, they use $x \in \mathbb{R}$):
1. Must be a solution of the Schrodinger equation.
2. Must be normalizable.
3. Must be a continuous function of $x$.
4. The slope of the function in $x$ must be continuous, that is, $\frac{\partial \Psi(x)}{\partial x}$ must be continuous.
The property of being square-integrable is included in condition 2.

@Robert Smith: "some of the conditions", so do you think there are more? – Rajesh D Nov 18 '10 at 16:25
The solution to the Dirac delta potential is not continuously differentiable, so it violates condition 4.
The assumption for the delta potential is to separate the space into $x<0$ and $x>0$. Therefore, the solution is one wavefunction for $x<0$ and another for $x>0$. Is that what you're saying? I don't see how that violates condition 4. – Robert Smith Nov 18 '10 at 16:55

@Robert: your point 3 says precisely that it has to be continuous at every point $x$. What you forgot to include (in this formulation) is that a particle in QM lives in the Hilbert space $H = L^2(\mathbb{R}^d)$, so indeed it needs to be defined (and continuous) for every $x \in \mathbb{R}^d$. The problem with the delta potential arises only because it's not quite physical to assume an infinite jump in the potential. You do this to make things simple, e.g. to disallow movement through walls. But in reality walls are made of atoms, so the potential is smooth (just very fast growing). – Marek Nov 18 '10 at 18:13

For sure, Marek, but I don't see why the Schrödinger equation is not considered a mathematical idealization then? After all, it's only a non-relativistic approximation. And if we're going to start like this, everything that has ever been conceived of in physics is an idealization. Your decision to consider one more physically relevant than the other is arbitrary if you don't specify the bounds within which the approximation is valid or not. So, without doubt, the Schrödinger equation can do more than models for Anderson localization, which are not unphysical, only less broadly applicable. – Raskolnikov Nov 19 '10 at 15:57
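The premise of the question above – that the Fourier transform of the position-space wavefunction gives the momentum-space amplitude, whose squared magnitude is the momentum probability density – can be checked numerically for a smooth, square-integrable example. The following is a minimal sketch (not code from the thread), assuming units with $\hbar = 1$ and an arbitrarily chosen Gaussian width; as a bonus it verifies the uncertainty product $\sigma_x \sigma_p \ge \hbar/2$, which a Gaussian saturates.

```python
import numpy as np

hbar = 1.0
sigma = 0.5                                   # position-space width, chosen arbitrarily
x = np.linspace(-20.0, 20.0, 4096)
dx = x[1] - x[0]

# Normalized Gaussian wavefunction: smooth and square integrable
psi_x = (2 * np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (4 * sigma**2))

# Momentum grid and momentum-space amplitude via FFT; the continuum normalization
# is restored by the factor dx / sqrt(2*pi*hbar). Only the magnitude is used below,
# so the FFT's overall phase convention (from x not starting at 0) does not matter.
p = 2 * np.pi * hbar * np.fft.fftshift(np.fft.fftfreq(x.size, d=dx))
psi_p = np.fft.fftshift(np.fft.fft(psi_x)) * dx / np.sqrt(2 * np.pi * hbar)

prob_x = np.abs(psi_x) ** 2                   # position probability density
prob_p = np.abs(psi_p) ** 2                   # momentum probability density
print("norm in x:", np.trapz(prob_x, x))      # ~ 1
print("norm in p:", np.trapz(prob_p, p))      # ~ 1

# Widths satisfy the uncertainty relation; the Gaussian saturates the bound hbar/2
sigma_x = np.sqrt(np.trapz(x**2 * prob_x, x))
sigma_p = np.sqrt(np.trapz(p**2 * prob_p, p))
print("sigma_x * sigma_p =", sigma_x * sigma_p, ">= hbar/2 =", hbar / 2)
```

The same script run on a wavefunction that is square integrable but not smooth (say, a truncated triangle) would still give normalized densities in both spaces, illustrating the accepted answer's point that square integrability, not smoothness, is the general requirement.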
Question

Quantum mechanics: suppose that there is a particle with orbital angular momentum $|L|$, but the particle also has spin $|S|$. How do I reflect this in the Schrödinger equation? I would like to know how the Schrödinger equation looks in each case – when a particle has a particular orbital angular momentum, when it has some spin, and especially when both occur.

Comments:

Your question makes little sense in the context of quantum mechanics. Particles don't follow paths, specifically not circles, and spin is an intrinsic property, not one of motion. – A.O.Tell Oct 12 '12 at 9:19

@A.O.Tell Modified the question. – War Oct 12 '12 at 9:32

Just adding spin means you attach a tensor factor space containing the spin representation to the particle space. The Schroedinger equation doesn't change unless you add an interaction term that incorporates spin. Which term that is depends on your actual physical model. – A.O.Tell Oct 12 '12 at 9:36

Accepted answer

The Schroedinger equation does not describe spin. If you need to describe spin as well, you should use the Pauli equation or the Dirac equation (for spin 1/2).

Comments:

So angular momentum can be reflected, but spin can't? – War Oct 12 '12 at 9:41

That's right, unless you use some unusual definition of the Schroedinger equation. – akhmeteli Oct 12 '12 at 9:47

What is understood by the Schrödinger equation here, and how should "should" be interpreted? – NikolajK Oct 12 '12 at 11:11

The Schrödinger equation is understood as the second equation in the Wikipedia article, marked "Time-dependent Schrödinger equation (single non-relativistic particle)". If you understand it as the first equation there (marked "Time-dependent Schrödinger equation (general)"), then you include, e.g., the Dirac equation there and what not. I cannot add anything to dictionary definitions of "should". – akhmeteli Oct 12 '12 at 11:44

This answer is incorrect. The term Schrödinger's equation refers to any equation of the form $i\hbar\frac{d}{dt}|\Psi\rangle =\hat{H}|\Psi\rangle$, or coordinate representations of it. For a single spinless nonrelativistic particle, this reduces to the form $i\hbar\frac{\partial}{\partial t}\Psi(\mathbf{x},t)=-\frac{\hbar^2}{2m}\nabla^2\Psi(\mathbf{x},t)+V(\mathbf{x},t)\Psi(\mathbf{x},t)$ you quote from Wikipedia. In other cases it can be quite different; one example of this is the Pauli equation, also known as the Schrödinger–Pauli equation. – Emilio Pisanty Oct 12 '12 at 16:59

Second answer

I think we can talk about spin and spin interactions with the standard Schrodinger equation. Start with spin–orbit (LS) coupling; next see the Zeeman effect, and especially the Paschen–Back effect. You need perturbation theory to pick up on spin effects given the standard Schrodinger model of the atom as seen on Wikipedia: first-order perturbation theory with these fine-structure corrections yields a formula for the hydrogen atom in the Paschen–Back limit.[2]
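A.O.Tell's comment above – that spin enters by tensoring the particle's spatial Hilbert space with a spin space, and that the Schrödinger equation only changes if a spin-dependent interaction term is added – can be illustrated numerically. The following is a rough sketch with assumed parameter values (a 1D harmonic trap and a Zeeman-like term $-\mu B\,\sigma_z$ chosen purely for illustration), not a derivation of the Pauli equation.

```python
import numpy as np

hbar, m, mu, B = 1.0, 1.0, 1.0, 0.1          # units and coupling values chosen arbitrarily
N = 200
x = np.linspace(-10.0, 10.0, N)
dx = x[1] - x[0]

# Spatial part: finite-difference kinetic energy plus a harmonic potential (omega = 1)
lap = (np.diag(np.ones(N - 1), 1) - 2.0 * np.eye(N) + np.diag(np.ones(N - 1), -1)) / dx**2
H_space = -(hbar**2 / (2 * m)) * lap + np.diag(0.5 * m * x**2)

# Spin part: Pauli matrix sigma_z acting on the 2-dimensional spin factor
sigma_z = np.array([[1.0, 0.0], [0.0, -1.0]])

# Full Hamiltonian on the tensor-product space (dimension 2N), built with Kronecker products:
# (spin-independent part) x (identity on spin) plus a Zeeman-like (identity on space) x sigma_z
H = np.kron(H_space, np.eye(2)) - mu * B * np.kron(np.eye(N), sigma_z)

# The oscillator ground state splits into two Zeeman levels separated by ~ 2*mu*B
E = np.linalg.eigvalsh(H)
print(E[0], E[1], "splitting:", E[1] - E[0])   # ~ 0.4, 0.6, 0.2
```

Without the $\sigma_z$ term the Hamiltonian is block-diagonal and both spin components obey the same spin-independent Schrödinger equation; it is the added coupling term that makes the dynamics depend on the spin degree of freedom, which is what the comment and the Zeeman/Paschen–Back discussion describe.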
Question

You know how there are no antiparticles for the Schrödinger equation; I've been pushing around the equation and have found a solution that seems to indicate there are. I've probably missed something obvious, so please read on and tell me the error of my ways.

Take Schrödinger's equation (from the Princeton Guide to Advanced Physics, p. 200), write $\hbar = 1$; then for a free particle, separating $\Psi(x,t) = \psi(x)T(t)$,
$$i \psi \frac{\partial T}{\partial t} = \frac{1}{2m}\frac{\partial ^2\psi }{\partial x^2}T$$
$$i \frac{1}{T} \frac{\partial T}{\partial t} = \frac{i^2}{2m}\frac{1}{\psi }\frac{\partial ^2\psi }{\partial x^2}$$
This is true iff both sides equal a constant $\alpha$, and it can be shown that there is a general solution
(1) $$\psi (x,t) := \psi (x) e^{-i E t}$$
But if I break time into two sets, past $-t$ and future $+t$, and allow energy to take only negative values for $-t$ and positive values for $+t$, then the above general solution can be written as
(2) $$\psi (x,t) := \psi (x) e^{-i (-E) (-t)}$$
and it can be seen that (2) is the same as (1) (diagrammatically, an energy–time diagram).

And now if I describe the time as monotonically decreasing for $t < 0$, it appears as if matter (read antimatter) is moving backwards in time. It's as if matter and antimatter are created at time zero (read: the rest frame), which matches an interpretation of the Dirac equation.

This violates Hamilton's principle that energy can never be negative; however, I think I can get round that by suggesting we never see the negative states, only the consequences of antimatter scattering light, which moves forward in time to our frame of reference. In other words, the information from the four-vector of the antiparticle is rotated to our frame of reference.

Now I've never seen this before, so I'm guessing I've missed something obvious – many apologies in advance; I'm not trying to prove something, just confused.

Comment:

Shouldn't the second line (where you have rearranged) have a $-i$ at the front, since you have multiplied both sides by $i^2$? On the LHS you get $i^3 = -i$. – PPG Nov 26 '13 at 0:22

Answer

The functions $-iEt$ and $-i(-E)(-t)$ are exactly the same, so they obviously correspond to the same sign of energy if they appear in the exponent defining $|\psi\rangle$.

It seems that you think that you may freely replace $t$ by $-t$ and change nothing else. However, this operation isn't a symmetry of the laws of physics, as you have actually demonstrated for Schrödinger's equation (because you also need to change the sign of $E$ or the sign in front of $H$ to make it work). The correct time reversal symmetry acts on the wave function in the simplest Schrödinger's equation model as
$$ T: \psi(x,t)\mapsto \psi^T(x,t)= \psi^*(x,-t) $$
Note that there is the extra complex conjugation here – this map is "antilinear" rather than linear, we say. This complex conjugation maps $\exp(ipx)$ to $\exp(-ipx)$, which means that it reverses the sign of the momenta (and velocities), as needed for the particle(s) to evolve backwards in time relative to the original state. This complex conjugation also restores the positivity of the energy if the original equation had a positive-definite Hamiltonian.

Note that the sign of the energy and the sign of the direction of time are correlated – much like the position is correlated with the momentum via $[x,p]=i\hbar$. They're "complementary", although the interpretation has to be a bit different for $E,t$.

Comments:
You state "It seems that you think that you may freely replace t by −t and change nothing else", actually I stated "if I break time into two sets, past -t and future +t AND allow energy to have only negative values for -t, and positive values for +t," so both Energy and Time are inverted. Nope, no cigar. – metzgeer Oct 17 '12 at 10:52 Fine, but I have explained why the non-relativistic kinetic energy is always positive, whether or not you act on the situation with time reversal: the correct time reversal includes the complex conjugation. When I said that you think you may just replace $t$ by $-t$, I meant that you think - and you just confirmed it - that you don't do anything else with the wave function than $t\to -t$ and you may extract things like the sign of the energy. But this ain't the case. Have you tried to read my answer or are you interested in it at all? – Luboš Motl Oct 18 '12 at 4:54 I thought you were drunk – metzgeer Oct 18 '12 at 11:06 Feynman studied the relation between negative energy, antimatter, and particles moving backward in time. Let me quote him [1]: "The fundamental idea is that the 'negative energy' states represent the states of electrons moving backward in time [...] reversing the direction of proper time s amounts to the same as reversing the sign of the charge so that the electron moving backward in time would look like a positron moving forward in time." He uses the classical equation of motion for a simple proof, but then uses the representation of positrons as electrons moving backward in time in his Dirac equation approach to QED. Notice that the propagation kernel associated to the Dirac equation takes non-zero values for negative times. But taking the non-relativistic limit, the propagation kernel associated to the Schrödinger equation is exactly zero for negative times (see 15-3) and there is not room for antiparticles within the Schrödinger regime. In fact he confirms this before (15-12): "On the nonrelativistic case, the paths along which the particle reversed its motion in time are excluded". The disappearance of the negative energy levels in the nonrelativistic limit can be easily shown in the technique of the large and small components of the Dirac wavefunctions. [1] Section "Interpretation of negative energy states" In Richard P. Feynman. Quantum Electrodynamics; Advanced Book Classics; Perseus Books Group; 1998. share|cite|improve this answer Energy would exhibit both positive as well as negative energy if it were a living entity. So first one must answer is time alive? To solve any equasion shouldn't you know the values of all propertys within it?Idetify the propertys first. Only then could you solve it. share|cite|improve this answer Your answer doesn't make much sense. What does it mean for time to be alive? Why does energy have negative and positive values if it is alive? – Chris Mueller Feb 13 '14 at 4:26 Try working back through the maths if you assume that Time itself is a negative form of matter and energy. We are very good at measuring time, but so far have never managed to explain what exactly it is. Time was created in the Big Bang to balance the creation of matter and energy. It displays a negative gravitational force. share|cite|improve this answer protected by Qmechanic Feb 13 '14 at 6:29 Would you like to answer one of these unanswered questions instead?