chash stringlengths 16 16 | content stringlengths 267 674k |
|---|---|
e32bf451da17d890 | The Classical and Quantum Mechanics of a Thin Ring Spinning About Two Axes
San José State University
Thayer Watkins
Silicon Valley
& Tornado Alley
The Classical and Quantum Mechanics
of a Thin Ring Spinning About Two Axes
The Classical Analysis
Consider a thin ring of mass M and radius R. (Here thin means that it is a line.) The linear mass density of the ring is ρ=M/(2πR). The moment of inertia I1 for spinning about an axis perpendicular to the plane of the ring which passes through the ring center is MR². There is another spin of a ring which is like the flipping of a coin. This is a rotation about an axis which is a diameter of the ring. The moment of inertia for this type of spinning is given by
I2 = ∫(R·cos(θ))²ρRdθ = ρR³∫cos²(θ)dθ
where the integration is from 0 to 2π. The integral of cos²(θ) from 0 to π is ½ so the integral from 0 to 2π is 1 and hence
I2 = ρR³ = (2πRρ)R²/(2π) = (1/(2π))MR².
This flipping rotation is of particular interest for the structure of nuclei. Experimental measurement indicate that that nuclei have at least approximately spherical shapes. If a circular band of nucleons rotates in this flipping fashion the dynamic appearance of the nucleus would be that of a sphere.
A spin can also take place perpendicular to the diameter considered above. The moment of inertia about that axis is same as that of I2. It unnecessarily complicates the analysis and will not be included now.
The angular momentum of the ring spinning at an angular rate of ω about the axis perpendicular to its plane is L=MR(Rω)=MR²ω. This means that
ω = L/(MR²)
and hence
½I1ω² = ½MR²(L²/(MR²)² = L²/(2MR²)
For the spin about a diameter the angular momentum Λ is found as follows.
Λ = ∫ (R·cos(θ))(R·cos(θ)Ω)ρdθ = R²ρΩ∫cos²(θ)dθ
with the integration being from 0 to 2π
which reduces to
Λ = R²ρΩ = R²(M/(2πR)Ω = MRΩ/(2π)
and hence
Ω = 2πΛ/(MR)
½I2Ω² = ½((1/(2π))MR²)(2πΛ/(MR))² = Λ²/(4πMR)
The kinetic energy of the spinning ring is then
K = L²/(2MR²) + Λ²/(4πMR)
and there is no potential energy. This means that the ring will continue spinning at the rates ω and Ω indefinitely. These rates can have any real values. This is the classical solution.
The Quantum Mechanical Analysis
To get the quantum mechanical solution the energy is expressed as
E = ½I1(dθ/dt)² + ½I2(dφ/dt)²
The momentum associated with θ is (∂E/∂θ) which is I1(dθ/dt). Let this be denoted as pθ. Thus (dθ/dt)=pθ/I1. Likewise the momentum associated with φ is pφ=(dφ/dφ)/I2. Therefore the Hamiltonian function for the the spinning ring is
H = pθ/(2I1) + pφ/(2I2)
and the Hamiltonian operator is
H^ = −h²(∂²/∂θ²) −h²(∂²/∂φ²)
and the time-independent
Schrödinger equation is
h²(∂²ψ/∂θ²) −h²(∂²ψ/∂φ²) = Eψ
If it is assumed that the wave function ψ is of the form Θ(θ)Φ(φ) then above equation becomes
h²Θ"(θ)Φ(φ) −h²Θ(θ)Φ"(φ) = EΘ"(θ)Φ(φ)
which upon division
by Θ(θ)Φ(φ) gives
h²Θ"/Θ −h²Φ"/Φ = E
This equation can be expressed as
−Θ"/Θ = Φ"/Φ + E/h²
The LHS is independent of φ and the RHS independent of θ. Therefore their common value must be a constant, say k². This means that
Θ"(θ) + k²Θ(θ) = 0
The solution is
Θ(θ) = A·cos(k(θ+θ0)
where A and θ0 are constants. By proper choice of the coordinate system θ0 can be made equal to zero. Then k(2π) must be an integral multiple of 2π. Therefore k must be an integer.
The other equation is
Φ"/Φ + E/h² = k²
or, equivalently
Φ" + (E/h² − k²)Φ = 0
From the previous case it is found that the coefficient of Φ must be a squared integer; i.e.,
(E/h² − k²) = q²
or, equivalently
E/h² = k² + q²
Thus the energy of the spinning ring is quantized such that it is equal to a multiple of of the sum of squared integers. The wave function then has the form
ψ(θ, φ) = cos(kθ)cos(qφ)
for 0≤θ≤2π
and 0≤φ≤2π
This means that the probability density function P(θ, φ) is given by
P(θ, φ) = cos²(kθ)cos²(qφ)
When a second rotation axis is included the quantization condition for energy involves the sum of the squares of three integers.
HOME PAGE OF applet-magic |
a21fe2034c457be9 |
Durham e-Theses
You are in:
Localised conduction electrons in carbon nanotubes and related structures
Watson, Michael J. (2005) Localised conduction electrons in carbon nanotubes and related structures. Doctoral thesis, Durham University.
Single localized polaron (quasiparticle) States are considered in structures relating to carbon nanotubes. The hamiltonian is derived in the tight-binding approximation first on a hexagonal lattice and later on a general carbon nanotube with specifiable chirality, and shares close links with the Davydov model of excitations of a one-dimensional molecular chain. First-order interactions of the lattice degrees of freedom with the electron on-site and exchange terms are included. The system equations are shown, under certain approximations, to share a close relationship with the nonlinear Schrödinger equation - an equation that is known to possess localised solutions. The ground state of system is investigated numerically and is found to depend crucially upon the strengths of the electron-phonon interactions.
Item Type:Thesis (Doctoral)
Award:Doctor of Philosophy
Thesis Date:2005
Copyright:Copyright of this thesis is held by the author
Deposited On:09 Sep 2011 09:54
Social bookmarking: del.icio.usConnoteaBibSonomyCiteULikeFacebookTwitter |
3c45be77bfe46a7a | Search tips
Search criteria
Nanoscale Res Lett. 2012; 7(1): 489.
Published online 2012 August 31. doi: 10.1186/1556-276X-7-489
PMCID: PMC3477053
Singly ionized double-donor complex in vertically coupled quantum dots
The electronic states of a singly ionized on-axis double-donor complex (D2+) confined in two identical vertically coupled, axially symmetrical quantum dots in a threading magnetic field are calculated. The solutions of the Schrödinger equation are obtained by a variational separation of variables in the adiabatic limit. Numerical results are shown for bonding and antibonding lowest-lying artificial molecule states corresponding to different quantum dot morphologies, dimensions, separation between them, thicknesses of the wetting layers, and magnetic field strength.
Keywords: Quantum dots, Adiabatic approximation, Artificial molecule, 78.67.-n, 78.67.Hc, 3.21.-b
Quantum dots (QDs) have opened the possibility to fabricate both artificial atoms and molecules with novel and fascinating optoelectronic properties which are not accessible in bulk semiconductor materials. An attractive route for nano-structuring semiconductor materials offers self-assembled quantum dots which are formed by the Stranski-Krastanow growth mode by depositing the material on a substrate with different lattice parameters [1-5]. The electrical and optical properties of these structures may be changed in a controlled form by doping the shallow impurities whose energy levels are defined by the interplay between the reductions of the physical dimension, the Coulomb attraction, and the inter-particle correlation.
Recently, it has been proposed to use the singly ionized double-donor system (D2+) confined in a single semiconductor QD [6] or ring [7] as an adequate functional part in a wide range of device applications, including spintronics, optoelectronics, photovoltaics, and quantum information technologies. This two-level system encodes logical information either on the spin or on the charge degrees of freedom of the single electron and allows us to manipulate conveniently its molecular properties, such as the energy splitting between the bonding and antibonding lowest-lying molecular-like states or the spatial distribution of carriers in the system [8-12]. One can expect that the singly ionized double-donor system (D2+) confined in vertically coupled QDs should have similar properties. In this paper, we analyze the electronic states of an artificial hydrogen molecular ion (D2+) compound by two positive ions that interchange their electron, which is constrained to exchange between two identical vertically coupled, axially symmetrical QDs in the presence of a threading magnetic field.
Below, we analyze the model of two separated on-axis singly ionized donors, confined in two coaxial, vertically stacked QDs, whose identical morphologies present axially symmetrical layers whose shape is given by the dependence of the layer thickness h on the distance ρ from the axis as follows: h(ρ) = db + d0fn(ρ)[theta](R0ρ). Here, R0 is the base radius, db is the wetting layer thickness, d0 is the maximum height of the QD over this layer, [theta](x) is the Heaviside step function, equal to 0 for x < 0 and to 1 for x > 0, and fn(ρ) = [1 − (ρ/R0)n]1/n. The morphology is controlled in this model by means of the integer shape-generating parameter n which is equal to 1, 2, or tends to infinity for conical pyramid-like, lens-like, and disk-like geometrical shapes, respectively. As an example, the 3D image of an artificial singly ionized molecule confined in lens-like QDs is presented in Figure 1.
Figure 1
Image of the singly ionized molecule confined in lens-like QDs.
Besides, we assume that the external homogeneous magnetic field B = Bz is applied along the quantum dot's axis. The dimensionless Hamiltonian of the single electron in this D2+ complex in the effective-mass approximation can be written as
where Vc(ρ, z) is the confinement potential, equal to 0 and V0 inside and outside the QD, respectively. The last two terms in Equation 1 correspond to the attraction between electron and ions. The effective Bohr radius a0* = [variant Planck's over 2pi]2ε/m*e2, the effective Rydberg Ry* = e2/2εa0*, and γ = e[variant Planck's over 2pi]B/2m*cRy* have been taken above as units of length, energy, and the conventional dimensionless magnetic field strength, respectively.
As both donors are located at the axis, the potential is axially symmetrical, the angular momentum Lz commutes with the Hamiltonian, and the corresponding eigenvalues give us one good quantum number m. At this representation, the Hamiltonian (Equation 1) cylindrically coordinates only on two coordinates:
Hmρ,z=1ρ[partial differential][partial differential]ρρ[partial differential][partial differential]ρ[partial differential]2[partial differential]z2+γm+γ2ρ24+Vρρ,z;Vρρ,z=Vcρ,z2ρ2+zZ122ρ2+zZ22.
Taking into account that the thickness of QDs is typically much smaller than their lateral dimension and therefore the electron motion in the first direction is much faster than in-plane motion, one can use the advantage of the adiabatic approximation [13] in which the wave function is presented as a product of two functions:
where the first function f(ρ, z) describes the fast motion in z direction and satisfies the wave equation
[partial differential]2fρ,z[partial differential]z2+Vρ,zfρ,z=Efρfρ,z
with ‘frozen out’ radial coordinate ρ, while the radial part of the wave function is found in the second step from the equation
1ρ[partial differential][partial differential]ρρ[partial differential]Φmρ[partial differential]ρ+γm+m2ρ2+γ2ρ24+EfρΦmρ=EmΦmρ.
In our numerical procedure, we solve Equation 4 repeatedly for each value ρ by using the trigonometric sweep method [13] in order to restore the unknown function Ef(ρ). Once this function is found, then the energies Em of the molecular complex can be established by solving Equation 5.
As the potential V(ρ, z) for each fixed value of ρ presents an even function V(ρ, − z) = V(ρ, z) with respect to the variable z corresponding to a symmetrical (no-rectangle) quantum well, then all solutions of Equation 4 can be arranged in two sets: odd solutions f(ρ, − z) = − f(ρ, z) and even solutions f+(ρ, − z) = f+(ρ, z), called antibonding and bonding states, respectively. These sets of functions can be found as the solutions of the boundary value problems corresponding to the differential Equation 4 within the range 0 <z < ∞ with the frontier conditions df+ρ,0dz=0;fρ,0=0.
Results and discussion
We have performed numerical calculations of two-electron renormalized energies Em as a function of the magnetic flux and for QDs with different morphologies, dimensions, and separation between layers in order to analyze the Aharonov-Bohm and the quantum size effects. We consider for our simulations the In0.55Al0.45As/Al0.35 Ga0.65As structures with the following values of physical parameters: dielectric constant ε = 12.71, the effective mass in the dot region and the region outside the dot for the electron m * = 0.076m0, the conduction and the valence band offset in junctions is V0 = 358meV, the effective Bohr radius a0* ≈ 10nm, and the effective Rydberg Ry* ≈ 5meV.
First, we calculate the energies of the molecular complex as functions of the magnetic field in disk-like, lens-like, and cone-like vertically coupled QDs and in a single one-electron QR with smooth non-homogeneity of the surface. Results for vertically coupled QDs with the heights d0 = 4nm, the wetting layer thicknesses db = 1nm, radii R0 = 20nm, and the separation between them d = 6nm are shown in Figure 2.
Figure 2
Energies as functions of the magnetic field of a D2+ in vertically coupled quantum dots. (Heights 4 nm, wetting layer thicknesses 1 nm, radii 20 nm, and separation between them 6 nm).
It is seen that in all cases, the energy levels are very sensitive to the magnetic field and their dependencies on the magnetic field strength exhibit multiple crossovers and reordering. Comparing these dependencies for the disk, the lens, and the cone in Figure 2, one can also observe a successive increase of the number of crossovers and the lowering of the region energies where such crossovers occur. It is related to the variation of the electron probability distribution inside and around their InAs layers, which is similar to charge distribution in a metallic surface when its geometry varies from the flat to the spiked-type one. Such variation of the probability distribution is a consequence of the stronger confinement in structures with spiked-type QD geometry where the electron-ion separation is defined by interplays between the electrostatic interaction between them and the strong structural confinement, making it more stable with respect to the external magnetic field and the ring-like electron probability density distribution. Therefore, the energy dependencies for cone-like QDs have a shape similar to those that exhibit structures with ring-like geometry known as the Aharonov-Bohm effect.
The Aharonov-Bohm effect observed usually in ring-like heterostructures is a manifestation of the competition between the paramagnetic and diamagnetic terms in the Hamiltonian, resulting in the oscillation of the ground state energy. Such oscillations are impossible in the disk-like structures because of a significant decrease of the diamagnetic term contribution as the magnetic field increases and the electron probability distribution becomes more contracted. In QDs with a spike-like morphology, the electron probability density is already strongly confined, the external magnetic field can no longer decrease more the diamagnetic term contribution, and the energy dependencies on the increasing magnetic field become similar to those of ring-like structures.
In Figure 3, we present results of the calculation of the density of electronic states in the zero-magnetic field for QDs with three different morphologies on the left side case γ = 0 and on the right side for γ = 0.8. It is seen that the density of electronic states in the case of the zero-magnetic field for the disk-like structure has a larger value in the region of the low-lying energy levels and it decreases successively while the morphology becomes more and more spike-liked. It is due to the fact that the electron confinement in the disk is weaker than that in the lens and that in the lens is weaker than that in the cone.
Figure 3
Density of the electronic states for a D2+ in vertically coupled quantum dots. (Heights 3 nm, wetting layer thicknesses 2 nm, radii 20 nm, and separation between them 6 nm for two different values of the magnetic field (γ = 0) and (γ = ...
Also, it is seen that the lowest peak corresponding to the ground bonding state in the cone-like structure is more significantly separated from other excited states than in two other structures. It is due to the stronger confinement of the electron in the cone-like structure where the electron is mainly located nearer to the donor than in disk-like and lens-like structures.
Comparing the densities of states presented on the left and right sides of Figure 3, one can see remarkable modifications that suffer the corresponding curves. Particularly, in the disk-like structure, the presence of the magnetic field provides a displacement of the peaks at the region of the low-lying energies. In the lens-like and cone-like structures, the modification is inversed; the peaks are reorganized in such a way that their distribution becomes almost homogeneous. Redistribution of the peaks' positions in the lens is defined mainly by the additional confinement that provides the external magnetic field, while analogous redistribution in other two spike-liked structures is mainly due to the Aharonov-Bohm effect.
In short, we propose a simple numerical procedure for calculating the energies and wave functions of a singly ionized molecular complex formed by two separated on-axis donors located at vertically coupled QDs in the presence of the external magnetic field. Our calculation includes some important characteristics of the heterostructure such as the presence of the wetting layer and the possibility of the variation of the QD morphology. The curves of the energy dependencies on the external magnetic field for the disk-like, lens-like, and cone-like structures are presented. We find that the effect of the in-plane confinement on the electron-ion separation is stronger in spike-shaped QDs and therefore the energy dependencies in such structures exhibit a behavior similar to that in ring-like structures. The analysis of the curves of the density of electronic states also confirms this result.
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
All authors contributed equally to this work. JSO created the analytic model with contributions from IM. RMG and GES performed the numerical calculations and wrote the manuscript. All authors discussed the results and implications and commented on the manuscript at all stages. All authors read and approved the final manuscript.
Authors’ information
JSO obtained his Ph.D. in 2004 at the Universidad Industrial de Santander, where IM was his advisor. His research interests include the theory of semiconductor nanostructures. JSO is the head of the research group ‘Condensed Matter Theory’ at the University of Magdalena. GES and RMG are master's degree and Ph.D. students, respectively, and teachers at the University of Magdalena.
This work was financed by the Universidad del Magdalena through the Vicerrectoría de Investigaciones (Código 01).
• Jacak L, Hawrylak P, Wójs A. Quantum Dots. Berlin: Springer; 1997.
• Leonard D, Pond K, Petroff PM. Critical layer thickness for self-assembled InAs islands on GaAs. Phys Rev B. 1994;50:11687–11692. doi: 10.1103/PhysRevB.50.11687. [PubMed] [Cross Ref]
• Lorke A, Luyken RJ, Govorov AO, Kotthaus JP. Spectroscopy of nanoscopic semiconductor rings. Phys Rev Lett. 2000;84:2223–2226. doi: 10.1103/PhysRevLett.84.2223. [PubMed] [Cross Ref]
• Granados D, García JM. In(Ga)As self-assembled quantum ring formation by molecular beam epitaxy. Appl Phys Lett. 2003;82:2401. doi: 10.1063/1.1566799. [Cross Ref]
• Raz T, Ritter D, Bahir G. Formation of InAs self-assembled quantum rings on InP. Appl Phys Lett. 2003;82:1706. doi: 10.1063/1.1560868. [Cross Ref]
• Movilla JL, Ballester A, Planelles J. Coupled donors in quantum dots: quantum size and dielectric mismatch effects. Phys Rev B. 2009;79:195319.
• Gutiérrez W, García LF, Mikhailov ID. Coupled donors in quantum ring in a threading magnetic field. Physica E. 2010;43:559. doi: 10.1016/j.physe.2010.09.015. [Cross Ref]
• Calderón MJ, Koiller B. External field control of donor electron exchange at the Si/SiO2 interface. Phys Rev B. 2007;75:125311.
• Tsukanov AV. Single-qubit operations in the double-donor structure driven by optical and voltage pulses. Phys Rev B. 2007;76:035328.
• Openov LA. Resonant pulse operations on the buried donor charge qubits in semiconductors. Phys Rev B. 2004;70:233313.
• Koiller B, Hu X. Electric-field driven donor-based charge qubits in semiconductors. Phys Rev B. 2006;73:045319.
• Barrett SD, Milburn GJ. Measuring the decoherence rate in a semiconductor charge qubit. Phys Rev B. 2003;68:155307.
• Mikhailov ID, Marín JH, García LF. Off-axis donors in quasi-two-dimensional quantum dots with cylindrical symmetry. Phys Stat Sol (b) 2005;242:1636. doi: 10.1002/pssb.200540053. [Cross Ref]
Articles from Nanoscale Research Letters are provided here courtesy of Springer |
ee0524b93062b3eb | Friday, April 29, 2011
Fun with an Argon Atom
Photon-recoil bilocation experiment at Heidelberg
A recent experiment on Argon atoms by Jeri Tomkovic and five collaborators at the University of Heidelberg has demonstrated once again the subtle and astonishing reality of the quantum world.
Erwin Schrödinger, who devised the Schrödinger equation that governs quantum behavior, also demonstrated the preposterousness of his own equation by showing that under certain special conditions quantum theory seemed to allow a cat (Schrödinger's Cat) to be alive and dead at the same time. Humans can't yet do this to cats, but clever physicists are discovering how to put larger and larger systems into a "quantum superposition" in which a single entity can comfortably dwell in two distinct (and seemingly contradictory) states of existence.
The Heidelberg experiment with Argon atoms (explained popularly here, in the physics arXiv here and published in Nature here) dramatically demonstrates two important features of quantum reality: 1) if it is experimentally impossible to tell whether a process went one way or the other, then, in reality, IT WENT BOTH WAYS AT ONCE (like a Schrödinger Cat); 2) quantum systems behave like waves when not looked at--and like particles when you look.
The Heidelberg physicists looked at laser-excited Argon atoms which shed their excitation by emitting a single photon of light. The photon goes off in a random direction and the Argon atom recoils in the opposite direction. Ordinary physics so far.
But Tomkovic and pals modified this experiment by placing a gold mirror behind the excited Argon atom. Now (if the mirror is close enough to the atom) it is impossible for anyone to tell whether the emitted photon was emitted directly or bounced off the mirror. According to the rules of quantum mechanics then, the Argon atom must be imagined to recoil IN BOTH DIRECTIONS AT ONCE--both towards and away from the mirror.
But this paradoxical situation is present only if we don't look. Like Schrödinger's Cat, who will be either alive or dead (if we look) but not both, the bilocal Argon atom (if we look) will always be found to be recoiling in only one direction--towards the mirror (M) or away from the mirror (A) but never both at the same time.
To prove that the Argon atom was really in the bilocal superposition state we have to devise an experiment that involves both motions (M and A) at once. (Same to verify the Cat--we need to devise a measurement that looks at both LIVE and DEAD cat at the same time.)
To measure both recoil states at once, the Heidelberg guys set up a laser standing wave by shining a laser directly into a mirror and scattered the bilocal Argon atom off the peaks and troughs of this optical standing wave. Just as a wave of light is diffracted off the regular peaks and troughs of a matter-made CD disk, so a wave of matter (Argon atoms) can be diffracted from a regular pattern of light (a laser shining into a mirror).
When an Argon atom encounters the regular lattice of laser light, it is split (because of its wave nature) into a transmitted (T) and a diffracted (D) wave. The intensity of the laser is adjusted so that the relative proportion of these two waves is approximately 50/50.
In its encounter with the laser lattice, each state (M and A) of the bilocated Argon atom is split into two parts (T and D), so now THE SAME ARGON ATOM is traveling in four directions at once (MT, MD, AT, AD).
Furthermore (as long as we don't look) these four distinct parts act like waves. This means they can constructively and destructively interfere depending on their "phase difference". The two waves MT and AD are mixed and the result sent to particle detector #1. The two waves AT and MD are mixed and sent to particle detector #2. For each atom only one count is recorded--one particle in, one particle out. But the PATTERN OF PARTICLES in each detector will depend on the details of the four-fold experience each wavelet has encountered on its way to a particle detector. This hidden wave-like experience is altered by moving the laser mirror L which shifts the position of the peaks of the optical diffraction grating.
In quantum theory, the amplitude of a matter wave is related to the probability that it will trigger a count in a particle detector. Even though the unlooked-at Argon atom is split into four partial waves, the looked-at Argon particle can only trigger one detector.
The outcome of the Heidelberg experiment consists of counting the number of atoms detected in counters #1 and #2 as a function of the laser mirror position L.
The results of this experiment show that, while it was unobserved, a single Argon atom was 1) in two places at once because of the mirror's ambiguisation of photon recoil, then 2) four places at once after encountering the laser diffraction grating, 3) then at last, only one place at a time when it is finally observed by either atom counter #1 or atom counter #2.
The term "Schrödinger Cat state" has come to mean ANY MACROSCOPIC SYSTEM that can be placed in a quantum superposition. Does an Argon atom qualify as a Schrödinger Cat? Argon is made up of 40 nucleons, each consisting of 3 quarks. Furthermore each Argon atom is surrounded by 18 electrons for a total of 138 elementary particles--each "doing its own thing" while the atom as a whole exists in four separate places at the same time. Now a cat surely has more parts than a single Argon atom, but the Heidelberg experiment demonstrates that, with a little ingenuity, a quite complicated system can be coaxed into quantum superposition.
Today's physics students are lucky. When I was learning quantum physics in the 60s, much of the quantum weirdness existed only as mere theoretical formalism. Now in 2011, many of these theoretical possibilities have become solid experimental fact. This marvelous Heidelberg quadralocated Argon atom joins the growing list of barely believable experimental hints from Nature Herself about how She routinely cooks up the bizarre quantum realities that underlie the commonplace facts of ordinary life.
kcb000 said...
The arXiv link to the paper is broken ( HTTP 403 ). It can however be found here:
kcb000 said...
As you were kind enough to send me rummaging through arViv, I also found this paper:
It's about using humans as photon detectors to observe entanglement macroscopically .
Perhaps you could comment on it here or in a future post?
nick herbert said...
If you click on the tag "Gisin" you'll find a post on an earlier version of this experiment called "How To Quantum Entangle Human Beings." Gisin is quite clear that his experiment DOES NOT entangle people but it's a small step in that direction. |
8d9d1b6938c57723 | Particles can also be called wave packets. There is some probability function that determines which part of the wave packet the mass of the particle is in. The tail of this probability function can extend into a seperate neighboring object, during which time, the particle could decide to jump to that other place and therefore reshape it's probability distribution.
An example would be a scanning tunneling microscope. It has a tiny probe-tip of conducting wire mounted on a pizeoelectric arm, which enables the tip to be scanned over the sample surface at an atomic distance. If a small voltage is applied across the tip and sample, some electrons will quantum tunnel from the tip across the gap to the sample, thus creating a measurable current. As the tip scans the atoms, the current changes, and a graphical representation of that change can be created.
Consider a small metal ball bearing put in a bowl. The ball bearing has an equilibrium position at the bottom of the bowl. Now if you were to push it a bit it would climb up the walls of the bowl, and fall back again, oscillate about the bottom and come to rest. If you were to push it hard enough however the ball would get out of the bowl. This is described by saying that the wall of the bowl acts as a potential barrier. The ball is in a potential well. For it to get out you must give it enough kinetic energy (push it hard enough) to get out.
However for very small objects things are not so simple. If the ball had been an electron and the bowl had been a quantum bowl then the ball could have got out without having enough energy to cross the potential barrier. So it is possible for the ball to simply materialize on the other side of the wall (even when it does not have enough energy to cross it) without the wall breaking or rupturing.
This is a very naive explanation of course but I hope it explains the principle behind Quantum Mechanical tunneling.
Consider a particle with energy E moving towards a potential barrier of height U0 and width a:
_______________________|||||||_______________________ x
| -a- |
Using Schrödinger's (time-independent, one-dimensional) Equation, we can solve for the wave function of the particle (using h for h-bar):
- h22ψ
--------_ + U(x)ψ = Eψ
The potential U(x) is divided into three parts:
U(x) = { 0 : x < 0,
U0: 0 < x < a,
0 : x > a }
In order to solve for ψ, the wave function of the particle, we also divide it into three parts: ψ0 for x < 0, ψ1 for 0 < x < a, and ψ2 for x > a. Astute readers will notice at this point that the potential is the same for ψ0 and ψ2 -- these two wave functions ought, then, to look at least somewhat similar. As we shall see, they will have the same wavelength but different amplitudes. Since U = 0 for both ψ0 and ψ2, they each take the same form as the wave function for a free particle with energy E, or:
ψ(x) = A*ei*k0x + B*e-ik0x (where k0 = √(2*m*E/h2) )
The first portion of this equation corresponds to a wave moving rightwards while the second portion corresponds to a wave moving to the left. Or they would, had we folded in time-dependence (see note at the bottom). In order to make our lives easier, it is necessary to think a little bit about what is actually physically happening in this system. Our particle is approaching the potential barrier from the left, moving rightwards. When it hits the potential barrier, common sense says that at least some of the time, the particle will bounce off the barrier and begin moving leftwards. From this, we know that ψ0 contains both the leftward (reflected particle) and rightward (incident particle) portions of the wave function. As the other nodes in this writeup explain, when the particle hits the potential barrier, in addition to bouncing off some of the time, some of the time it will pass through. So we know that ψ2 has at least the rightward-moving component. But there is nothing in the experimental setup that would cause the particle to begin moving towards the left once it has passed through the potential barrier, so we can deduce that the leftward-moving component of ψ2 has an amplitude of zero.
Now, to deal with the particle while it is inside the barrier. Common sense would suggest that the particle can never actually exist within the barrier, (let alone cross over it). Physically, however, we know for sure that a particle can, in certain circumstances, pass through the barrier, so common sense would suggest that if it exists on both sides of the barrier, it must also exist within the barrier. But how on earth are we supposed to observe a particle while it is inside a potential barrier? The answer is that while we can't observe the particle inside the potential barrier, the mathematical properties of the wave function suggests that it does in fact exist while it is inside the barrier.
Since the only thing that matters in physics is relative potential, we can pretend like the particle, while it is inside the potential barrier, isn't in a potential of U0, but rather simply has an energy of E - U0 = - (U0 - E) (since U0 > E). As before, then, the equation for this situation the wave equation with a wave number (k) of √(2*m*E/h2). In this case however, the particle has negative energy (tis a very good thing we can't physically observe the particle while it is inside the barrier, since negative energies can't exist), so it has an imaginary wave number, k1 = i√(2*m*(U0-E)/h2).
We now know enough to write out all three parts of the wave equation:
ψ(x) = {
A*ei*k0x + B*e-ik0x : x < 0
C*e-ik1x + D*eik1x : 0 < x < a
E*eik0x : x > a
The wave function and its first derivative have to continuous over all x ∈ R. We can use these boundary conditions to get four relationships among the constants (ψ0(0) = ψ1(0), ψ0'(0) = ψ1'(0), ψ1(a) = ψ2(a), and ψ1'(a) = ψ2'(a)). Actually solving for the constants is impossible given just these conditions (five unknowns but only four equations), but we can find the probability that the particle reflects off the barrier, and the probability that it tunnels through the barrier. Recall that the probability function of a particle with wave function ψ is
P(x) = |ψ(x)|2
Since we know that the first portion of ψ0 (with amplitude A) represents the incident particle, and the second portion (with amplitude B) represents the reflected particle, the ratio of the two wave functions |B|2/|A|2 is the fraction of the time that the incident particle will reflect off the barrier. Similarly, the ratio |E|2/|A|2 is the fraction of the time that the particle will tunnel through the barrier. After a bit of extraordinarily ugly algebra (don't try this at home), we find that:
|E|2/|A|2 = -----------------------
1 + 1/4 ---------------
E (U0 - E)
It shouldn't be too hard to convince yourself that since the particle has to do something after hitting the barrier, the probability that it will reflect off is just 1 - |E|2/|A|2. This probability decreases exponentially with a (since sinh(x) = (ex - e-x)/2), so the largest factor in determining tunneling probability is the width of the potential barrier. tdent notes that since the probability also depends exponentially on k1, there's a large dependence on the difference between the barrier height and the energy of the particle, but since the dependence on (U0 - E) is under a square root, this still has less of an effect than a.
Note: For time-independent potentials (∂U/∂t = 0), the time-dependent solution to the Schrödinger equation is just ψ(x)*e-iωt, where ψ is the time-independent wave function and ω = E/h. So, the time-dependent form of the solution ends up looking like:
As t increases then, for the first part of the function to remain constant x must increase and for the second part to remain constant x must decrease. So the first portion of the equation represents a wave travelling towards increasing x (the right), and the second portion represents a wave travelling towards decreasing x (the left).
From personal notes, Modern Physics by Kenneth Krane, and (for the solution to |E|2/|A|2).
Potential Barriers and Quantum Tunneling - A Layman's Introduction
Note: This is a layman's introduction to quantum tunneling only. For a general introduction to quantum mechanics, please see Mauler's Layman's Guide to Quantum Mechanics
Quantum tunneling is a concept from quantum mechanics, a branch of modern physics. The concept is explained using the following anecdote.
Suppose there is a hill, a real-world hill which you might walk up, if you were so inclined (no pun intended). Also suppose that three identical balls are rolling at different speeds towards the hill*. Due to this speed difference, each ball has a different energy of motion to the others. As the balls begin to roll up the hill, they also begin to slow down. The slowest ball does not have enough energy of motion to make it up the hill. It slows and slows, and eventually stops somewhere below the top for an instant in time, before rolling back down the hill. The second ball has enough energy to make it to the top of the hill, but no more. It comes to a stop on top of the hill. The last ball has more energy of motion than it actually needs to make it to the top of the hill. So when it makes it to the top, it still has some motion energy, and it rolls over the top, and down the other side.
This is all perfectly normal behaviour for balls on hills - nothing new there. However scientists (more specifically quantum physicists) discovered earlier last century, that when the balls are very very small, something very strange happens.
In the world of the very very small, balls usually behave in the same well-known manner described in the anecdote above. However, sometimes they don't. Sometimes balls which DO have enough energy to roll right up that hill and keep going down the other side, don't make it up the hill. That's weird. Imagine taking a bowling ball, and hurling it with all your might up a gentle hill. You know it's got enough energy to go over the top, but you blink, and when you open your eyes again, the bowling ball is rolling back down the hill towards you.
What's even stranger though, is that in this world of the very very small (and it is the REAL world, inhabited by you and I), sometimes balls which DON'T have enough energy to get up the hill, still do so (and continue down the other side). So it's like your bowling ball comes back out of the return shute, and you take it and roll it ever so gently up that same hill. You know it doesn't have enough energy to make it to the top, but then you blink, and when you open your eyes, there it is, rolling down the other side.
This puzzling behaviour has actually been observed to happen, many many times, by scientists. The phenomenon has been given the name "tunneling", for it is as if the ball (or 'particle' as we call it) digs a tunnel through that hill, to get to the other side. In such quantum experiments, scientists fire very small bullets at very small walls, and sometimes those bullets which do not have enough energy to break through the wall, are observed a short time later, on the other side (where it would seem, they have no right to be!).
Regarding this strange behaviour, I stress that THIS IS A REAL PHENOMENON. It actually applies to everything in the universe, but the chance of it happening to something as large as an elephant, or even a baseball, or a marble, is very small indeed. So small in fact, that it will probably never be seen to happen by a human on this planet. The smaller a thing is, the greater the chance of quantum tunneling occuring to it. Things that you can see with the naked eye are far too big. The kinds of particles to which tunneling commonly occurs can only be seen with special microscopes**.
As a final point, please note that it is probably a good thing that quantum tunneling is almost never observed to happen to everyday objects. It would not be too much fun if that butchers' knife you just placed safely on the table, suddenly tunneled through and found its way into the top of your foot. Of course it might tunnel through your foot as well, but.....well......if you ever see that happen, please let me know.
* In quantum physics, the hill is known as a 'potential barrier'
** The kind of microscopes necessary to see the particles to which tunneling routinely occurs, are know as Scanning Tunneling Microscopes (S.T.M.). In an ironic twist, the technology which drives the S.T.M., itself relies on the principle of quantum tunneling to operate.
Log in or registerto write something here or to contact authors. |
0daa63440ea63578 | Fractional calculus
From Wikipedia, the free encyclopedia
Jump to: navigation, search
"Fractional derivative" redirects here.
Fractional calculus is a branch of mathematical analysis that studies the possibility of taking real number powers or complex number powers of the differentiation operator
D = \dfrac{d}{dx},
and the integration operator J. (Usually J is used instead of I to avoid confusion with other I-like glyphs and identities.)
In this context the term powers refers to iterative application or function composition, in the same sense that f2(x) = f(f(x)). For example, one may ask the question of meaningfully interpreting
\sqrt{D} = D^{\frac{1}{2}}
as a functional square root of the differentiation operator (an operator half iterated), i.e., an expression for some operator that when applied twice to a function will have the same effect as differentiation. More generally, one can look at the question of defining
for real-number values of a in such a way that when a takes an integer value n, the usual power of n-fold differentiation is recovered for n > 0, and the −nth power of J when n < 0.
The motivation behind this extension to the differential operator is that the semigroup of powers Da will form a continuous semigroup with parameter a, inside which the original discrete semigroup of Dn for integer n can be recovered as a subgroup. Continuous semigroups are prevalent in mathematics, and have an interesting theory. Notice here that fraction is then a misnomer for the exponent a, since it need not be rational; the use of the term fractional calculus is merely conventional.
Fractional differential equations (also known as extraordinary differential equations) are a generalization of differential equations through the application of fractional calculus.
Nature of the fractional derivative[edit]
Not to be confused with Fractal derivative.
An important point is that the fractional derivative at a point x is a local property only when a is an integer; in non-integer cases we cannot say that the fractional derivative at x of a function f depends only on values of f very near x, in the way that integer-power derivatives certainly do. Therefore it is expected that the theory involves some sort of boundary conditions, involving information on the function further out. To use a metaphor, the fractional derivative requires some peripheral vision.
As far as the existence of such a theory is concerned, the foundations of the subject were laid by Liouville in a paper from 1832. The fractional derivative of a function to order a is often now defined by means of the Fourier or Mellin integral transforms.[1]
A fairly natural question to ask is whether there exists an operator H, or half-derivative, such that
H^2 f(x) = D f(x) = \dfrac{d}{dx} f(x) = f'(x) .
It turns out that there is such an operator, and indeed for any a > 0, there exists an operator P such that
(P ^ a f)(x) = f'(x),
or to put it another way, the definition of dny/dxn can be extended to all real values of n.
Let f(x) be a function defined for x > 0. Form the definite integral from 0 to x. Call this
( J f ) ( x ) = \int_0^x f(t) \; dt .
Repeating this process gives
( J^2 f ) ( x ) = \int_0^x ( J f ) ( t ) dt = \int_0^x \left( \int_0^t f(s) \; ds \right) \; dt,
and this can be extended arbitrarily.
The Cauchy formula for repeated integration, namely
(J^n f) ( x ) = { 1 \over (n-1) ! } \int_0^x (x-t)^{n-1} f(t) \; dt,
leads in a straightforward way to a generalization for real n.
Using the gamma function to remove the discrete nature of the factorial function gives us a natural candidate for fractional applications of the integral operator.
This is in fact a well-defined operator.
It is straightforward to show that the J operator satisfies
(J^\alpha) (J^\beta f)(x) = (J^\beta) (J^\alpha f)(x) = (J^{\alpha+\beta} f)(x) = { 1 \over \Gamma ( \alpha + \beta) } \int_0^x (x-t)^{\alpha+\beta-1} f(t) \; dt
This relationship is called the semigroup property of fractional differintegral operators. Unfortunately the comparable process for the derivative operator D is significantly more complex, but it can be shown that D is neither commutative nor additive in general.[citation needed]
Fractional derivative of a basic power function[edit]
The half derivative (purple curve) of the function f(x) = x (blue curve) together with the first derivative (red curve).
The animation shows the derivative operator oscillating between the antiderivative (α=−1: y=12x2) and the derivative (α=+1: y=1) of the simple power function y=x continuously.
Let us assume that f(x) is a monomial of the form
The first derivative is as usual
f'(x)=\dfrac{d}{dx}f(x)=k x^{k-1}\;.
Repeating this gives the more general result that
Which, after replacing the factorials with the gamma function, leads us to
\dfrac{d^a}{dx^a}x^k=\dfrac{\Gamma(k+1)}{\Gamma(k-a+1)}x^{k-a}, \qquad k \ge 0
For k=1 and \textstyle a=\frac{1}{2}, we obtain the half-derivative of the function x as
\dfrac{d^{\frac{1}{2}}}{dx^{\frac{1}{2}}}x=\dfrac{\Gamma(1+1)}{\Gamma(1-\frac{1}{2}+1)}x^{1-\frac{1}{2}}=\dfrac{1!}{\Gamma(\frac{3}{2})}x^{\frac{1}{2}} = \dfrac{2x^{\frac{1}{2}}}{\sqrt{\pi}}.
Repeating this process yields
\dfrac{d^{\frac{1}{2}}}{dx^{\frac{1}{2}}} \dfrac{2x^{\frac{1}{2}}}{\sqrt{\pi}}=\frac{2}{\sqrt{\pi}}\dfrac{\Gamma(1+\frac{1}{2})}{\Gamma(\frac{1}{2}-\frac{1}{2}+1)}x^{\frac{1}{2}-\frac{1}{2}}=\frac{2}{\sqrt{\pi}}\dfrac{\Gamma(\frac{3}{2})}{\Gamma(1)}x^{0}=\dfrac{2 \sqrt{\pi}x^0}{2 \sqrt{\pi}0!}=1,
which is indeed the expected result of
For negative integer power k, the gamma function is undefined and we have to use the following relation:[2]
\dfrac{d^a}{dx^a}x^{-k}=(-1)^a\dfrac{\Gamma(k+a)}{\Gamma(k)}x^{-(k+a)} for k \ge 0
This extension of the above differential operator need not be constrained only to real powers. For example, the (1 + i)th derivative of the (1 − i)th derivative yields the 2nd derivative. Also notice that setting negative values for a yields integrals.
For a general function f(x) and 0 < α < 1, the complete fractional derivative is
For arbitrary α, since the gamma function is undefined for arguments whose real part is a negative integer, it is necessary to apply the fractional derivative after the integer derivative has been performed. For example,
Laplace transform[edit]
We can also come at the question via the Laplace transform. Noting that
\mathcal L \left\{Jf\right\}(s) = \mathcal L \left\{\int_0^t f(\tau)\,d\tau\right\}(s)=\frac1s(\mathcal L\left\{f\right\})(s)
\mathcal L \left\{J^2f\right\}=\frac1s(\mathcal L \left\{Jf\right\} )(s)=\frac1{s^2}(\mathcal L\left\{f\right\})(s)
etc., we assert
J^\alpha f=\mathcal L^{-1}\left\{s^{-\alpha}(\mathcal L\{f\})(s)\right\}.
For example
J^\alpha\left(t^k\right) = \mathcal L^{-1}\left\{\dfrac{\Gamma(k+1)}{s^{\alpha+k+1}}\right\} = \dfrac{\Gamma(k+1)}{\Gamma(\alpha+k+1)}t^{\alpha+k}
as expected. Indeed, given the convolution rule
\mathcal L\{f*g\}=(\mathcal L\{f\})(\mathcal L\{g\})
and shorthanding p(x) = xα−1 for clarity, we find that
(J^\alpha f)(t) &= \frac{1}{\Gamma(\alpha)}\mathcal L^{-1}\left\{\left(\mathcal L\{p\}\right)(\mathcal L\{f\})\right\}\\
&=\frac{1}{\Gamma(\alpha)}\int_0^t p(t-\tau)f(\tau)\,d\tau\\
which is what Cauchy gave us above.
Laplace transforms "work" on relatively few functions, but they are often useful for solving fractional differential equations.
Fractional integrals[edit]
Riemann–Liouville fractional integral[edit]
The classical form of fractional calculus is given by the Riemann–Liouville integral, which is essentially what has been described above. The theory for periodic functions (therefore including the 'boundary condition' of repeating after a period) is the Weyl integral. It is defined on Fourier series, and requires the constant Fourier coefficient to vanish (thus, it applies to functions on the unit circle whose integrals evaluate to 0).
_aD_t^{-\alpha} f(t)={}_aI_t^\alpha f(t)=\frac{1}{\Gamma(\alpha)}\int_a^t (t-\tau)^{\alpha-1}f(\tau)d\tau
By contrast the Grünwald–Letnikov derivative starts with the derivative instead of the integral.
Hadamard fractional integral[edit]
The Hadamard fractional integral is introduced by J. Hadamard [3] and is given by the following formula,
_a\mathbf{D}_t^{-\alpha} f(t) = \frac{1}{\Gamma(\alpha)}\int_a^t \left(\log\frac{t}{\tau}\right)^{\alpha -1} f(\tau)\frac{d\tau}{\tau}, \qquad t > a.
Fractional derivatives[edit]
Not like classical Newtonian derivatives, a fractional derivative is defined via a fractional integral.
Riemann–Liouville fractional derivative[edit]
The corresponding derivative is calculated using Lagrange's rule for differential operators. Computing n-th order derivative over the integral of order (nα), the α order derivative is obtained. It is important to remark that n is the nearest integer bigger than α.
_aD_t^\alpha f(t)=\frac{d^n}{dt^n} {}_aD_t^{-(n-\alpha)}f(t)=\frac{d^n}{dt^n} {}_aI_t^{n-\alpha} f(t)
Caputo fractional derivative[edit]
There is another option for computing fractional derivatives; the Caputo fractional derivative. It was introduced by M. Caputo in his 1967 paper.[4] In contrast to the Riemann Liouville fractional derivative, when solving differential equations using Caputo's definition, it is not necessary to define the fractional order initial conditions. Caputo's definition is illustrated as follows.
{}_a^C D_t^\alpha f(t)=\frac{1}{\Gamma(n-\alpha)} \int_a^t \frac{f^{(n)}(\tau)d\tau}{(t-\tau)^{\alpha+1-n}}.
Erdélyi–Kober operator[edit]
The Erdélyi–Kober operator is an integral operator introduced by Arthur Erdélyi (1940).[5] and Hermann Kober (1940)[6] and is given by
\frac{x^{-\nu-\alpha+1}}{\Gamma(\alpha)}\int_0^x (t-x)^{\alpha-1}t^{-\alpha-\nu}f(t) dt,
which generalizes the Riemann fractional integral and the Weyl integral. A recent generalization is the following, which generalizes the Riemann-Liouville fractional integral and the Hadamard fractional integral. It is given by,[7]
\left ({}^\rho \mathcal{I}^\alpha_{a+}f \right )(x) = \frac{\rho^{1- \alpha }}{\Gamma({\alpha})} \int^x_a \frac{\tau^{\rho-1} f(\tau) }{(x^\rho - \tau^\rho)^{1-\alpha}}\, d\tau, \qquad x > a.
Functional calculus[edit]
In the context of functional analysis, functions f(D) more general than powers are studied in the functional calculus of spectral theory. The theory of pseudo-differential operators also allows one to consider powers of D. The operators arising are examples of singular integral operators; and the generalisation of the classical theory to higher dimensions is called the theory of Riesz potentials. So there are a number of contemporary theories available, within which fractional calculus can be discussed. See also Erdélyi–Kober operator, important in special function theory (Kober 1940), (Erdélyi 1950–51).
Fractional conservation of mass[edit]
As described by Wheatcraft and Meerschaert (2008),[8] a fractional conservation of mass equation is needed to model fluid flow when the control volume is not large enough compared to the scale of heterogeneity and when the flux within the control volume is non-linear. In the referenced paper, the fractional conservation of mass equation for fluid flow is:
-\rho \left (\nabla^{\alpha} \cdot \vec{u} \right ) = \Gamma(\alpha +1)\Delta x^{1-\alpha}\rho \left (\beta_s+\phi \beta_w \right ) \frac{\part p}{\part t}
Fractional advection dispersion equation[edit]
This equation has been shown useful for modeling contaminant flow in heterogenous porous media.[9][10][11]
Time-space fractional diffusion equation models[edit]
Anomalous diffusion processes in complex media can be well characterized by using fractional-order diffusion equation models.[12][13] The time derivative term is corresponding to long-time heavy tail decay and the spatial derivative for diffusion nonlocality. The time-space fractional diffusion governing equation can be written as
\frac{\partial^\alpha u}{\partial t^\alpha}=K (-\triangle)^\beta u.
A simple extension of fractional derivative is the variable-order fractional derivative, the α, β are changed into α(x, t), β(x, t). Its applications in anomalous diffusion modeling can be found in reference.[14]
Structural damping models[edit]
Fractional derivatives are used to model viscoelastic damping in certain types of materials like polymers.[15]
Acoustical wave equations for complex media[edit]
The propagation of acoustical waves in complex media, e.g. biological tissue, commonly implies attenuation obeying a frequency power-law. This kind of phenomenon may be described using a causal wave equation which incorporates fractional time derivatives:
\nabla^2 u -\dfrac 1{c_0^2} \frac{\partial^2 u}{\partial t^2} + \tau_\sigma^\alpha \dfrac{\partial^\alpha}{\partial t^\alpha}\nabla^2 u - \dfrac {\tau_\epsilon^\beta}{c_0^2} \dfrac{\partial^{\beta+2} u}{\partial t^{\beta+2}} = 0.
See also [16] and the references therein. Such models are linked to the commonly recognized hypothesis that multiple relaxation phenomena give rise to the attenuation measured in complex media. This link is further described in [17] and in the survey paper,[18] as well as the acoustic attenuation article. See [19] for a recent paper which compares fractional wave equations which model power-law attenuation.
Fractional Schrödinger equation in quantum theory[edit]
The fractional Schrödinger equation, a fundamental equation of fractional quantum mechanics discovered by Nick Laskin,[20] has the following form:[21]
i\hbar \frac{\partial \psi (\mathbf{r},t)}{\partial t}=D_\alpha (-\hbar^2\Delta )^{\frac{\alpha}{2}}\psi (\mathbf{r},t)+V(\mathbf{r},t)\psi (\mathbf{r},t).
where the solution of the equation is the wavefunction ψ(r, t) - the quantum mechanical probability amplitude for the particle to have a given position vector r at any given time t, and ħ is the reduced Planck constant. The potential energy function V(r, t) depends on the system.
Further, Δ = 2/r2 is the Laplace operator, and Dα is a scale constant with physical dimension [Dα] = erg1 − α·cmα·secα, (at α = 2, D2 = 1/2m for a particle of mass m), and the operator (−ħ2Δ)α/2 is the 3-dimensional fractional quantum Riesz derivative defined by
\left (-\hbar ^2\Delta \right )^{\frac{\alpha}{2}}\psi (\mathbf{r},t)=\frac 1{(2\pi \hbar)^3}\int d^3pe^{\frac{i}{\hbar} \mathbf{p}\cdot\mathbf{r}}|\mathbf{p}|^\alpha \varphi (\mathbf{p},t).
The index α in the fractional Schrödinger equation is the Lévy index, 1 < α ≤ 2.
See also[edit]
1. ^ For the history of the subject, see the thesis (in French): Stéphane Dugowson, Les différentielles métaphysiques (histoire et philosophie de la généralisation de l'ordre de dérivation), Thèse, Université Paris Nord (1994)
2. ^ Bologna, Mauro, Short Introduction to Fractional Calculus, Universidad de Tarapaca, Arica, Chile
3. ^ Hadamard, J., Essai sur l'étude des fonctions données par leur développement de Taylor, Journal of pure and applied mathematics, vol. 4, no. 8, pp. 101–186, 1892.
4. ^ Caputo, Michele (1967). "Linear model of dissipation whose Q is almost frequency independent-II". Geophys. J. R. Astr. Soc. 13: 529–539.
5. ^ Erdélyi, Arthur (1950–51). "On some functional transformations". Rendiconti del Seminario Matematico dell'Università e del Politecnico di Torino 10: 217–234. MR 0047818.
6. ^ Kober, Hermann (1940). "On fractional integrals and derivatives". The Quarterly Journal of Mathematics (Oxford Series) 11 (1): 193–211. doi:10.1093/qmath/os-11.1.193.
7. ^ Katugampola, U.N., New Approach To A Generalized Fractional Integral, Appl. Math. Comput. Vol 218, Issue 3, 1 October 2011, pages 860–865
8. ^ Wheatcraft, S., Meerschaert, M., (2008). "Fractional Conservation of Mass." Advances in Water Resources 31, 1377–1381.
9. ^ Benson, D., Wheatcraft, S., Meerschaert, M., (2000). "Application of a fractional advection-dispersion equation." Water Resources Res 36, 1403–1412.
10. ^ Benson, D., Wheatcraft, S., Meerschaert, M., (2000). "The fractional-order governing equation of Lévy motion." Water Resources Res 36, 1413–1423.
11. ^ Benson, D., Schumer, R., Wheatcraft, S., Meerschaert, M., (2001). "Fractional dispersion, Lévy motion, and the MADE tracer tests." Transport Porous Media 42, 211–240.
12. ^ Metzler, R., Klafter, J., (2000). "The random walk's guide to anomalous diffusion: a fractional dynamics approach." Phys. Rep., 339, 1-77.
13. ^ Chen, W., Sun, H.G., Zhang, X., Korosak, D., (2010). "Anomalous diffusion modeling by fractal and fractional derivatives." Computers and Mathematics with Applications, 59(5), 1754-1758. [1]
14. ^ Sun, H.G., Chen, W., Chen, Y.Q., (2009). "Variable-order fractional differential operators in anomalous diffusion modeling." Physica A, 2009, 388: 4586-4592.[2]
15. ^ Nolte, Kempfle and Schäfer (2003). "Does a Real Material Behave Fractionally? Applications of Fractional Differential Operators to the Damped Structure Borne Sound in Viscoelastic Solids", Journal of Computational Acoustics (JCA), Volume 11, Issue 3.
16. ^ S. Holm and S. P. Näsholm, "A causal and fractional all-frequency wave equation for lossy media," Journal of the Acoustical Society of America, Volume 130, Issue 4, pp. 2195–2201 (October 2011)
17. ^ S. P. Näsholm and S. Holm, "Linking multiple relaxation, power-law attenuation, and fractional wave equations," Journal of the Acoustical Society of America, Volume 130, Issue 5, pp. 3038-3045 (November 2011).
19. ^ Holm S., Näsholm, S. P., "Comparison of Fractional Wave Equations for Power Law Attenuation in Ultrasound and Elastography," Ultrasound Med. Biol., 40(4), pp. 695-703, DOI: 10.1016/j.ultrasmedbio.2013.09.033 Link to e-print
20. ^ N. Laskin, (2000), Fractional Quantum Mechanics and Lévy Path Integrals. Physics Letters 268A, 298-304.
21. ^ N. Laskin, (2002), Fractional Schrödinger equation, Physical Review E66, 056108 7 pages. (also available online:
Further reading[edit]
History of fractional calculus[edit]
• B. Ross, "A brief history and exposition of the fundamental theory of fractional calculus", in Fractional Calculus and Its Applications. Lecture Notes in Mathematics. Vol.457. (1975) 1-36.
• J. Tenreiro Machado, V. Kiryakova, F. Mainardi, "Recent history of fractional calculus", Communications in Nonlinear Science and Numerical Simulation. Vol.16. No.3. (2011) 1140–1153.
• L. Debnath, "A brief historical introduction to fractional calculus", International Journal of Mathematical Education in Science and Technology. Vol.35. No.4. (2004) 487-501.
• J.A. Tenreiro Machado, A.M.S.F. Galhano, J.J. Trujillo, "On development of fractional calculus during the last fifty years", Scientometrics. Vol.98. No.1. (2014) 577-582.
• J.A. Tenreiro Machado, A.M. Galhano, J.J. Trujillo, "Science metrics on fractional calculus development since 1966", Fractional Calculus and Applied Analysis. Vol.16. No.2. (2013) 479-500.
External links[edit] |
671f1e6b84f6dc8a | LOG#050. Why riemannium?
This special 50th log-entry is dedicated to 2 special people and scientists who inspired (and guided) me in the hard task of starting and writing this blog.
These two people are
1st. John C. Baez, a mathematical physicist. Author of the old but always fresh/brand new This Week Finds in Mathematical Physics, and now involved in the Azimuth blog. You can visit him here
and here
I was a mere undergraduate in the early years of the internet in my country when I began to read his TWF. If you have never done it, I urge to do it. Read him. He is a wonderful teacher and an excellent lecturer. John is now worried about global warming and related stuff, but he keeps his mathematical interests and pedagogical gifts untouched. I miss some topics about he used to discuss often before in his hew blog, but his insights about virtually everything he is involved into are really impressive. He also manages to share his entusiastic vision of Mathematics and Science. From pure mathematics to physics. He is a great blogger and scientist!
2nd. The professor Francis Villatoro. I am really grateful to him. He tries to divulge Science in Spain with his excellent blog ( written in Spanish language)
He is a very active person in the world of Spanish Science (and its divulgation). In his blog, he also tries to explain to the general public the latest news on HEP and other topics related with other branches of Physics, Mathematics or general Science. It is not an easy task! Some months ago, after some time reading and following his blog (as I do now yet, like with Baez’s stuff), I realized that I could not remain as a passive and simple reader or spectator in the web, so I wrote him and I asked him some questions about his experience with blogging and for advice. His comments and remarks were incredibly useful for me, specially during my first logs. I have followed several blogs the last years (like those by Baez or Villatoro), and I had no idea about what kind of style/scheme I should addopt here. I had only some fuzzy ideas about what to do, what to write and, of course, I had no idea if I could explain stuff in a simple way while keeping the physical intuition and the mathematical background I wanted to include. His early criticism was very helpful, so this post is a tribute for him as well. After all, he suggested me the topic of this post! I encourage you to read him and his blog (as long as you know Spanish or you can use a good translator).
Finally, let me express and show my deepest gratitude to John and Francis. Two great and extraordinary people and professionals in their respective fields who inspired (and yet they do) me in spirit and insight in my early and difficult steps of writing this blog. I am just convinced that Science is made of little, ordinary and small contributions like mine, and not only the greatest contributions like those making John and Francis to the whole world. I wish they continue making their contributions in the future for many, many years yet to come.
Now, let me answer the question Francis asked me to explain here with further details. My special post/log-entry number 50…It will be devoted to tell you why this blog is called The Spectrum of Riemannium, and what is behind the greatest unsolved problem in Number Theory, Mathematics and likely Physics/Physmatics as well…Enjoy it!
The Riemann zeta function is a device/object/function related to prime numbers.
In general, it is a function of complex variable s=\sigma+i\tau defined by the next equation:
\boxed{\displaystyle{\zeta (s)=\sum_{n=1}^{\infty}n^{-s}=\sum_{n=1}^{\infty}\dfrac{1}{n^s}=\prod_{p=2}^{\infty}\dfrac{1}{1-p^{-s}}=\prod_{p,\; prime}\dfrac{1}{1-p^{-s}}}}
\boxed{\displaystyle{\zeta (s)=\dfrac{1}{1-2^{-s}}\dfrac{1}{1-3^{-s}}\ldots\dfrac{1}{1-137^{-s}}\ldots}}
Generally speaking, the Riemann zeta function extended by analytical continuation to the whole complex plane is “more” than the classical Riemann zeta function that Euler found much before the work of Riemann in the XIX century. The Riemann zeta function for real and entire positive values is a very well known (and admired) series by the mathematicians. \zeta (1)=\infty due to the divergence of the harmonic series. Zeta values at even positive numbers are related to the Bernoulli numbers, and it is still lacking an analytic expression for the zeta values at odd positive numbers.
The Riemann zeta function over the whole complex plane satisfy the following functional equation:
\boxed{\pi^{-\frac{s}{2}}\Gamma \left(\dfrac{s}{2}\right)\zeta (s)=\pi^{-\frac{(1-s)}{2}}\Gamma \left(\dfrac{1-s}{2}\right)\zeta (1-s)}
Equivalently, it can be also written in a very simple way:
\boxed{\xi (s)=\xi (1-s)}
where we have defined
\xi (s)=\pi^{-\frac{s}{2}}\Gamma \left(\dfrac{s}{2}\right)\zeta (s)
Riemann zeta values are an example of beautiful Mathematics. From \displaystyle{\zeta (s)=\sum_{n=1}^{\infty}n^{-s}}, then we have:
1) \zeta (0)=1+1+\ldots=-\dfrac{1}{2}.
2) \zeta (1)=1+\dfrac{1}{2}+\dfrac{1}{3}+\ldots =\infty. The harmonic series is divergent.
3) \zeta (2)=1+\dfrac{1}{2^2}+\dfrac{1}{3^2}+\ldots =\dfrac{\pi^2}{6}\approx 1.645. The famous Euler result.
4) \zeta (3)=1+\dfrac{1}{2^3}+\dfrac{1}{3^3}+\ldots \approx 1.202. And odd zeta value called Apery’s constant that we do not know yet how to express in terms of irrational numbers.
5) \zeta (4)=\dfrac{\pi^4}{90}\approx 1.0823.
6) \zeta (-2n)=-\dfrac{\pi^{-n}}{2\Gamma (-n+1)}=0,\;\;\forall n=1,2,\ldots ,\infty. Trivial zeroes of zeta.
7) \zeta (2n)=\dfrac{(-1)^{n+1}(2\pi)^{2n}B_{2n}}{2(2n)!}\;\;\forall n=1,2,\ldots ,\infty, where B_{2n} are the Bernoulli numbers. The first 13 Bernoulli numbers are:
B_0=1, B_1=-\dfrac{1}{2}, B_2=\dfrac{1}{6}, B_3=0, B_4=-\dfrac{1}{30}, B_5=0, B_6=\dfrac{1}{42}
B_7=0, B_8=-\dfrac{1}{30}, B_9=0, B_{10}=\dfrac{5}{66}, B_{11}=0, B_{12}=-\dfrac{691}{2730}, B_{13}=0
8) We note that B_{2n+1}=0,\;\; \forall n\geq 1.
9) \zeta (-2n+1)=-\dfrac{B_{2n}}{2n}, \;\; \forall n=1,2,\ldots ,\infty.
For instance, \zeta (-1)=-\dfrac{1}{12}=1+2+3+\ldots, \zeta (-3)=\dfrac{1}{120}, and \zeta (-5)=-\dfrac{1}{252}. Indeed, \zeta (-1) arises in string theory trying to renormalize the vacuum energy of an infinite number of harmonic oscillators. The result in the bosonic string is \dfrac{2}{2-D}. In order to match with Riemann zeta function regularization of the above series, the bosonic string is asked to live in an ambient spacetime of D=26 dimensions. We also have that
\sum \vert n\vert^3=-\dfrac{1}{60}
10) \zeta (\infty)=1. The Riemann zeta value at the infinity is equal to the unit.
11) The derivative of the zeta function is \displaystyle{\zeta '(s)=-\sum_{n=1}^{\infty}\dfrac{\log n}{n^s}}. Particularly important of this derivative are:
\displaystyle{\zeta '(0)=-\sum_{n=1}^\infty \log n=-\log \prod_{n=1}^\infty n=\zeta (0)\log (2\pi)=-\dfrac{1}{2}\log (2\pi)=-\log \sqrt{2\pi}=\log \dfrac{1}{\sqrt{2\pi}}}
or \zeta '(0)=\log \sqrt{\dfrac{1}{2\pi}}
This allow us to define the factorial of the infinity as
\displaystyle{\infty !=\prod_{n=1}^{\infty}n=1\cdot 2\cdots \infty=e^{-\zeta '(0)}=\sqrt{2\pi}}
and the renormalized infinite dimensional determinant of certain operator A as:
\det _\zeta (A)=a_1\cdot a_2\cdots=\exp \left(-\zeta_A '(0)\right), with \displaystyle{\zeta _A (s)=\sum_{n=1}^\infty \dfrac{1}{a_n^s}}
12) \zeta (1+\varepsilon )=\dfrac{1}{\varepsilon}+\gamma_E +\mathcal{O} (\varepsilon ). This is a result used by theoretical physicists in dimensional renormalization/regularization. \gamma_E\approx 0.577 is the so-called Euler-Mascheroni constant.
The alternating zeta function, called Dirichlet eta function, provides interesting values as well. Dirichlet eta function is defined and related to the Riemann zeta fucntion as follows:
\boxed{\displaystyle{\eta (s)=\sum_{n=1}^\infty \dfrac{(-1)^{n+1}}{n^s}=\left(1-2^{1-s}\right)\zeta (s)}}
This can be thought as “bosons made of fermions” or “fermions made of bosons” somehow. Special values of Dirichlet eta function are given by:
\eta (0)=-\zeta (0)=\dfrac{1}{2} \eta (1)=\log 2 \eta (2)=\dfrac{1}{2}\zeta (2)=\dfrac{\pi^2}{12}
\eta (3)=\dfrac{3}{4}\zeta (3)\approx \dfrac{3}{4}(1.202) \eta (4)=\dfrac{7}{8}\zeta (4)=\dfrac{7}{8}\left(\dfrac{\pi^4}{90}\right)
Remark(I): \zeta(2) is important in the physics realm, since the spectrum of the hydrogen atom has the following aspect
and the Balmer formula is, as every physicist knows
\Delta E(n,m)=K\left(\dfrac{1}{n^2}-\dfrac{1}{m^2}\right)
Remark (II): The fact that \zeta (2) is finite implies that the energy level separation of the hydrogen atom in the Böhr level tends to zero AND that the sum of ALL the possible energy levels in the hydrogen atom is finite since \zeta (2) is finite.
Remark(III): What about an “atom”/system with spectrum E(n)=\kappa n^{-s}? If s=2, we do know that is the case of the Kepler problem. Moreover, it is easy to observe that s=-1 corresponds to tha harmonic oscillator, i.e., E(n)=\hbar \omega n. We also know that s=-2 is the infinite potential well. So the question is, what about a n^{-3} spectrum and so on?
In summary, does the following spectrum
with energy separation/splitting
\boxed{\Delta E(n,m;s)=\mathbb{K}\left(\dfrac{1}{n^{s}}-\dfrac{1}{m^{s}}\right)}
exist in Nature for some physical system beyond the infinite potential well, the harmonic oscillator or the hydrogen atom, where s=-2, s=-1 and s=2 respectively?
It is amazing how Riemann zeta function gets involved with a common origin of such a different systems and spectra like the Kepler problem, the harmonic oscillator and the infinite potential well!
The Riemann Hypothesis (RH) is the greatest unsolved problem in pure Mathematics, and likely, in Physics too. It is the statement that the only non-trivial zeroes of the Riemann zeta function, beyond the trivial zeroes at s=-2n,\;\forall n=1,2,\ldots,\infty have real part equal to 1/2. In other words, the equation or feynmanity has only the next solutions:
\boxed{\mbox{RH:}\;\;\zeta (s)=0\leftrightarrow \begin{cases} s_n=-2n,\;\forall n=1,\ldots,\infty\;\;\mbox{Trivial zeroes}\\ s_n=\dfrac{1}{2}\pm i\lambda_n, \;\;\forall n=1,\ldots,\infty \;\;\mbox{Non-trivial zeroes}\end{cases}}
I generally prefer the following projective-like version of the RH (PRH):
\boxed{\mbox{PRH:}\;\;\zeta (s)=0\leftrightarrow \begin{cases} s_n=-2n,\;\forall n=1,\ldots,\infty\;\;\mbox{Trivial zeroes}\\ s_n=\dfrac{1\pm i\overline{\lambda}_n}{2}, \;\;\forall n=1,\ldots,\infty \;\;\mbox{Non-trivial zeroes}\end{cases}}
The Riemann zeta function can be sketched on the whole complex plane, in order to obtain a radiography about the RH and what it means. The mathematicians have studied the critical strip with ingenious tools an frameworks. The now terminated ZetaGrid project proved that there are billions of zeroes IN the critical line. No counterexample has been found of a non-trivial zeta zero outside the critical line (and there are some arguments that make it very unlikely). The RH says that primes “have music/order/pattern” in their interior, but nobody has managed to prove the RH. The next picture shows you what the RH “say” graphically:
If you want to know how the Riemann zeroes sound, M. Watkins has done a nice audio file to see their music.
You can learn how to make “music” from Riemann zeroes here http://empslocal.ex.ac.uk/people/staff/mrwatkin/zeta/munafo-zetasound.htm
And you can listen their sound here
Riemann zeroes are connected with prime numbers through a complicated formula called “the explicit formula”. The next equation holds \forall x\geq 2 integer numbers, and non-trivial Riemann zeroes in the complex (upper) half-plane with \tau>0:
\boxed{\displaystyle{\pi (x)+\sum_{n=2}^\infty \dfrac{\pi \left( x^{1/n}\right)}{n}=\text{Li} (x)-\sum_{\lambda =\sigma+i\tau }\left(\text{Li}(x^\lambda)+\text{Li}\left( x^{1-\lambda}\right)\right)+\int_x^\infty\dfrac{du}{u(u^2-1)\ln u}-\ln 2}}
and where \pi (x) is the celebrated Gauss prime number counting function, i.e., \pi (x) represents the prime numbers that are equal than x or below. This explicit formula was proved by Hadamard. The explicit formula follows from both product representations of \zeta (s), the Euler product on one side and the Hadamard product on the other side.
The function \text{Li} (x), sometimes written as \text{li} (x), is the logarithmic integral
\displaystyle{\text{Li} (x) =\text{li} (x)= \int_2^x\dfrac{du}{\ln x}}
The explicit formula comes in some cool variants too. For instance, we can write
\pi (x)=\pi_0 (x)+\pi_1 (x)=\pi_{\mbox{smooth}}+\pi_{\mbox{osc-chaotic}}
\displaystyle{\pi_0 (x)=\sum_{n=1}^\infty\dfrac{\mu (n)}{n}\left[\mbox{Li}(x^{1/n})-\sum_{k=1}^\infty\mbox{Li}(x^{-2k/n})\right]}
\displaystyle{\pi_1 (x)=-2\mbox{Re}\sum_{n=1}^\infty\dfrac{\mu (n)}{n}\sum_{\alpha=1}^\infty\mbox{Li}(x^{(\sigma_\alpha+i\tau_\alpha)/n})}
For large values of x, we have the asymptotics
\pi_0 (x)\approx \mbox{Li} (x)
\displaystyle{\pi_1 (x)\approx -\dfrac{2}{\ln x}\sum_{\alpha=1}^\infty\dfrac{x^{\sigma_\alpha}}{\sigma_\alpha^2+\tau_\alpha^2}\left(\sigma_\alpha\cos (\tau_\alpha \ln x)+\tau_\alpha \sin (\tau_\alpha \ln x)\right)}
Remark: Please, don’t confuse the logarithmic integral with the polylogarithm function \text{Li}_x (s).
Gauss also conjectured that
\pi (x)\sim \text{Li} (x)
Date: January 3, 1982. Andrew Odlyzko wrote a letter to George Pólya about the physical ground/basis of the Riemann Hypothesis and the conjecture associated to Polya himself and David Hilbert. Polya answered and told Odlyzko that while he was in Göttingen around 1912 to 1914 he was asked by Edmund Landau for a physical reason that the Riemann Hypothesis should be true, and suggested that this would be the case if the imaginary parts, say T of the non-trivial zeros
of the Riemann zeta function corresponded to eigenvalues of an unbounded and unknown self adjoint operator \hat{T}. That statement was never published formally, but it was remembered after all, and it was transmitted from one generation to another. At the time of Pólya’s conversation with Landau, there was little basis for such speculation. However, Selberg, in the early 1950s, proved a duality between the length spectrum of a Riemann surface and the eigenvalues of its Laplacian. This so-called Selberg trace formula shared a striking resemblance to the explicit formula of certain L-function, which gave credibility to the speculation of Hilbert and Pólya.
Dialogue(circa 1970). “(…)Dyson: So tell me, Montgomery, what have you been up to? Montgomery: Well, lately I’ve been looking into the distribution of the zeros of the Riemann zeta function. Dyson: Yes? And? Montgomery: It seems the two-point correlations go as….(…) Dyson: Extraordinary! Do you realize that’s the pair-correlation function for the eigenvalues of a random Hermitian matrix? It’s also a model of the energy levels in a heavy nucleus, say U-238.(…)”
A step further was given in the 1970s, by the mathematician Hugh Montgomery. He investigated and found that the statistical distribution of the zeros on the critical line has a certain property, now called Montgomery’s pair correlation conjecture. The Riemann zeros tend not to cluster too closely together, but to repel. During a visit to the Institute for Advanced Study (IAS) in 1972, he showed this result to Freeman Dyson, one of the founders of the theory of random matrices. Dyson realized that the statistical distribution found by Montgomery appeared to be the same as the pair correlation distribution for the eigenvalues of a random and “very big/large” Hermitian matrix with size NxN. These distributions are of importance in physics and mathematics. Why? It is simple. The eigenstates of a Hamiltonian, for example the energy levels of an atomic nucleus, satisfy such statistics. Subsequent work has strongly borne out the connection between the distribution of the zeros of the Riemann zeta function and the eigenvalues of a random Hermitian matrix drawn from the theoyr of the so-calle Gaussian unitary ensemble, and both are now believed to obey the same statistics. Thus the conjecture of Pólya and Hilbert now has a more solid fundamental link to QM, though it has not yet led to a proof of the Riemann hypothesis. The pair-correlation function of the zeros is given by the function:
R_2(x)=1-\left(\dfrac{\sin \pi x}{\pi x}\right)^2
In a posterior development that has given substantive force to this approach to the Riemann hypothesis through functional analysis and operator theory, the mathematician Alain Connes has formulated a “trace formula” using his non-commutative geometry framework that is actually equivalent to certain generalized Riemann hypothesis. This fact has therefore strengthened the analogy with the Selberg trace formula to the point where it gives precise statements. However, the mysterious operator believed to provide the Riemann zeta zeroes remain hidden yet. Even worst, we don’t even know on which space the Riemann operator is acting on.
However, some trials to guess the Riemann operator has been given from a semiclassical physical environtment as well. Michael Berry and Jon Keating have speculated that the Hamiltonian/Riemann operator H is actually some kind of quantization of the classical Hamiltonian XP where P is the canonical momentum associated with the position operator X. If that Berry-Keating conjecture is true. The simplest Hermitian operator corresponding to XP is
H = \dfrac1{2} (xp+px) = - i \left( x \dfrac{\mathrm{d}}{\mathrm{d} x} + \dfrac{1}{2} \right)
At current time, it is still quite inconcrete, as it is not clear on which space this operator should act in order to get the correct dynamics, nor how to regularize it in order to get the expected logarithmic corrections. Berry and Germán Sierra, the latter in collaboration with P.K.Townsed, have conjectured that since this operator is invariant under dilatations perhaps the boundary condition f(nx)=f(x) for integer n may help to get the correct asymptotic results valid for big n. That it, in the large n we should obtain
s_n=\dfrac{1}{2} + i \dfrac{ 2\pi n}{\log n}
Indeed, the Berry-Keating conjecture opened another striking attack to prove the RH. A topic that was popular in the 80’s and 90’s in the 20th century. The weird subject of “quantum chaos”. Quantum chaos is the subject devoted to the study of quantum systems corresponding to classically chaotic systems. The Berry-Keating conjecture shed light further into the Riemann dynamics, sketching some of the properties of the dynamical system behind the Riemann Hypothesis.
In summary, the dynamics of the Riemann operator should provide:
1st. The quantum hamiltonian operator behind the Riemann zeroes, in addition to the classical counterpart, the classical hamiltonian H, has a dynamics containing the scaling symmetry. As a consequence, the trajectories are the same at all energy scale.
2nd. The classical system corresponding to the Riemann dynamics is chaotic and unstable.
3rd. The dynamics lacks time-reversal symmetry.
4th. The dynamics is quasi one-dimensional.
A full dictionary translating the whole correspondence between the chaotic system corresponding to the Riemann zeta function and its main features is presented in the next table:
In 2001, the following paper emerged, http://arxiv.org/abs/nlin/0101014. The Riemannium arxiv paper was published later (here: Reg. Chaot. Dyn. 6 (2001) 205-210). After that, Brian Hayes wrote a really beautiful, wonderful and short paper titled The Spectrum of Riemannium in 2003 (American Scientist, Volume 91, Number 4 July–August, 2003,pages 296–300). I remember myself reading the manuscript and being totally surprised. I was shocked during several weeks. I decided that I would try to understand that stuff better and better, and, maybe, make some contribution to it. The Spectrum of Riemannium was an amazing name, an incredible concept. So, I have been studying related stuff during all these years. And I have my own suspitions about what the riemannium and the zeta function are, but this is not a good place to explain all of them!
The riemannium is the mysterious physical system behind the RH. Its spectrum, the spectrum of riemannium, are given by the RH and its generalizations.
Moreover, the following sketch from Hayes’ paper is also very illustrative:
What do you think? Isn’t it suggestive? Is it amazing?
Riemann zeta function also arises in the renormalization of the Standard Model and the regularization of determinants with “infinite size” (i.e., determinants of differential operators and/or pseudodifferential operators). For instance, the \infty-dimensional regularized determinant is defined through the Riemann zeta function as follows:
\displaystyle{\det _\zeta \mathcal{P}=e^{-\zeta_{\mathcal{P}}^{'}(0)}}
The dimensional renormalization/regularization of the SM makes use of the Riemann zeta function as well. It is ubiquitous in that approach, but, as far as I know, nobody has asked why is that issue important, as I have suspected from long time ago.
Riemann zeta function is also used in the theory of Quantum Statistics. Quantum Statistics are important in Cosmology and Condensed Matter, so it is really striking that Riemann zeta values are related to phenomena like Bose-Einstein condensation or the Cosmic Microwave Background and also the yet to be found Cosmic Neutrino Background!
Let me begin with the easiest quantum (indeed classical) statistics, the Maxwell-Boltzmann (MB) statistics. In 3 spatial dimensions (3d) the MB distribution arises ( we will work with units in which \hbar =1):
f(p)_{MB}=\dfrac{1}{(2\pi)^3}e^{\frac{\mu -E}{k_BT}}
Usually, there are 3 thermodynamical quantities that physicists wish to compute with statistical distributions: 1) the number density of particles n=N/V, 2) the energy density \varepsilon=U/V and 3) the pressure P. In the case of a MB distribution, we have the following definitions:
\displaystyle{n=\dfrac{1}{(2\pi)^3}\int d^3p e^{\frac{\mu -E}{k_BT}}}
\displaystyle{\varepsilon =\dfrac{1}{(2\pi)^3}\int d^3p Ee^{\frac{\mu -E}{k_BT}}}
\displaystyle{\varepsilon =\dfrac{1}{(2\pi)^3}\int d^3p \dfrac{1}{3}\dfrac{\vert\mathbf{p}\vert^2}{E}e^{\frac{\mu -E}{k_BT}}}
We can introduce the dimensionless variables $late z=\dfrac{mc^2}{k_BT}$, \tau =\dfrac{E}{k_BT}=\dfrac{\sqrt{p^2+m^2c^4}}{k_BT}. In this way,
\vert p\vert=\dfrac{k_BT}{c}\sqrt{\tau^2-z^2}
c^2\vert\mathbf{p}\vert d\vert \mathbf{p}\vert=k_B^2T^2\tau d\tau
With these definitions, the particle density becomes
\displaystyle{n=\dfrac{4\pi k_B^3T^3}{(2\pi)^3}e^{\frac{\mu}{k_BT}}\int_z^\infty d\tau (\tau^2-z^2)^{1/2}\tau e^{-\tau}}
This integral can be calculated in closed form with the aid of modified Bessel functions of the 2th kind:
K_n (z)=\dfrac{2^nn!}{(2n)!z^n}\int_z^\infty d\tau (\tau^2-z^2)^{n-1/2}e^{-\tau} or equivalently
K_n (z)=\dfrac{2^{n-1}(n-1)!}{(2n-2)!z^n}\int_z^\infty d\tau (\tau^2-z^2)^{n-3/2}\tau e^{-\tau}
K_{n+1} (z)=\dfrac{2nK_n (z)}{z}+K_{n-1} (z)
\displaystyle{K_2 (x)=\dfrac{1}{z^2}\int_z^\infty (\tau^2-z^2)^{1/2}\tau e^{-\tau}d\tau}
And thus, we have the next results (setting c=1 for simplicity):
\mbox{Particle number density}\equiv n=\dfrac{N}{V}=\dfrac{k_B^3T^3}{2\pi^2}z^2K_2 (z)=\dfrac{k_B^3T^3}{2\pi^2}\left(\dfrac{m}{k_BT}\right)^2K_2\left(\dfrac{m}{k_BT}\right)e^{\frac{\mu}{k_BT}}
\mbox{Energy density}\equiv\varepsilon=\dfrac{k_B^4T^4}{2\pi^2}\left[ 3\left(\dfrac{m}{k_BT}\right)^2K_2\left(\dfrac{m}{k_BT}\right)+\left(\dfrac{m}{k_BT}\right)^3K_1\left(\dfrac{m}{k_BT}\right)\right]e^{\frac{\mu}{k_BT}}
Even entropy density is easiy to compute:
\mbox{Entropy density}\equiv s=\dfrac{m^3}{2\pi^2}e^{\frac{\mu}{k_BT}}\left[ K_1\left(\dfrac{m}{k_BT}\right)+\dfrac{4k_BT-\mu}{m}K_2\left(\dfrac{m}{k_BT}\right)\right]
These results can be simplified in some limit cases. For instance, in the massless limit z=m/k_BT\rightarrow 0. Moreover, we also know that \displaystyle{\lim_{z\rightarrow 0}z^nK_n (z)=2^{n-1}(n-1)!}. In such a case, we obtain:
n\approx \dfrac{k_B^3T^3}{\pi^2}e^{\frac{\mu}{k_BT}}
\varepsilon \approx \dfrac{3k_B^4T^4}{\pi^2}e^{\frac{\mu}{k_BT}}
P\approx \dfrac{k_B^4T^4}{\pi^2}e^{\frac{\mu}{k_BT}}
We note that \varepsilon=3P in this massless limit.
Remark (I): In the massless limit, and whenever there is no degeneracy, \varepsilon =3P holds.
Remark (II): If there is a quantum degeneracy in the energy levels, i.e., if g\neq 1, we must include an extra factor of g_j=2j+1 for massive particles of spin j. For massless photons with helicity, there is a g=2 degeneracy.
Remark (III): In the D-dimensional (D=d+1) Bose gas with dispersion relationship \varepsilon_p=cp^{s}, it can be shown that the pressure is related with the energy density in the following way
\mbox{Pressure}\equiv P=\dfrac{s}{d}\dfrac{U}{V}=\dfrac{s}{d}\varepsilon
Remark (IV): Let us define p^s (n) as the number of ways an integer number can be expressed as a sum of the sth powers of integers. For instance,
p^1 (5)=7 because 5=4+1=3+2=3+1+1=2+2+1=2+1+1+1=1+1+1+1+1
p^2 (5)=2 because 5=2^2+1^2=1^2+1^2+1^2+1^2+1^2
If E_n=n^s with n\geq 1 and s>0, then x=e^{-\beta} and the partition function is
\displaystyle{Z=\prod_{k}\left( 1+e^{\frac{\mu-E}{k_BT}}\right)}
We will see later that \displaystyle{\sum_{N=0}^\infty x^N=\begin{cases}1+x, FD \\ \dfrac{1}{1-x}, BE\end{cases}}
with \mu =0 is nothing but the generatin function of the partitions p^s (n)
\displaystyle{Z(x=e^{-\beta})=\prod_{n=1}^\infty \dfrac{1}{1-x^{n^s}}=\sum_{n=1}^\infty p^s (n) x^n\approx \int_1^\infty dn p^s (n) e^{-\beta n}}
The Hardy-Ramanujan inversion formula reads (for the case s=1 only):
p(n) \approx \dfrac{1}{4\sqrt{3}N}e^{\pi\sqrt{2N/3}}
Remark (V): There are some useful integrals in quantum statistics. They are the so-called Bose-Einstein/Fermi-Dirac integrals
\displaystyle{\int_0^\infty dx \dfrac{x^{n-1}}{e^x\mp 1}=\begin{cases}\Gamma (n) \zeta (n), \;\; BE\\ \Gamma (n)\eta (n)=\Gamma (n) (1-2^{1-n})\zeta (n),\;\; FD\end{cases}}
The BE-FD quantum distributions in 3d are defined as follows:
where the minus sign corresponds to FD and the plus sign to BE.
We will firstly study the BE distribution in 3d. We have:
\displaystyle{n=\dfrac{1}{(2\pi)^3}\int d^3p \left(e^{\frac{\mu-E}{k_BT}}-1\right)^{-1}=\dfrac{1}{(2\pi)^3}\int d^3p \sum_{n=1}^{\infty}(+1)^{n+1}e^{\frac{n\mu-nE}{k_BT}}}
Introducing a scaled temperature T'=T/n, we get
\displaystyle{n=\sum_{n=1}^{\infty}\left[\dfrac{1}{(2\pi)^3}\int d^3p e^{\frac{n\mu-nE}{k_BT'}}\right]=\sum_{n=1}^{\infty}\dfrac{k_B^3T^3}{2\pi^2}\dfrac{1}{n^3}\left(\dfrac{nm}{k_BT}\right)^2K_2\left(\dfrac{nm}{k_BT}\right)e^{\frac{n\mu}{k_BT}}}
Again, we can study a particularly simple case: the massless limit m\rightarrow 0 with \mu\rightarrow 0. In this case, we get:
\displaystyle{n=\dfrac{k_B^3T^3}{\pi^2}\sum_{n=1}^\infty \dfrac{1}{n^3}=\dfrac{k_B^3T^3}{\pi^2}\zeta (3)\approx 1.202\dfrac{k_B^3T^3}{\pi^2}}
\displaystyle{\varepsilon=\sum_{n=1}^\infty\dfrac{3(k_BT)^4}{\pi^2}\dfrac{1}{n^4}=\dfrac{3(k_BT)^4\zeta (4)}{\pi^2}=\dfrac{\pi^2}{30}(k_BT)^4}
\displaystyle{P=\sum_{n=1}^\infty\dfrac{(k_BT)^4}{\pi^2}\dfrac{1}{n^4}=\dfrac{(k_BT)^4\zeta (4)}{\pi^2}=\dfrac{\pi^2(k_BT)^4}{90}}
The FD distribution in 3d can be studied in a similar way. Following the same approach as the BE distribution, we deduce that:
\displaystyle{n=\sum_{n=1}^\infty (-1)^{n+1}\dfrac{(k_BT)^3}{2\pi^2n^3}\left(\dfrac{nm}{k_BT}\right)^2K_2\left(\dfrac{nm}{k_BT}\right)e^{\frac{\mu n}{k_BT}}}
\displaystyle{\varepsilon= \sum_{n=1}^\infty (-1)^{n+1}\dfrac{(k_BT)^4}{2\pi^2}\left[3\left(\dfrac{nm}{k_BT}\right)^2K_2\left(\dfrac{nm}{k_BT}\right)+\left(\dfrac{nm}{k_BT}\right)^3K_1\left(\dfrac{nm}{k_BT}\right)\right]e^{\frac{\mu n}{k_BT}}}
and again the massless limit m=0 and \mu\rightarrow 0 provide
\displaystyle{n\approx \dfrac{(k_BT)^3}{\pi^2}\sum_{n=1}^\infty (-1)^{n+1}\dfrac{1}{n^3}=\dfrac{(k_BT)^3}{\pi^2}\eta (3)=\dfrac{(k_BT)^3}{\pi^2}\left(\dfrac{3}{4}\right)\zeta (3)}
\displaystyle{\varepsilon\approx \dfrac{3(k_BT)^4}{\pi^2}\sum_{n=1}^\infty (-1)^{n+1}\dfrac{1}{n^4}=3(k_BT)^4\eta (4)=3(k_BT)^4\dfrac{7}{8}\zeta (4)=\dfrac{\pi^2(k_BT)^4}{30}\left(\dfrac{7}{8}\right)}
\displaystyle{P\approx \dfrac{(k_BT)^4}{\pi^2}\sum_{n=1}^\infty (-1)^{n+1}\dfrac{1}{n^4}=\left(\dfrac{7}{8}\right)\dfrac{\pi^2(k_BT)^4}{90}}
Remark (I): For photons \gamma with degeneracy g=2 we obtain
n_\gamma =\dfrac{2\zeta (3) (k_BT)^3}{\pi^2}
\varepsilon_\gamma= 3P_\gamma =\dfrac{\pi^2 (k_BT)^4}{15}
s_\gamma =P'(T)=\dfrac{4}{3}\left(\dfrac{\pi^2}{15}\right)(k_BT)^3=\dfrac{2\pi^4}{45\zeta (3)}n
Remark (II): In Cosmology, Astrophysics and also in High Energy Physics, the following units are used
1eV=1.602\cdot 10^{-19}J
\hbar=1=6.58\cdot 10^{-22}MeVs=7.64\cdot 10^{-12}Ks
\hbar c=1=0.19733GeV\cdot fm=0.2290 K\cdot cm
1 K=0.1532\cdot 10^{-36}g\cdot c^2
The Cosmic Microwave Background is the relic photon radiation of the Big Bang, and thus it has a temperature due to photons in the microwave band of the electromagnetic spectrum. Its value is:
T_\gamma \approx 2.725K
Indeed, it also implies that the relic photon density is about n_\gamma =410\dfrac{1}{cm^3}
It is also speculated that there has to be a Cosmic Neutrino Background relic from the Big Bang. From theoretical Cosmology, it is related to the photon CMB temperature in the following way:
T_\nu =\left(\dfrac{4}{11}\right)^{1/3}2.7K or equivalently
T_\nu\approx 1.9K
This temperature implies a relic neutrino density (per species, i.e., with g_\nu=1) about
The cosmological density entropy due to these particles is
s_0=\dfrac{S_0}{V}=\dfrac{4\pi^2}{45}\left[1+\dfrac{2\cdot 3}{2}\left(\dfrac{7}{8}\right)\left(\dfrac{4}{11}\right)\right]T_{0\gamma}^3=2810\dfrac{1}{cm^3}\left( \dfrac{T_{0\gamma}}{2.7K}\right)^3
and then we get
s_0\approx 7.03n_{0\gamma}
Remark (III): In Cosmology, for fermions in 3d ( note that BE implies \varepsilon=3P, and that we must drop the factors \left( 7/8\right), \left( 3/4\right), \left( 7/6\right) in the next numerical values) we can compute
n=\begin{cases}\left(\dfrac{g}{2}\right)\left(\dfrac{3}{4}\right)\dfrac{2\zeta (3)}{\pi^2}(k_BT)^3\\ \left(\dfrac{g}{2}\right)\left(\dfrac{3}{4}\right)31.700\left(\dfrac{k_BT}{GeV}\right)^3\dfrac{1}{fm^3}\\ \left(\dfrac{g}{2}\right)\left(\dfrac{3}{4}\right)20.288\left(\dfrac{T}{K}\right)^3\dfrac{1}{cm^3}\end{cases}
\varepsilon=3P=\begin{cases}\left(\dfrac{g}{2}\right)\left(\dfrac{7}{8}\right)\left(\dfrac{\pi^2}{15}\right)(k_BT)^4\\ \left(\dfrac{g}{2}\right)\left(\dfrac{7}{8}\right)(85.633)\left(\dfrac{k_BT}{GeV}\right)\dfrac{GeV}{fm^3}\\ \left(\dfrac{g}{2}\right)\left(\dfrac{7}{8}\right)\left(0.841\cdot 10^{-36}\right)\left(\dfrac{T}{K}\right)^4\dfrac{g}{cm^3}\end{cases}
s=\dfrac{S}{V}=\left(\dfrac{g}{2}\right)\left(\dfrac{7}{8}\right)\left(\dfrac{4\pi^2}{45}\right)(k_BT)^3=\dfrac{7}{6}\left[\dfrac{2\pi^4}{45\zeta (3)}\right] n
Remark (IV): An example of the computation of degeneracy factor is the quark-gluon plasma degeneracy g_{QGP}. Firstly we compute the gluon and quark degeneracies
g_g=(\mbox{color})(\mbox{spin})=2^3\cdot 2=8\cdot 2=16
g_q=(p\overline{p})(\mbox{spin})(\mbox{color})(\mbox{flavor})=2\cdot 2\cdot 3\cdot N_f=12N_f
Then, the QG plasma degeneracy factor is
g_{QGP}=g_g+\dfrac{7}{8}g_q=16+\dfrac{7}{8}12N_f=16+\dfrac{21}{2}N_f \leftrightarrow \boxed{g_{QGP}=16+\dfrac{21}{2}N_f}
In general, for charged leptons and nucleons g=2, g=1 for neutrinos (per species, of course), and g=2 for gluons and photons. Remember that massive particles with spin j will have g_j=2j+1.
Remark (V): For the Planck distribution, we also get the known result for the thermal distribution of the blackbody radiation
\displaystyle{I(T)=\int_0^\infty f(\nu ,T)d\nu=\dfrac{8\pi h}{c^3}\int_0^\infty \dfrac{\nu^3d\nu}{e^{\frac{h\nu}{k_BT}}-1}=\dfrac{8\pi^5k_B^4T^4}{15c^3h^3}}
Remark (VI): Sometimes the following nomenclature is used
i) Extremely degenerated gas if \mu>>k_BT
ii) Non-degenerated gas if \mu <<-k_BT
iii) Extremely relativistic gas ( or ultra-relativistic gas) if p>> mc
iv) Non-relativistic gas if p<<mc
Let us define the following shift operator \hat{T}:
where \sigma\in \mathbb{R}. Moreover, there is certain isomorphism between the shift operator space and the space of functions through the map \hat{T}\leftrightarrow x^\sigma.
We define the generalized logarithm as the image under the previous map of \hat{T}. That is:
\displaystyle{\mbox{Log}_G(x)\equiv \dfrac{1}{\sigma}\sum_{n=l}^{m}k_n x^{\sigma n}}
where l,m\in \mathbb{Z}, with l<m, m-l=r and x>0. Furthermore, the next contraints are also given for every generalized logarithm:
1st. \displaystyle{\sum_{n=1}^m k_n=0}.
2nd. \displaystyle{\sum_{n=l}^m nk_n=c}, k_m\neq 0, and k_l\neq 0.
3rd. \displaystyle{\sum_{n=l}^m\vert n\vert^l k_n=K_l}, \forall l=2,3,\ldots ,m-l and where K_l \in \mathbb{R}.
With these definitions we also have that
A) \mbox{Log}_G(x)=\ln (x)
B) \mbox{Log}_G(1)=0
Examples of generalized logarithms are:
1) The Tsallis logarithm.
2) The Kaniadakis logarithm.
3) The Abe logarithm.
\mbox{Log}_A(x)=\dfrac{x^{\sigma -1}-x^{\sigma^{-1}-1}}{\sigma-\sigma^{-1}}
4) The biparametric logarithm.
with a=\sigma-1 and b=\sigma^{-1}-1 in the case of the Abe logarithm.
Group entropies are defined through the use of generalized logarithms. Define some discrete probability distribution \left[ p_i\right]_{i=1,\ldots,W} with normalization \displaystyle{\sum_{i=1}^Wp_i=1}. Therefore, the group entropy is the following functional sum:
\boxed{\displaystyle{S_G=-k_B\sum_{i=1}^{W}p_i \mbox{Log}_G \left(\dfrac{1}{p_i}\right)}}
where we have used the previous definition of generalized logarithm and the Boltzmann’s constant k_B is a real number. It is called group entropy due to the fact that S_G is connected to some universal formal group. This formal group will determine some correlations for the class of physical systems under study and its invariant properties. In fact, the Tsallis logarithm itself is related to the Riemann zeta function through a beautiful equation! Under the Tsallis group exponential, the isomorphism x\leftrightarrow e^t is defined to be e_G^t=\dfrac{e^{(1-q)t}-1}{1-q}, and thus we easily get:
\displaystyle{\dfrac{1}{\Gamma (s)}=\int_0^\infty\dfrac{1}{\dfrac{e^{(1-q)t}-1}{1-q}}t^{s-1}dt=\dfrac{\zeta (s)}{(1-q)^{s-1}}}
\forall s such as Re (s)>1 and q<1.
The primon gas/free Riemann gas is a statistical mechanics toy model illustrating in a simple way some correspondences between number theory and concepts in statistical physics, quantum mechanics, quantum field theory and dynamical systems.
The primon gas IS a quantum field theory (QFT) of a set of non-interacting particles, called the “primons”. It is also named a gas or a free model because the particles are non-interacting. There is no potential. The idea of the primon gas was independently discovered by Donald Spector (D. Spector, Supersymmetry and the Möbius Inversion Function, Communications in Mathemtical Physics 127 (1990) pp. 239-252) and Bernard Julia (Bernard L. Julia, Statistical theory of numbers, in Number Theory and Physics, eds. J. M. Luck, P. Moussa, and M. Waldschmidt, Springer Proceedings in Physics, Vol. 47, Springer-Verlag, Berlin, 1990, pp. 276-293). There have been later works by Bakas and Bowick (I. Bakas and M.J. Bowick, Curiosities of Arithmetic Gases, J. Math. Phys. 32 (1991) p. 1881) and Spector (D. Spector, Duality, Partial Supersymmetry, and Arithmetic Number Theory, J. Math. Phys. 39 (1998) pp.1919-1927) in which it was explored the connection of such systems to string theory.
This model is based on some simple hypothesis:
1st. Consider a simple quantum Hamiltonian, H, having eigenstates \vert p\rangle labelled by the prime numbers “p”.
2nd. The eigenenergies or spectrum are given by E_p and they have energies proportional to \log p. Mathematically speaking,
H\vert p\rangle = E_p \vert p\rangle with E_p=E_0 \log p
Please, note the natural emergence of a “free” scale of energy E_0. What is this scale of energy? We do not know!
3rd. The second quantization/second-quantized version of this Hamiltonian converts states into particles, the “primons”. Multi-particle states are defined in terms of the numbers k_p of primons in the single-particle states p:
|N\rangle = |k_2, k_3, k_5, k_7, k_{11}, \ldots, k_{137},\ldots, k_p \ldots\rangle
This corresponds to the factorization of N into primes:
N = 2^{k_2} \cdot 3^{k_3} \cdot 5^{k_5} \cdot 7^{k_7} \cdot 11^{k_{11}} \cdots 137^{k_{137}}\cdots p^{k_p} \cdots
The labelling by the integer “N” is unique, since every number has a unique factorization into primes.
The energy of such a multi-particle state is clearly
\displaystyle{E(N) = \sum_p k_p E_p = E_0 \cdot \sum_p k_p \log p = E_0 \log N}
4th. The statistical mechanics partition function Z IS, for the (bosonic) primon gas, the Riemann zeta function!
\displaystyle{Z_B(T) \equiv\sum_{N=1}^\infty \exp \left(-\dfrac{E(N)}{k_B T}\right) = \sum_{N=1}^\infty \exp \left(-\dfrac{E_0 \log N}{k_B T}\right) = \sum_{N=1}^\infty \dfrac{1}{N^s} = \zeta (s)}
with s=E_0/k_BT=\beta E_0, and where k_B is the Boltzmann’s constant and T is the absolute temperature. The divergence of the zeta function at the value s=1 (corresponding to the harmonic sum) is due to the divergence of the partition function at certain temperature, usually called Hagedorn temperature. The Hagedorn temperature is defined by:
This temperature represents a limit beyond the system of (bosonic) primons can not be heated up. To understand why, we can calculate the energy
E=-\dfrac{\partial}{\partial \beta}\ln Z_B=-\dfrac{E_0}{\zeta (\beta E_0)}\dfrac{\partial \zeta (\beta E_0)}{\partial \beta}\approx \dfrac{E_0}{s-1}
A similar treatment can be built up for fermions rather than bosons, but here the Pauli exclusion principle has to be taken into account, i.e. two primons cannot occupy the same single particle state. Therefore m_i can be 0 or 1 for all single particle state. As a consequence, the many-body states are labeled not by the natural numbers, but by the square-free numbers. These numbers are sieved from the natural numbers by the Möbius function. The calculation is a bit more complex, but the partition function for a non-interacting fermion primon gas reduces to the relatively simple form
Z_F(T)=\dfrac{\zeta (s)}{\zeta (2s)}
The canonical ensemble is of course not the only ensemble used in statistical physics. Julia extended the Riemann gas approach to the grand canonical ensemble by introducing a chemical potential \mu (Julia, B. L., 1994, Physica A 203(3-4), 425), and thus, he replaced the primes p with new primes pe^{-\mu}. This generalisation of the Riemann gas is called the Beurling gas, after the Swedish mathematician Beurling who had generalised the notion of prime numbers. Examining a boson primon gas with fugacity -1, it shows that its partition function becomes
\overline{Z}_B=\dfrac{\zeta (2s)}{\zeta (s)}
Remarkable interpretation: pick a system, formed by two sub-systems not interacting with each other, the overall partition function is simply the product of the individual partition functions of the subsystems. From the previous equation of the free fermionic riemann gas we get exactly this structure, and so there are two decoupled systems. Firstly, a fermionic “ghost” Riemann gas at zero chemical potential and, secondly, a boson Riemann gas with energy-levels given by E(N)=2E_0\ln p_N. Julia also calculated the appropriate Hagedorn temperatures and analysed how the partition functions of two different number theoretical gases, the Riemann gas and the “log-gas” behave around the Hagedorn temperature. Although the divergence of the partition function hints the breakdown of the canonical ensemble, Julia also claims that the continuation across or around this critical temperature can help understand certain phase transitions in string theory or in the study of quark confinement. The Riemann gas, as a mathematically tractable model, has been followed with much attention because the asymptotic density of states grows exponentially, \rho (E)\sim e^E, just as in string theory. Moreover, using arithmetic functions it is not extremely hard to define a transition between bosons and fermions by introducing an extra parameter, called kappa \kappa, which defines an imaginary particle, the non-interacting parafermions of order \kappa. This order parameter counts how many parafermions can occupy the same state, i.e. the occupation number of any state falls into the interval \left[0,\kappa-1\right], and therefore \kappa=2 belongs to normal fermions, while \kappa\rightarrow\infty are the usual bosons. Furthermore, the partition function of a free, non-interacting κ-parafermion gas can be defined to be (Bakas and Bowick,1991, in the paper Bakas, I., and M. J. Bowick, 1991, J. Math. Phys. 32(7), 1881):
Z_\kappa=\dfrac{\zeta (s)}{\zeta (\kappa s)}
Indeed, Bakas et al. proved, using the Dirichlet convolution \star, how one can introduce free mixing of parafermions with different orders which do not interact with each other
\displaystyle{f\star g=\sum_{d\vert n}f(d)g\left(\dfrac{n}{d}\right)}
where the symbol d\vert n means d is a divisor of n. This operation preserves the multiplicative property of the classically defined partition functions, i.e., Z_{\kappa_1\star \kappa_2}=Z_{\kappa_1}\star Z_{\kappa_2}. It is even more intriguing how interaction can be incorporated into the mixing by modifying the Dirichlet convolution with a kernel function or twisting factor
\displaystyle{f\odot g=\sum_{d\vert n}f(d)g\left( \dfrac{n}{d}\right) K(n,d)}
Using the unitary convolution Bakas establishes a pedagogically illuminating case, the mixing of two identical boson Riemann gases. He shows that
Z_\infty\star Z_\infty=\zeta ^2(s)\zeta(2s)=\dfrac{\zeta (s)}{\zeta(2s)}\zeta (s)=Z_2Z_\infty=Z_FZ_B
This result has an amazing meaning. Two identical boson Riemann gases interacting with each other through the unitary twisting, are equivalent to mixing a fermion Riemann gas with a boson Riemann gas which do not interact with each other. Therefore, one of the original boson components suffers a transmutation/mutation into a fermion gas!
Remark (I): the Möbius function, which is the identity function with respect to the \star operation (i.e. free mixing), reappears in supersymmetric quantum field theories as a possible representation of the (-1)^F operator, where F is the fermion number operator! In this context, the fact that \mu (n)=0 for square-free numbers is the manifestation of the Pauli exclusion principle itself! In any QFT with fermions, (-1)^F is a unitary, hermitian, involutive operator where F is the fermion number operator and is equal to the sum of the lepton number plus the baryon number, i.e., F=B+L, for all particles in the Standard Model and some (most of) SUSY QFT. The action of this operator is to multiply bosonic states by 1 and fermionic states by -1. This is always a global internal symmetry of any QFT with fermions and corresponds to a rotation by an angle 2\pi. This splits the Hilbert space into two superselection sectors. Bosonic operators commute with (-1)^F whereas fermionic operators anticommute with it. This operator really is, therefore, more useful in supersymmetric field theories.
Remark (II): potential attacks on the Riemann Hypothesis may lead to advances in physics and/or mathematics, i.e., progress in Physmatics!
Remark (III): the energy of the ground state is taken to be zero and the energy spectrum of the excited state is E(n)=E_0\ln (p_n), where p_n, n=2,3,5,\ldots, runs over the prime numbers. Let N and E denote now the number of particles in the ground state and the total energy of the system, respectively. The fundamental theorem of arithmetic allows only one excited state configuration for a given energy
E=\ln (n) \;\; mod E_0
where n is an integer. It immediately means that this gas preserves its quantum nature at any temperature, since only one quantum state is permitted to be occupied. The number fluctuation of any state (even the ground state) is therefore zero. In contrast, the changes in the number of particles in the ground state \delta n_0 predicted by the canonical ensemble is a smooth non-vanishing function of the temperature, while the grand-canonical ensemble still exhibits a divergence. This discrepancy between the microcanonical (combinatorial) and the other two ensembles remains even in the thermodynamic limit.
One could argue that the Riemann gas is fictitious/unreal and its spectrum is unrealisable/unphysical. However, we, physicists, think otherwise, since the spectrum E_N=\ln (N) does not increase with N more rapidly than n^2, therefore the existence of a quantum mechanical potential supporting this spectrum is possible (e.g., via inverse scattering transform or supplementary tools). And of course the question is: what kind of system has such an spectrum?
Some temptative ideas for the potential based on elementary Quantum Mechanics will be given in the next section.
Instead of considering the free Riemann gas, we could ask to Quantum Mechanics if there is some potential providing the logarithmic spectrum of the previous section. Indeed, there exists such a potential. Let us factorize any natural number in terms of its prime “atoms”:
N=p_1^{n_1}p_2^{n_2}\cdots p_m^{n_m}
Take the logarithm
\log N=\log \left(p_1^{n_1}p_2^{n_2}\cdots p_m^{n_m}\right)=n_1\log p_1+n_2\log p_2+\ldots+n_m\log p_m
\displaystyle{\log N=\sum_{i=1}^{m}n_i\log p_i}
where p_i are prime numbers (note that if we include “1” as a prime number it gives a zero contribution to the sum).
Now, suppose a logarithmic oscillator spectrum, i.e.,
\varepsilon_i=\log p_i with p_i=(1),2,3,5,7,11,13,\ldots,137,\ldots,\infty
with i=0,1,2,3,4,\ldots,\infty. In order to have a “riemann gas”/riemannium, we impose an spectrum labelled in the following fashion
\varepsilon_s =\log (2s+1) \forall s=0,1,2,3,\ldots,\infty
Equivalently, we could also define the spectrum of interacting riemannium gas as
\varepsilon_s=\log (s) \forall s=1,2,3,\ldots,\infty
In addition to this, suppose the next quantum postulates:
1st. Logarithmic potential:
V(x)=V_0\ln\dfrac{\vert x\vert}{L} with positive constants V_0, L>0
From the physical viewpoint, the positive constant V_0 means repulsive interaction (force).
2nd. Bohr-Sommerfeld quantization rule:
a) \displaystyle{I=\dfrac{1}{2\pi}\oint pdx=\hbar \left(s+\dfrac{1}{2}\right)}\; \forall s=0,1,\ldots,\infty
or equivalently we could also get
b) \displaystyle{I=\dfrac{1}{2\pi}\oint pdx=\hbar s}\; \forall s=1,2,\ldots,\infty
3rd. Turning point condition:
x_s=L\exp \left(\dfrac{\varepsilon_s}{V_0}\right)
In the case of 2a) we would deduce that
\displaystyle{\dfrac{\hbar \pi}{2}\left(s+\dfrac{1}{2}\right)=\int_0^{x_s}dx\sqrt{2m\left(\varepsilon_s-V_0\ln \dfrac{x}{L}\right)}}
\displaystyle{\dfrac{\hbar \pi}{2}\left(s+\dfrac{1}{2}\right)=\int_0^{x_x}dx\sqrt{-\ln \left(\dfrac{x}{x_s}\right)}=\sqrt{2mV_0}x_s\Gamma \left(\dfrac{3}{2}\right)}
and then
x_s=\sqrt{\dfrac{\pi}{2mV_0}}\hbar \left( s+\dfrac{1}{2}\right)
Then, using the turning point condition in this equation, we finally obtain
\boxed{\dfrac{\varepsilon_s}{V_0}=\ln (2s+1)+\ln \left(\dfrac{\hbar}{2L}\sqrt{\dfrac{\pi}{2mV_0}}\right)} \forall s=0,1,\ldots,\infty
In the case of 2b) we would obtain
\boxed{\dfrac{\varepsilon_s}{V_0}=\ln (s)+\ln \left(\dfrac{\hbar}{L}\sqrt{\dfrac{\pi}{2mV_0}}\right)} \forall s=1,2,\ldots,\infty
In summary, the logarithmic potential provides a model for the interacting Riemann gas!
Massive elementary particles (with mass m) can be understood as composite particles made of confined particles moving with some energy pc inside a sphere of radius R. We note that we do not define futher properties of the constituent particles (e.g., if they are rotating strings, particles, extended objects like branes, or some other exotic structure moving in circular orbits or any other pattern as trajectory inside the composite particle).
Let us make the hypothesis that there is some force F needed to counteract the centrifugal force F_c=\dfrac{\kappa c^2}{R}. The centrifugal force is equal to pc/R, i.e., the balancing force F is F=pc/R. Then, assuming the two forces are equal in magnitude, we get
where A_1 is some constant, and that equation holds regardless the origin of the interaction. The potentail energy U necessary to confine a constituent particle will be, in that case,
\displaystyle{U=\int \dfrac{A_1}{R}dR=A_1\int \dfrac{1}{R}dR=A_1\ln \dfrac{R}{R_\star}}
with R_\star some integration constant to be determined later. The center of mass of the “elementary particle”, truly a composite particle, from the external observer and the mass assinged to the composited system is:
The logarithmic potential energy is postulated to be proportional to m/R, and it provides
U=\dfrac{A_2 m}{R}
with A_2 is another constant. In fact, A_1, A_2 are parameters that don’t depend, a priori, on the radius R but on the constitutent particle properties and coupling constants, respectively. Indeed, for instance, we could set and fix the ratio A_2/A_1 to the constant c^2/G_N, where G_N is the gravitational constant. However, such a constraint is not required from first principles or from a clear physical reason. From the following equations:
m=\dfrac{\hbar}{cR} and U=\dfrac{A_2 m}{R}
we get \boxed{U=\dfrac{A_2 \hbar}{cR^2}}
Quantum Mechanics implies that the angular momentum should be quantized, so we can make the following generalization
U=\dfrac{A_2 m}{cR^2}\rightarrow U_n=\dfrac{A_2 \hbar}{cR_n^2}=\dfrac{A_2 (n+1)\hbar}{cR_0^2}
\forall n=0,1,2,\ldots,\infty
so R_n^2=\dfrac{R_0^2}{n+1}\leftrightarrow R_n=\dfrac{R_0}{\sqrt{n+1}}
Using the previous integral and this last result, we obtain
\ln \left(\dfrac{R_\star}{R_0}\right)=-(n+1)\dfrac{R_\star^2}{R_0^2}
This is due to the fact that U_n=A_2\dfrac{\hbar}{cR_n^2}=\dfrac{A_2\hbar (n+1)}{cR_0^2} and U=A_2\ln \dfrac{R}{R_\star}
Combining these equations, we deduce the value of R_\star as a function of the parameters A_1,A_2
\boxed{R_\star=\sqrt{\dfrac{A_2\hbar}{A_1 c}}}
The ratio R_\star/R_0 can be calculated from the above equations as well, since
\ln \left(\dfrac{R_\star}{R_0}\right)=-(n+1)\dfrac{R_\star^2}{R_0^2} for the case n=0 implies that
\ln \left(\dfrac{R_\star}{R_0}\right)=-\dfrac{R_\star^2}{R_0^2}, and after exponentiation, it yields
Introducing the variable x=\dfrac{R_\star}{R_0} we have to solve the equation x=e^{-x^2}
The solution is \phi=\dfrac{1}{x}=1.53158 from which the relationship between R_\star and R_0 can be easily obtained. Indeed, we can make more deductions from this result. From \ln \phi=1/\phi^2, then
R_n=R_\star e^{(n+1)\ln\phi}
If we take R_\star=\alpha R_0, with R_0=\hbar/mc, then
\alpha=m_0\sqrt{\dfrac{A_2 c}{A_1\hbar}} so
R_n=R_0e^{K\varphi_n} with K=\dfrac{1}{2\pi}\ln \phi and \varphi_n=2\pi (n+1)+\varphi_s \varphi_s=2\pi \left(\dfrac{\ln \alpha}{\ln \phi}\right)
Equivalently, the masses would be dynamically generated from the above equations, since
m_n=\dfrac{\hbar}{R_nc} and m_0=\dfrac{\hbar}{R_0c}
so we would deduce a particle spectrum given by a logarithmic spiral, through the equation
Remark: The shift K\rightarrow -K implies that the spiral would begin with m_0 as the lowest mass and not the biggest mass, turning the spiral from inside to the outside region and vice versa.
In summary, the logarithmic oscillator is also related to some kind of confined particles and it provides a toy model of confinement!
Is the link between classical statistical mechanics and Riemann zeta function unique or is it something more general? C. Tsallis explained long ago the connection of non-extensive Tsallis entropies an the Riemann zeta function, given supplementary arguments to support the idea of a physical link between Physics, Statistical Mechanics and the Riemann hypothesis. His idea is the following.
A) Consider the harmonic oscillator with spectrum
E_n=\hbar\omega n
E(n),\;\forall n=0,1,2,\ldots,\infty, are the H.O. eigenenergies.
B) Consider the Tsallis partition function
\displaystyle{Z_q (\beta )=\sum_{n=0}^{\infty}e_q^{-\beta E_n}=\sum_{n=0}^{\infty}e_q^{-\beta\hbar\omega n}}
where q>1 and the deformed q-exponential is defined as
e_q^z\equiv \left[1+(q-1)z\right]_+^{\frac{1}{1-q}}
and \left[\alpha\right]=\begin{cases}\alpha, \alpha>0\\ 0,otherwise\end{cases}
and the inverse of the deformed exponential is the q-logarithm
\ln_q z=\dfrac{z^{1-q}-1}{1-q}
It implies that
\boxed{\displaystyle{Z_q=\sum_{n=0}^{\infty}\dfrac{1}{\left[1+(q-1)\beta\hbar\omega n\right]^{\frac{1}{q-1}}}=\dfrac{1}{\left[(q-1)\beta\hbar \omega\right]^{\frac{1}{q-1}}}\sum_{n=0}^{\infty}\dfrac{1}{\left[\left(\dfrac{1}{(q-1)\beta\hbar\omega}\right)+n\right]^{\frac{1}{q-1}}}}}
Now, defining the Hurwitz zeta function as:
\displaystyle{\zeta (s,Q)=\sum_{n=0}^{\infty}\dfrac{1}{\left(Q+n\right)^{s}}=\dfrac{1}{Q^s}+\sum_{n=1}^{\infty}\dfrac{1}{\left(Q+n\right)^{s}}}
the last equation can be rewritten in a simple and elegant way:
\boxed{\displaystyle{Z_q=\dfrac{1}{\left[(q-1)\beta\hbar\omega\right]^{\frac{1}{q-1}}}\zeta \left(\dfrac{1}{q-1},\dfrac{1}{(q-1)\beta\hbar\omega}\right)}}
This system can be called the Tsallis gas or the Tsallisium. It is a q-deformed version (non-extensive) of the free Riemann gas. And it is related to the harmonic oscillator! The issue, of course, is the problematic limit q\rightarrow 1.
In the limit Q\rightarrow 1 we get the Riemann zeta function from the Hurwitz zeta function:
\displaystyle{\zeta (s,1)\equiv \zeta (s)=\sum_{n=1}^{\infty}n^{-s}=\sum_{n=1}^{\infty}\dfrac{1}{n^s}=\prod_{p=2}^{\infty}\dfrac{1}{1-p^{-s}}=\prod_{p}\dfrac{1}{1-p^{-s}}}
The above equation, the partition function of the Tsallis gas/Tsallisium, connects directly the Riemann zeta function with Physics and non-extensive Statistical Mechanics. Indeed, C.Tsallis himself dedicated a nice slide with this theme to M.Berry:
Remark (I): The link between Riemann zeta function and the free Riemann gas/the interacting Riemann gas goes beyond classical statistical mechanics and it also appears in non-extensive statistical mechanics!
Remark (II): In general, the Riemann hypothesis is entangled to the theory of harmonic oscillators with non-extensive statistical mechanics!
For readers not familiarized with Tsallis generalized entropies, I would like to expose you the main definitions of such a generalization of classical statistical entropy (Boltzmann-Gibbs-Shannon), in a nutshell! I have to discuss more about this kind of statistical mechanics in the future, but today, I will only anticipate some bits of it.
Tsallis entropy (and its Statistical Mechanics/Thermodynamics) is based on the following entropy functionals:
1st. Discrete case.
\boxed{\displaystyle{S_q=k_B\dfrac{1-\displaystyle{\sum_{i=1}^W p_i^q}}{q-1}=-k_B\sum_{i=1}^Wp_i^q\ln_q p_i=k_B\sum_{i=1}^Wp_i\ln_q \left(\dfrac{1}{p_i}\right)}}
plus the normalization condition \boxed{\displaystyle{\sum_{i=1}^Wp_i=1}}
2nd. Continuous case.
\boxed{\displaystyle{S_q=-k_B\int dX\left[p(X)\right]^q\ln_q p(X)=k_B\int dX p(X)\ln_q\dfrac{1}{p(X)}}}
plus the normalization condition \boxed{\displaystyle{\int dX p(X)=1}}
3rd. Quantum case. Tsallis matrix density.
\boxed{\displaystyle{S_q=-k_BTr\rho^q\ln _q\rho\equiv k_BTr\rho \ln_q\dfrac{1}{\rho}}}
plus the normatlization condition \boxed{Tr\rho=1}
In all the three cases above, we have defined the q-logarithm as \ln_q z\equiv\dfrac{z^{1-q}-1}{q-1}, \ln_1 z\equiv \ln z, and the 3 Tsallis entropies satisfy the non-additive property:
\boxed{\dfrac{S_q(A+B)}{k_B}=\dfrac{S_q (A)}{k_B}+\dfrac{S_q (B)}{k_B}+(1-q)\dfrac{S_q (A)}{k_B}\dfrac{S_q (B)}{k_B}}
Theoretical physicsts suspect that Physics of the spacetime at the Planck scale or beyond will change or will be meaningless. There, the spacetime notion we are familiarized to loose its meaning. Even more, we could find those changes in the fundamental structure of the Polyverse to occur a higher scales of length. Really, we don’t know yet where the spacetime “emerges” as an effective theory of something deeper, but it is a natural consequence from our current limited knowledge of fundamental physics. Indeed, it is thought that the experimental device making measurements and the experimenter can not be distinguished at Planck scale. At Planck scale, we can not know at this moment how the framework of cosmology and the Hilbert space tool of Quantum Mechanics could be obtained with some unified formalism. It is one of the challenges of Quantum Gravity.
Many people and scientists think that geometry and topology of sub-Planckian lengths should not have any relation with our current geometry or topology. We say and believe that geometry, topology, fields and the main features of macroscopic bodies “emerge” from the ultra-Planckian and “subquantum” realm. It is an analogue to the colours of the rainbow emerging from the atoms or how Thermodynamics emerge from Statistical Mechanics.
There are many proposed frameworks to go beyond the usual notions of space and time, but the p-adic analysis approach is a quite remarkable candidate, having several achievements in its favor.
Motivations for a p-adic and adelic approaches as the ultimate substructure of the microscopic world arise from:
1) Divergences of QFT are believed to be absent with such number structures. Renormalization can be found to be unnecessary.
2) In an adelic approach, where there is no prime with special status in p-adic analysis, it might be more natural and instructive to work with adeles instead a pure p-adic approach.
3) There are two paths for a p-adic/adelic QM/QFT theory. The first path considers particles in a p-adic potential well, and the goal is to find solutions with smoothly varying complex-valued wavefunctions. There, the solutions share certain kind of familiarity from ordinary life and ordinary QM. The second path allows particles in p-adic potential wells, and the goal is to find p-adic valued wavefunctions. In this case, the physical interpretation is harder. Yet the math often exhibits surprising features and properties, and some people are trying to explores those novel and striking aspects.
Ordinary real (or even complex as well) numbers are familiar to everyone. Ostroswski’s theorem states that there are essentially only two possible completions of the rational numbers ( “fractions” you do know very well). The two options depend on the metric we consider:
1) The real numbers. One completes the rationals by adding the limit of all Cauchy sequences to the set. Cauchy sequences are series of numbers whose elements can be arbitrarily close to each other as the sequence of numbers progresses. Mathematically speaking, given any small positive distance, all but a finite number of elements of the sequence are less than that given distance from each other. Real numbers satisfy the triangle inequality \vert x+y\vert \leq \vert x\vert +\vert y\vert.
2) The p-adic numbers. The completions are different because of the two different ways of measuring distance. P-adic numbers satisfy an stronger version of the triangle inequality, called ultrametricity. For any p-adic number is shows
\vert x+y\vert _p\leq \mbox{max}\{\vert x\vert_p ,\vert y \vert_p\}
Spaces where the above enhanced triangle inequality/ultrametricity arises are called ultrametric spaces.
In summary, there exist two different types of algebraic number systems. There is no other posible norm beyond the real (absolute) norm or the p-adic norm. It is the power of Mathematics in action.
Then, a question follows inmediately. How can we unify such two different notions of norm, distance and type of numbers. After all, they behave in a very different way. Tryingo to answer this questions is how the concept adele emerges. The ring of adeles is a framework where we consider all those different patterns to happen at equal footing, in a same mathematical language. In fact, it is analogue to the way in which we unify space and time in relativistic theories!
Adele numbers are an array consisting of both real (complex) and p-adic numbers! That is,
\mathbb{A}=\left( x_\infty, x_2,x_3,x_5,\ldots,x_p,\ldots\right)
where x_\infty is a real number and the x_p are p-adic numbers living in the p-adic field \mathbb{Q}_p. Indeed, the infinity symbol is just a consequence of the fact that real numbers can be thought as “the prime at infinity”. Moreover, it is required that all but finitely many of the p-adic numbers x_p lie in the entire p-adic set \mathbb{Z}_p. The adele ring is therefore a restricted direct (cartesian) product. The idele group is defined as the essentially invertible elements of the adelic ring:
\mathbb{I}=\mathbb{A}^\star =\{ x\in \mathbb{A}, \mbox{where}\;\; x_\infty \in \mathbb{R}^{\star} \;\; \mbox{and} \;\; \vert x_p\vert _p=1,\; \mbox{for all but finitely many primes p.}\}
We can define the calculus over the adelic ring in a very similar way to the real or complex case. For instance, we define trigonometric functions, e^X, logarithms \log (x) and special functions like the Riemann zeta function. We can also perform integral transforms like the Mellin of the Fourier transformation over this ring. However, this ring has many interesting properties. For example, quadratic polynomials obey the Hasse local-global principle: a rational number is the solution of a quadratic polynomial equation if and only if it has a solution in \mathbb{R} and \mathbb{Q}_p for all primes p. Furthermore, the real and p-adic norms are related to each other by the remarkable adelic product formula/identity:
\displaystyle{\vert x\vert_\infty \prod_p\vert x\vert_p=1}
and where x is a nonzero rational number.
Beyond complex QM, where we can study the particle in a box or in a ring array of atoms, p-adic QM can be used to handle fractal potential wells as well. Indeed, the analogue Schrödinger equation can be solved and it has been useful, for instance, in the design of microchips and self-similar structures. It has been conjectured by Wu and Sprung, Hutchinson and van Zyl,here http://arXiv.org/abs/nlin/0304038v1 , that the potential constructed from the non-trivial Riemann zeroes and prime number sequences has fractal properties. They have suggested that D=1.5 for the Riemann zeroes and D=1.8 for the prime numbers. Therefore, p-adic numbers are an excellent method for constructing fractal potential wells.
By the other hand, following Feynman, we do know that path integrals for quantum particles/entities manifest fractal properties. Indeed we can use path integrals in the absence of a p-adic Schrödinger equation. Thus, defining the adelic version of Feynman’s path integral is a necessary a fundamental object for a general quantum theory beyond the common textbook version. However, we need to be very precise with certain details. In particular, we have to be careful with the definition of derivatives and differentials in order to do proper calculations. Indeed we can do it since both, the adelic and idelic rings have a well defined translation-invariant Haar measure
Dx=dx_\infty dx_2dx_3\cdots dx_p\cdots and Dx^\star=dx_\infty^\star dx_2^\star dx_3^\star\cdots dx_p^\star\cdots
These measures provide a way to compute Feynman path integrals over adelic/idelic spaces. It turns out that Gaussian integrals satisfy a generalization of the adelic product formula introduced before, namely:
\displaystyle{\int_{\mathbb{Q}_p}\chi_\infty (ax_\infty^2+bx_\infty)dx_\infty \prod_p \int_{\mathbb{Q}_p}\chi_p (ax_p^2+bx_p)dx_p=1}
where \chi is an additive character from the adeles to complex numbers \mathbb{C} given by the map:
\displaystyle{\chi (x)=\chi_\infty (x_\infty)\prod_p \chi_p (x_p)\rightarrow e^{-2\pi ix_\infty}\prod_p e^{2\pi i\{p\}_p}}
and \{x_p\}_p is the fractional part of x_p in the ordinary p-adic expression for x. This can be thought of as a strong generalization of the homomorphism \mathbb{Z}/\mathbb{Z}_n\rightarrow e^{2\pi i/n}.Then, the adelic path integral, with input parameters in the adelic ring \mathbb{A} and generating complex-valued wavefunctions follows up:
\displaystyle{K_{\mathbb{A}} (x'',t'';x',t') =\prod_\alpha \int_{(x' _\alpha ,t' _\alpha)}^{(x'' _\alpha ,t'' _\alpha)}\chi_\alpha \left(-\dfrac{1}{h}\int_{t' _\alpha}^{t''_\alpha}L(\dot{q} _\alpha ,q_\alpha ,t_\alpha )dt_\alpha \right) Dq_\alpha}
The eigenvalue problem over the adelic ring is given by:
U(t) \psi_\alpha (x)=\chi (E_\alpha (t))\psi_\alpha (x)
where U is the time-development operator, \psi_\alpha are adelic eigenfunctions, and E_\alpha is the adelic energy. Here the notation has been simplified by using the subscript \alpha, which stands for all primes including the prime at infinity. One notices the additive character \chi which allows these to be complex-valued integrals. The path integral can be generalized to p-adic time as well, i.e., to paths with fractal behaviour!
How is this p-adic/adelic stuff connected to the Riemannium an the Riemann zeta function? It can be shown that ground state of adelic quantum harmonic oscillator is
\displaystyle{\vert 0\rangle =\Psi_0 (x)=2^{1/4}e^{-\pi x_\infty^2}\prod_p \Omega (\vert x_p\vert_p)}
where \Omega \left(\vert x_p \vert _p\right) is 1 if \vert x_p\vert_p is a p-adic integer and 0 otherwise. This result is strikingly similar to the ordinary complex-valued ground state. Applying the adelic Mellin transform, we can deduce that
\Phi (\alpha)=\sqrt{2}\Gamma \left(\dfrac{\alpha}{2}\right)\pi^{-\alpha/2}\zeta (\alpha)
where \Gamma, \zeta are, respectively, the gamma function and the Riemann zeta function. Due to the Tate formula, we get that
\Phi (\alpha)=\Phi (1-\alpha).
and from this the functional equation for the Riemann zeta function naturally emerges.
In conclusion: it is fascinating that such simple physical system as the (adelic) harmonic oscillator is related to so significant mathematical object as the Riemann zeta function.
The Veneziano amplitude is also related to the Riemann zeta function and string theory. A nice application of the previous adelic formalism involves the adelic product formula in a different way. In string theory, one computes crossing symmetric Veneziano amplitudesA(a,b) describing the scattering of four tachyons in the 26d open bosonic string. Indeed, the Veneziano amplitude can be written in terms of Riemann zeta function in this way:
A_\infty (a,b)=g_\infty^2 \dfrac{\zeta (1-a)}{\zeta (a)}\dfrac{\zeta (1-b)}{\zeta (b)}\dfrac{\zeta (1-c)}{\zeta (c)}
These amplitudes are not easy to calculate. However, in 1987, an amazingly simple adelic product formula for this tachyonic scattering was found to be:
\displaystyle{A_\infty (a,b)\prod_p A_p (a,b)=1}
Using this formula, we can compute and calculate the four-point amplitudes/interacting vertices at the tree level exactly, as the inverse of the much simpler p-adic amplitudes. This discovery has generated a quite a bit of activity in string theory, somewhat unknown, although it is not very popular as far as I know. Moreover, the whole landscape of the p-adic/adelic framework is not as easy for the closed bosonic string as the open bosonic strings (note that in a p-adic world, there is no “closure” but “clopen” segments instead of naive closed intervals). It has also been a source of controversy what is the role of the p-adic/adelic stuff at the level of the string worldsheet. However, there is some reasearch along these lines at current time.
Another nice topic is the vacuum energy and its physical manifestations. There are some very interesting physical effects involving the vacuum energy in both classical and quantum physics. The most important effects are the Casimir effect (vacuum repulsion between “plates”) , the Schwinger effect ( particle creation in strong fields) , the Unruh effect ( thermal effects seen by an uniformly accelerated observer/frame) , the Hawking effect (particle creation by Black Holes, due to Black Hole Thermodynamcis in the corresponding gravitational/accelerated environtment) , and the cosmological constant effect (or vacuum energy expanding the Universe at increasing rate on large scales. Itself, does it gravitate?). Riemann zeta function and its generalizations do appear in these 4 effects. It is not a mere coincidence. It is telling us something deeper we can not understand yet. As an example of why zeta function matters in, e.g., the Casimir effect, let me say that zeta function regularizes the following general sum:
\boxed{\displaystyle{\sum_{n\in \mathbb{Z}}\vert n\vert^d =2\zeta (-d)}}
Remark: I do know that I should have likely said “the cosmological constant problem”. But as it should be solved in the future, we can see the cosmological constant we observe ( very, very smaller than our current QFT calculations say) as “an effect” or “anomaly” to be explained. We know that the cosmological constant drives the current positive acceleration of the Universe, but it is really tiny. What makes it so small? We don’ t know for sure.
Remark(II): What are the p-adic strings/branes? I. Arefeva, I. Volovich and B. Dravogich, between other physicists from Russia and Eastern Europe, have worked about non-local field theories and cosmologies using the Riemann zeta function as a model. It is a relatively unknown approach but it is remarkable, very interesting and uncommon. I have to tell you about these works but not here, not today. I went too far, far away in this log. I apologize…
I have explained why I chose The Spectrum of Riemannium as my blog name here and I used the (partial) answer to explain you some of the multiple connections and links of the Riemann zeta function (and its generalizations) with Mathematics and Physics. I am sure that solving the Riemann Hypothesis will require to answer the question of what is the vibrating system behind the spectral properties of Riemann zeroes. It is important for Physmatics! I would say more, it is capital to theoretical physics as well.
Let me review what and where are the main links of the Riemann zeta function and zeroes to Physmatics:
1) Riemann zeta values appear in atomic Physics and Statistical Physics.
2) The Riemannium has spectral properties similar to those of Random Matrix Theory.
3) The Hilbert-Polya conjecture states that there is some mysterious hamiltonian providing the zeroes. The Berry-Keating conjecture states that the “quantum” hamiltonian corresponding to the Riemann hypothesis is the corresponding or dual hamiltonian to a (semi)classical hamiltonian providing a classically chaotic dynamics.
4) The logarithmic potential provides a realization of certain kind of spectrum asymptotically similar to that of the free Riemann gas. It is also related to the issue of confinement of “fundamental” constituents inside “elementary” particles.
5) The primon gas is the Riemann gas associated to the prime numbers in a (Quantum) Statistical Mechanics approach. There are bosonic, fermionic and parafermionic/parabosonic versions of the free Riemann gas and some other generalizations using the Beurling gas and other tools from number theory.
6) The non-extensive Statistical Mechanics studied by C. Tsallis (and other people) provides a link between the harmonic oscillator and the Riemann hypothesis as well. The Tsallisium is the physical system obtained when we study the harmonic oscillator with a non-extensive Tsallis approach.
7) An adelic approach to QM and the harmonic oscillator produces the Riemann’s zeta function functional equation via the Tate formula. The link with p-adic numbers and p-adic zeta functions reveals certain fractal patterns in the Riemann zeroes, the prime numbers and the theory behind it. The periodicity or quasiperiodicity also relates it with some kind of (quasi)crystal and maybe it could be used to explain some behaviour or the prime numbers, such as the one behind the Goldbach’s conjecture.
8) A link between entropy, information theory and Riemann zeta function is done through the use of the notion of group entropy. Connections between the Veneziano amplitudes, tachyons, p-adic numbers and string theory arise after the Veneziano amplitude in a natural way.
9) Riemann zeta function also is used in the regularization/definition of infinite determinants arising in the theory of differential operators and similar maps. Even the generalization of this framework is important in number theory through the uses of generalizations of the Riemann zeta function and other arithmetical functions similar to it. Riemann zeta function is, thus, one of the simplest examples of arithmetical functions.
10) There are further links of the Riemann zeta function and “vacuum effects” like the Schwinger effect ( pair creating in strong fields) or the Casimir effect ( repulsive/atractive forces between close objects with “nothing” between them). Riemann zeta function is also related to SUSY somehow, either by the striking similarity between the Dirichlet eta function used in Fermi-Dirac statistics or directly with the explicit relationship between the Möbius function and the (-1)^F operator appearing in supersymmetric field theories.
In summary, Riemann zeta function is ubiquitious and it appears alone or with its generalizations in very different fields: number theory, quantum physics, (semi)classical physics/dynamics, (quantum) chaos theory, information theory, QFT, string theory, statistical physics, fractals, quasicrystals, operator theory, renormalization and many other places. Is it an accident or is it telling us something more important? I think so. Zeta functions are fundamental objects for the future of Physmatics and the solution of Riemann Hypothesis, perhaps, would provide such a guide into the ultimate quest of both Physics and Mathematics (Physmatics) likely providing a complete and consistent description of the whole Polyverse.
Then, the main unanswered questions to be answered are yet:
A) What is the Riemann zeta function? What is the riemannium/tsallisium and what kind of physical system do they represent really? What is the physical system behind the Riemann non-trivial zeroes? What does it mean for the Riemann zeroes arising from the Riemann zeta function generalizations in form of L-functions?
B) What is the Riemann-Hilbert-Polya operator? What is the space over the Riemann operator is acting?
C) Are Riemann zeta function and its generalization everywhere as they seem to be inside the deepest structures of the microscopic/macroscopic entities of the Polyverse?
I suppose you will now understand better why I decided to name my blog as The Spectrum of Riemannium…And there are many other reasons I will not write you here since I could reveal my current research.
However, stay tuned!
Physmatics is out there and everywhere, like fractals, zeta functions and it is full of lots of wonderful mathematical structures and simple principles!
About these ads
7 Comments on “LOG#050. Why riemannium?”
1. Uauhh¡¡ A really complex post, but very interesting. I´d like to add my two cents.
It´s a video where Zwiebach(string theory physicist) explains very didactically the zeta function.[Sorry, it´s in spanish]
2. amarashiki says:
The music of prime numbers, by M.Du Satoy.
3. It is surprising (or not) that H. M. Edwards in his monograph “Riemann’s Zeta Function” (a compulsory book for anyone going beyond light introductions to the topic) does not include a review of the Polya/Hilbert conjecture. The book is from 1974, so it seems that it was really a forgotten topic.
• amarashiki says:
I own that book, and yes, it is surprising, but only in part. Remember that, as far as I know, the Riemann hypothesis and the Hilbert-Polya approach were not so popular in those times. It was a time where numerical computation and computers were in their early stages and becoming more popular, and also a time were the random matrix theory appeared, or the numerical computation of Riemann zeroes was the main task… You know…Anyway, no one has been able till now of guessing the Riemann operator behind the Hilbert-Polya approach/idea. I have tried myself to learn everything about it, every trial and proposal, older and newer,… My conclusion: the operator has to exist but it must be highly non-trivial, otherwise the Berry-Keating XP conjecture or some modification like some proposal by Connes, Okubo, Sierra-Towsend and others would have succeeded. I have my own suspitions about how to focus the problem, but I am in a point where I need to create “new Mathematics” and you know it is not an easy task for a humble theoretical physicist like me, living between two worlds apart.
4. Can someone help: from the functional equation,
I get ζ(0) = ζ(1), which should be infinite not, -1/2?
• amarashiki says:
Strictly speaking, you can not apply the functional equation to s=1 and 1-s=1-1=0 because the Riemann zeta function has a pole in s=1 (it is infinite there). The fact that you can calculate \zeta (1) and \zeta (0) is due to the magic of analytic continuation in the whole complex plane, and hence you guess “a regularized” value for an infinite sum like those zeta values or similar results.
Leave a Reply
WordPress.com Logo
Twitter picture
Facebook photo
Google+ photo
Connecting to %s |
fdbc3b5c4451c0e6 | Latest Post
Cultural Intolerance
• Atomic Vortex Theory - Kelvin You can see in the quotes below from Wiki how the truth of chaos and fluid dynamics in particle vortices was evident in 1877 but was then buried repeatedly in the next 100 years – for the wiki article to state that Kelvins Atomic Vortex Theory was ‘wrong-headed’ and replaced by Quantum physics with its innate particle wave duality paradox which no-one can solve without recourse to the aether which was made taboo or Kelvins chaos vortices. How for 100 years we are stuck with an unworkable particle physics model called QED with a Central Paradox instead of using a more realistic and natural general systems theory approach to the explanation of matter in terms of natural chaos forms. http://www.youtube.com/watch?v=QgNq3n508ec Kelvin in 1902 may not have had the computers to work on chaos theory but in 1992 at the Santa Fe Institute supercomputing and complexity modelling although seeming to toe the prescribed line re NOT articulating the newly discovered chaos law of emergence and its contradiction of the 2nd law of thermodynamics, predicting rewarming after the Big bang and not heat death. http://www.andrewhennessey.co.uk/thenewphysics.pdf At this time Nikola Tesla was attempting to print his Theory of Environmental Energy which indicated that the aether would outpour free energy if disturbed by rotating magnets/magnetic field, empirical proof of that these days coming from the spinning NASA satellites that get more energy than they appear to have been entitled to by the known (or allowed) issues in physics when they engage a gravitational slingshot around a planet and its magnetosphere. ‘Real Scientists’ today are attempting to sell us the Higgs Boson or ‘God Particle’ as the final ultimate smallest building block – homogenous, identical in every detail, reproducible in every detail and as standard as a billiard ball. Real Chaos Theory though would suggest that everything every one item in the Universe was as unique as a fingerprint, with no two identical items, all having variations to some degree, and that also the aether and its array of particles is infinitely divisible with no upper or lower limit on scale or function in any given context. Here are the wiki notes on Vortex Dynamics which give an indication of the reasonable steps in natural modelling that chaos and fluid dynamics were producing before Science with a big ‘S’ in 1938 in Copenhagen decided that a physics paradigm with an inexplicable paradox at its heart was better than anything that natural events could teach us. Vortex dynamics is a vibrant subfield of fluid dynamics, commanding attention at major scientific conferences and precipitating workshops and symposia that focus fully on the subject. A curious diversion in the history of vortex dynamics was the vortex atom theory of William Thomson, later Lord Kelvin. His basic idea was that atoms were to be represented as vortex motions in the ether. This theory predated the quantum theory by several decades and because of the scientific standing of its originator received considerable attention. Many profound insights into vortex dynamics were generated during the pursuit of this theory. Other interesting corollaries were the first counting of simple knots by P. G. Tait, today considered a pioneering effort in graph theory, topology and knot theory. Ultimately, Kelvin's vortex atom was seen to be wrong-headed but the many results in vortex dynamics that it precipitated have stood the test of time. Kelvin himself originated the notion of circulation and proved that in an inviscid fluid circulation around a material contour would be conserved. This result — singled out by Einstein as one of the most significant results of Kelvin's work[citation needed] — provided an early link between fluid dynamics and topology. The history of vortex dynamics seems particularly rich in discoveries and re-discoveries of important results, because results obtained were entirely forgotten after their discovery and then were re-discovered decades later. Thus, the integrability of the problem of three point vortices on the plane was solved in the 1877 thesis of a young Swiss applied mathematician named Walter Gröbli. In spite of having been written in Göttingen in the general circle of scientists surrounding Helmholtz and Kirchhoff, and in spite of having been mentioned in Kirchhoff's well known lectures on theoretical physics and in other major texts such as Lamb's Hydrodynamics, this solution was largely forgotten. A 1949 paper by the noted applied mathematician J. L. Synge created a brief revival, but Synge's paper was in turn forgotten. A quarter century later a 1975 paper by E. A. Novikov and a 1979 paper by H. Aref on chaotic advection finally brought this important earlier work to light. The subsequent elucidation of chaos in the four-vortex problem, and in the advection of a passive particle by three vortices, made Gröbli's work part of "modern science". Another example of this kind is the so-called "localized induction approximation" (LIA) for three-dimensional vortex filament motion, which gained favor in the mid-1960s through the work of Arms, Hama, Betchov and others, but turns out to date from the early years of the 20th century in the work of Da Rios, a gifted student of the noted Italian mathematician T. Levi-Civita. Da Rios published his results in several forms but they were never assimilated into the fluid mechanics literature of his time. In 1972 H. Hasimoto used Da Rios' "intrinsic equations" (later re-discovered independently by R. Betchov) to show how the motion of a vortex filament under LIA could be related to the non-linear Schrödinger equation. This immediately made the problem part of "modern science" since it was then realized that vortex filaments can support solitary twist waves of large amplitude. For thousands of years, knots have been used for basic purposes such as recording information, fastening and tying objects together. Over time people realized that different knots were better at different tasks, such as climbing or sailing. Knots were also regarded as having spiritual and religious symbolism in addition to their aesthetic qualities. The endless knot appears in Tibetan Buddhism, while the Borromean rings have made repeated appearances in different cultures, often symbolizing unity. The Celtic monks who created the Book of Kells lavished entire pages with intricate Celtic knotwork. Knots were studied from a mathematical viewpoint by Carl Friedrich Gauss, who in 1833 developed the Gauss linking integral for computing the linking number of two knots. His student Johann Benedict Listing, after whom Listing's knot is named, furthered their study. Trivial knots The early, significant stimulus in knot theory would arrive later with Sir William Thomson (Lord Kelvin) and his theory of vortex atoms. (Sossinsky 2002, p. 1–3) In 1867 after observing Scottish physicist Peter Tait's experiments involving smoke rings, Thomson came to the idea that atoms were knots of swirling vortices in the æther. Chemical elements would thus correspond to knots and links. Tait's experiments were inspired by a paper of Helmholtz's on vortex-rings in incompressible fluids. Thomson and Tait believed that an understanding and classification of all possible knots would explain why atoms absorb and emit light at only the discrete wavelengths that they do. For example, Thomson thought that sodium could be the Hopf link due to its two lines of spectra. (Sossinsky 2002, p. 3–10) Tait subsequently began listing unique knots in the belief that he was creating a table of elements. He formulated what are now known as the Tait conjectures on alternating knots. (The conjectures were proved in the 1990s.) Tait's knot tables were subsequently improved upon by C. N. Little and Thomas Kirkman. (Sossinsky 2002, p. 6) James Clerk Maxwell, a colleague and friend of Thomson's and Tait's, also developed a strong interest in knots. Maxwell studied Listing's work on knots. He re-interpreted Gauss' linking integral in terms of electromagnetic theory. In his formulation, the integral represented the work done by a charged particle moving along one component of the link under the influence of the magnetic field generated by an electric current along the other component. Maxwell also continued the study of smoke rings by considering three interacting rings.
The Archives
Scottish Andrew on Twitter
Join 257 other followers
Flickr Photos
More Photos |
1f52080d9ccf2ed3 | ATD 219-242
Page 219
they would have little clue . . . their more or less ambushed keesters
One of half a dozen Pynchonian circumlocutions for "wouldn't know [blank] if it bit them in the ass."
The Tetractys
True Worshippers of the Ineffable Tetractys
The Tetractys is a triangular figure consisting of ten points arranged in four rows: one, two, three, and four points in each row. As a mystical symbol, it was very important to the followers of the secret worship of the Pythagoreans, Kabbalists, and nutbars of other affiliations since. It has all kinds of symbological meaning, including the four elements, the organization of space, the Tarot, etc. Wikipedia entry;
In the Pythagorean tetractys — the supreme symbol of universal forces and processes — are set forth the theories of the Greeks concerning color and music. The first three dots represent the threefold White Light, which is the Godhead containing potentially all sound and color. The remaining seven dots are the colors of the spectrum and the notes of the musical scale. The colors and tones are the active creative powers which, emanating from the First Cause, establish the universe. The seven are divided into two groups, one containing three powers and the other four a relationship also shown in the tetractys. The higher group — that of three — becomes the spiritual nature of the created universe; the lower group — that of four — manifests as the irrational sphere, or inferior world. [1]
This division (three/four) has to be related to the "trivium" (grammar, rhetoric, logic) and "quadrivium" (arithmetic, geometry, music, astronomy) of the Medieval liberal arts.
More effably, if you flip the Tetractys left to right, it gives the positions of the pins in ten-pin bowling.
The acronym T.W.I.T is most appropriate: a twit is an ineffectual buffoon. Neville and Nigel are certainly twits.
I believe the above really misses the Big Symbol, i.e., Pynchon's linking of T.W.I.T. with the vagina, i.e., the female sex organ. "T.W.I.T." sounds like — no, is — a cross between "clit" and "twat." And, natch, it's headed up by Nookshaft. And, let's face it, that tetractys is surely an inverted beaver, yes? (See "Beavers of the Brain"). Its male counterpart is Candlebrow U., to be encountered down the road apiece (and that ain't no spoiler!).
"The Tetractys" is also the name of a poem by the Quaternionist prophet Hamilton. I can't imagine Pynchon didn't find it fairly interesting reading. Read it for yourself here
Chunxton Crescent
Invented by Pynchon. "Crescent" is a female symbol in many mythologies and cultures, and it reinforces T.W.I.T.'s association with the female sex. But "Chunxton"?
Eliphas Levi's Baphomet
The crescent is also said "to represent silver (the metal associated with the moon) in alchemy, where, by inference, it can also be used to represent qualities that silver possesses." (Alchemy and Symbols, By M. E. Glidewell, Epsilon.)
Additionally, the crescent was an important symbol for Eliphas Levi, occultist, magician, and spiritual antecedent to the Hermetic Order of the Golden Dawn, and, in turn, the T.W.I.T.
Chunxton may be derived from "chunk stone" or "chunk(s) town." I'm inclined to favor the first. "Chunk stone" has two main meanings: (1) stone that's quarried in chunks instead of blocks, slabs or crystals; (2) a magical stone that figures in some American Indian stories. Turquoise and amethyst chunk stones are often made into jewelry as-is, or larger chunks of (say) marble can be used as decoration. Here are links to two Indian stories in which people use chunk stones in finding or tracking: first, second. Of course it's also possible that "chunk" is the verb meaning "throw," in which case there ought to be a "glass houses" connection somewhere; I can't find it.
Pure speculation here, but our own moon is a giant "chunk" of "stone". And how did that "chunk" get there? Well, this being Thomas Pynchon's universe, sometime early in the solar system's history, this proto-planet called Orpheus comes along and smacks into the Earth so violently that it not only creates the moon, but at the same time expels enough water and gas to make "it possible for life on Earth to evolve as we currently know it." Seems to me like something worthy of Occultist reverence
In CoL49, TRP states at least twice that the Pacific Ocean is "the hole left by the moon's tearing-free and the monument to her exile." (The Crying of Lot 49, p.41)
"Tyburnia occupies the ground on the north side of Hyde-park and Kensington-gardens, and stretches from Edgware-road on the east to about Inverness-terrace on the west. This is not, strictly speaking, a fashionable quarter; but it is not absolutely unfashionable, and is a very favourite part with those — lawyers, merchants, and others—who have to reside in town the greater part of the year." Charles Dickens (Jr.), Dickens's Dictionary of London, 1879.
Sir John Soane
(1753 – 1837) was an English architect who specialised in the Neo-Classical style. Wikipedia entry
Madame Blavatsky
Helena Petrovna Blavatsky (1831-1891), Russian-born founder of the Theosophical Society. Madame Blavatsky claimed that all religions were both true in their inner teachings and false or imperfect in their external conventional manifestations. Wikipedia
Theosophical Society Seal
Theosophical Society
The Theosophical Society was founded in New York City, USA, in 1875 by H.P. Blavatsky, Henry Steel Olcott, William Quan Judge and others. Its initial objective was the investigation, study and explanation of mediumistic phenomena. After a few years Olcott and Blavatsky moved to India and established the International Headquarters at Adyar, Madras (Chennai). There, they also became interested in studying Eastern religions, and these were included in the Society's agenda. Wikipedia entry "Its post-blavatskian fragments" refers to the schism that occured between some of the founding members after the passing of H.P. Blavatsky in 1891.
Society for Psychical Research
The Society for Psychical Research (SPR) is a non-profit organization which started in the United Kingdom and was later imitated in other countries. Its stated purpose is to understand "events and abilities commonly described as psychic or paranormal by promoting and supporting important research in this area" and to "examine allegedly paranormal phenomena in a scientific and unbiased way."[1] It was founded in 1882 by a group of eminent thinkers including Edmund Gurney, Frederic William Henry Myers, William Fletcher Barrett, Henry Sidgwick, and Edmund Dawson Rogers. The Society's headquarters are in Marloes Road, London. Wikipedia entry
Rosy Cross of the Golden Dawn
Order of the Golden Dawn
The Hermetic Order of the Golden Dawn (or, more commonly, the Golden Dawn) was a magical order of the late 19th and early 20th centuries, practicing a form of theurgy and spiritual development. William Wynn Westcott, also a member of the Theosophical Society, appears to have been the initial driving force behind the establishment of the Golden Dawn. See also the aforementioned schism within the Theosophical Society. Wikipedia entry
of whom there seemed an ever-increasing supply
Supply of seekers, not of "arrangements." (Well, this contributor read it wrong . . . twice.)
century had rushed . . . out the other side
An instant of zero, not a whole year, because they aren't yet "out the other side" of 1900. ??? A century is 100 years. The one referred to here lasted from 1800-1899 and, since it's 1900, it has "rushed to its end."
Missing the point. The image focuses on the zero. And please, let's not have that sterile argument about when a century begins!
Don't know if this is of any significance, but in the Tarot the Fool (or Jester), says Wikipedia, is "often numbered 0." [2]
Page 220
not even if that tartan were authentic
It's a solecism in England, but is (or was—at least until well up in the 19th century) a prosecutable offense in Scotland, to wear the tartan of a clan one doesn't belong to. At the time of the action, Lew's offense against taste is not to wear tartan (see below in this entry) but to wear a tartan he isn't entitled to wear.
The previous statement doesn't quite jibe. In the late 17 cent. it was prosecutable for any Scot (read Highlander) to wear a tartan. Those tartans we see ascribed to clans were creations made to please Queen Victoria. Tartans and the Kilt are from Scottish and Irish Clans; from the oppressed. Thus, the fun in the line comes from the fact that an authentic tartan was false to begin with, but that doesn't keep Nigel from lording the fact that Lew's argyle sox are not up to snuff.
Kilts came from an earlier garment which covered more of the body than today's piece, and those in plaid were called Breacan, meaning partially colored or speckled. The plaids also came in trews (trousers), and ruanas (shawls). Many had uniformity in design, but probably because those were the colors available and thus recognized as part of a family, clan or sept.
Caen stone
A cream-colored limestone for building, found near Caen, France.
a primitive wind instrument consisting of several parallel pipes bound together; panpipes.
an ancient form of harp, so syrinx and lyre are like flute and harp. A famous Concerto for flute and harp is the work of G. F. Handel, who also composed the Messiah.
Ten sideshow acts for one admission. Wikipedia
Also, a description of the Tetractys.
masses of shadow . . . bright presences
We've had suggestions, at least, that shadow is more hospitable than brightness.
humans reincarnated as cats, dogs, and mice
Do the T.W.I.T. members just take the word of the creatures, or do they have some way to be sure?
Nicholas Nookshaft
Grand Cohen Nicholas Nookshaft's name reinforces the linking of T.W.I.T. to the female sex organ, "Nooky shaft" being a vulgarism for the vagina. Interestingly, "shaft" is both a rod or pole, or penis, as well as a vertical passageway, thus its connations are bisexual.
Anyone familiar with Ceremonial Magick is aware of Aleister Crowley. Crowley was famously bisexual, responsible for one of the most famous Tarot Decks — the "Thoth" deck — and was involved in spycraft for British Intelligence and, it is rumored, was a double agent for the Germans as well. Nicholas Nookshaft is a parody of Crowley.
Actually, given the chronology and the alliterative name, this is much more likely a parody of MacGregor Mathers. Mathers was the head of the Golden Dawn from 1896 or so until 1900--Crowley never was. Furthermore, Tarot references in AtD do not follow the names from Crowley's Thoth deck; Crowley renamed certain cards, and those names are not the ones used in AtD (i.e. in the Thoth deck, the "Temperance" card is renamed "Art").
Grand Cohen
'Cohen' is Hebrew for 'priest'.
Page 221
Couldn't have been the same world as the one you're in now
We can infer that Lew got blown up in one world and shifted to another. A review of the explosion episode, particularly with the annotations to p. 188, will be worthwhile.
Could this be the explanation for some of the most inexplicable scenes from the book thus far: Lew Basnight's mysterious offense, causing him to lose his wife, and his first encounter with the Drave group (around page 39); and Hunter Penhallow's escape from the mysterious creature (around page 154)? Parallel worlds?
Yashmeen Halfcourt
Her initials YH are the first half of the Tetragrammaton -- YHVH or YHWH in English.
seventeenth degree Adept
Masonic and other esoteric mystery schools have differing number of degrees. Attaining a degree shows that one has sufficiently mastered the material, undergone the tests and passed through any initiations involved with that degree.
The Masonic system has three degrees. These are extended to 32 in the Scottish Rite and a 33rd degree is the ultimate akin to a Distinguished Service award. By comparison, the Golden Dawn has 11 degrees divided in three orders; and the Order of the Temple of the East (Order Templi Orientis, O.T.O) has 12. In TWIT, the 17th appears to be the final degree where one becomes a Master TWIT or a Grand TWIT, I suppose.
Why 17 degrees? Other than 17 being prime, there seems to be no symbolic or geometric significance to 17. Since the Crowley-associated systems do not reach 17, whereas the Masonic system does, looking to the Masonic A & A Scottish Rite 17th degree we find it is the "Knight of the East and West" which teaches that loyalty to God is man's primary allegiance, and the temporal governments not founded upon God and His righteousness will inevitably fall. Compare this to the Bogomils later in AtD.
On the other hand, T.W.I.T. is centered on Tarot cards, so the relationship between number and any correspondences to the Tarot would be very much to the point. In this case, the Major Arcana assigned to the number 17 is the Star. The Crowley-associated system for Tarot consists of the Thoth Tarot deck, along with Crowley's "explanatory" 'Book of Thoth':
The full text can be found at's site on the Thoth "Star" card, albiet with the wrong card illustrated, in this case atu 18, "The Moon".
Symbolic and Cultural Meanings of 17:
Because 17 has no symbolic significance, it does! In The Illuminatus! Trilogy, the symbol for Discordianism includes a pyramid with 17 steps because 17 has "virtually no interesting geometric, arithmetic, or mystical qualities."
Described at MIT as 'the most random number', according to hackers' lore. This is supposedly because in a study where respondents were asked to choose a random number from 1 to 20, 17 was the most common choice.
The number of syllables in a haiku (5+7+5).
The number of special significance to Yellow Pig's Day and Hampshire College Summer Studies in Mathematics.
and on and on.....
A righteous Jew. Wikipedia "One whose merit surpasses his iniquity." The Talmud says that at least 36 anonymous tzadikim are living among us at all times; they are anonymous, and it is for their sake alone that the world is not destroyed.
The common theme between the Masonic 17th degree and Tzaddik seems to be righteousness.
Page 222
The Tetractys isn't the only thing round here that's ineffable
Schoolyard joke. "F" a euphemism for fuck, so "ineffable" = unfuckable also describes Yashmeen.
squadron commander
A squadron of hussars would number 100-200 troopers commanded by a major. (The linked page concerns Baden-Powell's regiment—the 13th, not the 18th—in the South African War.)
Auberon Halfcourt
Auberon means royal or noble bear.
Punning, "Au" is the chemical symbol for gold, thus, "Golden Bear", mascotte of UC Berkeley.
Eighteenth Hussars
Prestigious British cavalry regiment. Stationed in India 1864-76 and 1890-98; Halfcourt's secondment must have taken place at one of these times.
Summer capital of the British Raj in India in the Himalayas. Wikipedia.
A terminus of the Kalka-Simla railway line (built 1906) aka the "British Jewel of the Orient."
Named for the goddess Shyamala Devi, an incarnation of the Hindu Goddess Kali.
Smartly taken at silly point
A cricketing reference. Silly point is a fielding position very close to the batsman. examples
There are dozens of named fielding positions, but those called 'silly' (silly mid on, silly mid off, and silly point) are all close to the batsman, and therefore dangerous - fielders in these positions often wear protective helmets. The (very British) concept of sillyness was much explored by Monty Python's Flying Circus.
To know, to dare, to will, to keep silent
Mystical formula. examples The four precepts of Western Magick, extensively discussed in the writings of Aleister Crowley.
In the States, "detective" doesn't mean—
. . . An agent who solves criminal cases. The major "detective" bureaus hired personnel out as bodyguards and muscle.
"There is but one 'case' which occupies us"
This echoes the famous quote from Wittgenstein's Tractatus Logico-Philosophicus: "The world is all that is the case." (See the full text of the Tractatus here.) This quote also factors in heavily in V. (Specifically, in two places: there's the P's and Q's love song, and also in Captain Weissman's repeating, encoded, hallucinated message over the telegraph in Africa.)
The Number 22
I found it interesting that the significance of the number 22 was first brought up on page 222. might be nothing, really. 22 is the number of cards in the Major Arcana of the Tarot deck, the section of the deck that has been removed from the modern playing deck which only has the suits (elements) and the Court cards. The 22 Major Arcana are numbers 0 to 21 and move from The Fool card to the Universe. Purportedly and symbolically, the progression of cards tell a tale of the evolutionary path of the Soul in its course. The 22 cards also, in some systems, map onto the 22 paths that connect the spheres of the Kabalistic Tree of Life (which also is mentioned in this chapter). An understanding of the Tarot cards cannot be achieved with an understanding how they relate to the Tree of Life. They are the relationships between the Sephiroths which in turn at 10 in number, just like the Tetractys and portray the energies that flow from the highest monad of Divinity (Kether) down into the manifested world (Malkuth). Pynchon makes use of both the Tarot and the Kabalah in Against the Day as well as Gravity's Rainbow.
See also the novel The Greater Trumps by Charles Williams for a similar intrusion of the characters of the Major Arcana into everyday English life.
22 is two times two, so a quaternion...
Page 223
"And the crime... just what would be the nature of that?"
Might Lew himself be one of the 22 suspects? Perhaps the ineffable crime is what made people treat him like a pariah earlier in the book.
Page 224
"'walking out'"
A walking date.
the veil of maya
In Hinduism, maya is the phenomenal world of separate objects and people, which creates for some the illusion that it is the only reality. In Hindu philosophy, maya is believed to be an illusion, a veiling of the true, unitary Self. Many philosophies or religions seek to "pierce the veil" in order to glimpse the transcendent truth. Arthur Schopenhauer used the term "Veil of Maya" to describe his view of The World as Will and Representation. Wikipedia entry
the ancient London landscape . . . known to the Druids
Peter Ackroyd's recent London, the Biography devotes many pages to sacred and magical features of the city. "Druid".
London's royal barbers since 1875. site
And what other barber would you mention in a passage about the Greater Trumps . . . .
On this island [...] all English, spoken or written, is looked down on as no more than strings of text cleverly encrypted
A sentiment echoed in the first sentence of Pynchon's December 2006 letter written in defense of novelist Ian McEwan: "Given the British genius for coded utterance..." Image of Letter
crosswords in newspapers
The first crossword to appear in a newspaper was in 1913. Cryptic crosswords in British newspapers certainly match Pynchon's description. See, for example, the Listener crossword.
Page 225
Girton College
Of Cambridge University, for women, founded 1869. history
Next they'll be letting you folks vote.
Women over the age of 30 were, subject to certain qualifications, granted the right to vote in the UK by the Representation of the People Act 1918. The Representation of the People (Equal Franchise) Act 1928 granted women the vote on the same basis as men (i.e. from the age 21).
"the vast jangling thronged somehow monumental London evening"
This kind of eschewing of punctuation might be expected in Joyce but it's not typical of Pynchon and seems to serve no special purpose here. A typo?
Purposive or no, that ain't no typo. First, numerous compound adjectives reminiscent of Faulknerian portmanteau words are sprinkled throughout the book. Second, this particular deployment of zero-degree punctuation and massing of modifiers jibes with TRP's obvious delight in tripping us readers up and sending us back into sentences for another looksee. Finally, the musicality of this phrase sounds properly Pynchonlike t'me.
Pamela Colman Smith
Illustrator of the Rider-Waite-Smith Tarot deck "Wikipedia".
Arthur Edward Waite
Occultist and co-creator of the Rider-Waite Tarot deck. Wikipedia
four stone
56 pounds.
Ucken-fay is "pig latin" for 'fucken'.
gaver du visage
A literal translation of "stuff one's face", though this is not how it is said in French (it would be se gaver or se baffrer). cite
A smoking salon (divan) for cigar smokers.
Interestingly, a work by Robert Louis Stevenson, from 1903, entitled The Dynamiter begins with a "Prologue of the Cigar Divan".
Page 226
Seven Dials
bad area in London, see Wikipedia entry
The Devil by Colman-Smith
Four-wheeled carriage drawn by four horses. Supplanted by the Hansom cab.
Renfrew at Cambridge and Werfner at Göttingen
Note that each Professor's name is the other's spelled backward.
Also notice the theme of dual natures or forces. The two professors are "bound and ... could not separate even if they wanted to." They become rivals within the broader conflict of the 'Great Game' -- the political rivalry over Central Asia being played out by the various European powers, but especially by Great Britain and the Russian Empire.
Pynchon toys with the idea that World War I was really just the extension of an academic rivalry. This secret scholastic conspiracy also references the role supposedly played in the US policy establishment by neoconservatives [3] (or "neocons") in the run up to the US invasion of Iraq in 2003. Just as Pynchon's professors held great influence over a number of their students, "[s]ome of whom found employment with the Foreign Services", etc., neoconservative professors such as Leo Strauss [4] had a number of disciplines who came to occupy key positions in government and business (for example, Deputy Secretary of Defense (2001-2005) Paul Wolfowitz [5]). This interpretation is further bolstered by the geographic positioning of the "Bagdad" (sic) railway, and the Ottoman territories as the region "where Renfrew and Werfner have often found their best opportunities to make mischief".
Cambridge University is one of the oldest and the best universities in the world. In 2009 it will be celebrating its 800th Anniversary. In its early day, Cambridge was a center of the new learning of the Renaissance and of the theology of the Reformation; in modern times it has excelled in science. It is now a confederation of 31 Colleges (such as King's, Girton, St.John, Trinity and others mentioned in ATD), consists of over 100 departments and faculties, and other institutions. Since 1904, 81 affiliates of Cambridge have won Nobel Prize in every category: 29 in Physics, 22 in Medicine, 19 in Chemistry, 7 in Economics, 2 in Literature and 2 in Peace.
Göttingen University, one of the most famous universities in Europe, founded in Göttingen, Germany, in 1737 by King George II of England in his capacity as Elector of Hanover. At the end of the 19th century, it became world famous because of its Departments of Mathematics and Physics and rivaled Cambridge for eminence. The reputation of the university was founded by many eminent professors who are commemorated by statues and plaques all over the campus. It claimed 44 Nobel Laureates. But it suffered from the 1933 Great Purge of the Nazi crackdown on "Jewish Physics" and never recovered its original fame. David Hilbert, one of the greatest mathematicians of the 20th century and a professor at Göttingen, was asked in the 40s about the state of mathematics there now that the Jewish influence had been purged; he replied that there was no mathematics left at Göttingen now that the Jewish influence had been purged.
Berlin Conference of 1878
Divided Balkans after Russo-Turkish War. Wikipedia
bickering-at-a-distance A play on the idea of "action at a distance" theories in physics, a topic that came under much scrutiny at the time of AtD owing to its pertinence in the theories of electromagnetism and gravitation. See wikipedia for a further discussion and its relevance in quantum mechanics.
English, . . . , Japanese—not to mention indigenous—components
Not to mention them was exactly the point as the Great Powers sorted out the Ottoman possessions.
Page 227
"The Great Game"
The Great Game was a term used to describe the rivalry and strategic conflict between the British Empire and the Tsarist Russian Empire for supremacy in Central Asia. The term was later popularized by Rudyard Kipling in his novel, Kim. The classic Great Game period is generally regarded as running from approximately 1813 to the Anglo-Russian Convention of 1907. Wikipedia entry Also the name of Padzhitnoff's airship.
I believe the great game stands for Espionage in the Age of Gentlemen, the substance of Pynchon's Under the Rose.
mamluk lamps
A mosque lamp from the mamluk era.
...the Kabbalist Tree of Life, with the names of the Sephiroth spelled out in Hebrew, which had brought her more than enough of that uniquely snot-nosed British anti-Semitism...
Kabbalah is the ancient study of Jewish mysticism, long shrouded in mystery and kept from all but a devout few of the most dedicated Talmudic scholars. The Tree of Life is one of the central symbols of Kabbalah, supposedly a physical representation of the path of enlightenment from the most base knowledge of the physical world (at the bottom), to the highest spiritual planes of understanding (at the top). The Sephiroth are the nodes of the Tree, representing the various "stages" of understanding. Of course, tihs is all a very gross oversimplifcation and hardly does justice to the term itself.
The "Quabbalah" or "Cabalah" being studied by Madonna and others in Hollywood is a secularized and co-opted form of the original Kabbalah, which is deeply connected to the Torah and Jewish life.
In Medieval Europe, Kabbalist scholars wore amulets and other symbols on their clothing, and were often misunderstood to be magicians or wizards (think Merlin). The common magician's expression "abra cadabra" has Kabbalistic origins.
"Eskimoff . . . I say what sort of name is that?"
Tiptoeing around the real question, "Is she Jewish?"
English Rose
The phrase "English Rose" or "Bonnie English Rose" when applied to a woman means her skin is unblemished, her coloring subtle, her temper sweet. Madame Eskimoff, in short, is a beauty in a traditional English style.
(Incidentally, an officially unrecognized designation of roses.)
Page 228
Oliver Lodge
English physicist, inventor and writer (1851-1940) involved in the development of wireless telegraphy and radio. After the death of his son in 1915, Lodge became interested in spiritualism and life after death and wrote several books on the subject. Lodge conducted research on lightning, electricity, electromagnetism and wrote about the aether, themes that are repeated throughout ATD. Wikipedia entry.
William Crookes
English chemist and physicist (1832-1919) who worked in spectroscopy and whose work pioneered the construction and use of vacuum tubes. Like Oliver Lodge, Crookes was also a spiritualist, which appears to be Pynchon's reason for grouping him with others in this passage, although his experiments in electricity and light also tie in with these themes in ATD. Wikipedia entry.
Mrs. Piper
Probably Leonora Piper, 1857-1950. Wikipedia entry.
Eusapia Palladino
(1854-1918) Famous italian spiritualist medium. Wikipedia entry. It's fair to say she was often caught cheating.
W.T. Stead
William T. Stead (1849-1912), British writer, poet, social crusader, and spiritualist. He went down with the Titanic. Wikipedia entry.
Mrs. Burchell
The Yorkshire Seeress, investigated by WT Stead. cite
Trouble with the time here. Lew's timeline points pretty strongly to autumn 1900. A séance that's "about to" go on Mme. Eskimoff's résumé, however, leads the murder of the Serbian king and queen by three months, and the murder itself occurred in June 1903, which seems to imply March of that year.
This seems as good an instance as any to question the insistence of some here to pin down the exact date (and season?). Pynchon doesn't knock it to the wall, doesn't find cause to bother and I think the reason for that is obvious... the ambiguity lends a freer hand with which to paint. So don't fuck with the butterfly on the wheel.
Alexander and Draga Obrenovich, the King and Queen of Serbia
According to Wikipedia the assassination occured on 11 June 1903, so the seance at which Mrs. Burchell "witnessed" it, should have taken place in March 1903.
Parsons-Short Auxetophone
pic and info. The Auxetophone appears to have been a sound amplification device, not a recorder. Parsons did not enter the picture till 1903, so the apparatus would not have this name in 1900, but Short demonstrated it as early as 1898.
electros of the original wax impressions
A thin film of metal was electroplated onto the wax, then peeled off and wrapped around a new cylinder.
"Bagdad" railway
Page 229
A term used in both engineering and psychology. Psychology: "Characterized by a high degree of emotional responsiveness to the environment." Electricity: "Of or relating to two oscillating circuits having the same resonant frequency."
The syntonic comma, a small interval in the frequency ratio of 81:80, is a problem in musical temperament.
the Russo-Turkish War
The Russo-Turkish War (1877-1878), the latest Russo-Turkish War of many fought between these two contries since 16th century as a result of Russian attempts to find an outlet on the Black Sea and to conquer the Caucasus, dominate the Balkan Peninsula, gain control of the Dardanelles and Bosporus straits, and retain access to world trade routes. The last Russo-Turkish War came as a result of the anti-Ottoman uprising (1875) in Bosnia and Herzegovina and Bulgaria. On Russian instigation, Serbia and Montenegro joined the rebels; after securing Austrian neutrality, Russia openly entered the War in 1877. The War ended in 1878 resulted in the Treaty of San Stefano which so thoroughly revised the map in favor of Russia and her client, Bulgaria, that the European powers called a conference (the Congress of Berlin) to revise its terms by the Treaty of Berlin.
kilometric guarantee
Money offered by the government to building companies. Apparently, the railroad companies fooled the Ottoman Empire building trails which were much longer than needed. Google books citation
Page 230
King's... Girton
King's College is one of the most famous and historic colleges at Cambridge, founded in 1441. Girton College, Cambridge, was established in 1869 as the first residential college for women in England.
Michaelmas term
The fall term, starting early October (1900 here). Wikipedia
A between-maid.
Edward Oxford
attempted to shoot Queen Victoria and her husband, Prince Albert, at the time of her first pregnancy (1840).Wikipedia
had the young Queen died then without issue
Nookshaft posits two scenarios: (1) The implicit, unmentioned, and not as "interesting" possibility that everything is actual, as it "appears" to be in the "real" world, surrounding Queen Victoria; that she is simply an old, vain regent. (2) "the 'real' Vic is elsewhere," and the current, aged Victoria is a ghostly stand-in. Nookshaft implies that this figure is a proxy or puppet of Ernst-August. If this were "the case," then the question shifts to the following: (a) Is the ruler of the underworld, who holds the "real," eternally young Victoria captive in cahoots with Ernst-August in the "real" world? or: (b) Is the ruler of the underworld, who holds the "real," eternally young Victoria captive NOT in cahoots with Ernst-August, who nevertheless ascends to the throne with real-Vic out of the way, and imposes the stand-in? In which case: What would be the motivation of the underworld-entity third-party? And who, or what, specifically, is it?
sixty years ago
One event of 1840, the attempt on Victoria's life, is referred to as sixty years ago; another, the issue of the first adhesive stamps, as more than sixty years ago.
If it weren't for these nagging problems in Lew's timeline, we could peg the date as 1900.
Salic law
originated in the Late Roman Empire as Germanic tribes invaded and their law codes were translated into Latin and written down. Salic Law was that of the Franks who settled in present-day northern France and the law code of Charlemagne. Over the course of the Middle Ages it was largely replaced by Roman Law. For examples, see [6].
However, Salic Law continued to be used in a number of European areas to decide matters of noble inheritance. Specifically, Salic Law stated that no female could inherit rulership (above by Owl of Minerva 18:03, 4 April 2007 (PDT)) and, indeed, a royal or noble title could be inherited only through the "male line." When King William IV, ruler of both the United Kingdom and Hanover, died, the Crowns separated. Hanover practiced Salic law, while Britain did not. King William's niece Victoria ascended to the throne of Great Britain and Ireland, but the throne of Hanover went to William's brother Ernest Augustus, Duke of Cumberland. Wikipedia entry
Tory despotism
Not necessarily-- it describes Ernest himself. "The Duke of Cumberland had a reputation as one of the least pleasant of the sons of George III. Politically an arch-reactionary, he opposed the 1828 Catholic Emancipation Bill proposed by the government of the Prime Minister, the Duke of Wellington." Wikipedia entry
It can describe Ernst August and still be an allegory of Thatcher. The description of Ireland fits that of some world-views during her time.
All parallels between past and present are worth considering. They don't have to be direct references. The present-day Ernst August - famous for pissing on the Turkish Pavilion at EXPO 2000 - carries on the family tradition.
Someone famously cited James Joyce as proof that Catholics shouldn't get university educations.
Page 231
Orange Lodges
Lodges of the Orange Order, a protestant fraternal organisation based predominantly in Northern Ireland and Scotland. Wikipedia entry. The Orange Order was founded to subvert the United Irishmen of Wolfe Tone by agitating against Protestant and Catholic community. It was hostile to the idea of Irish Home Rule or independence. In the 1880's it developed the Ulster Unionist Party to politically parry Parliamentary attempts at Home Rule for Ireland.
"from the first to the twelfth of July, anniversaries of the Boyne and Aughrim."
i.e. anniversaries of the Battle of the Boyne and the Battle of Aughrim of the Williamite War in Ireland.
This was and still is known as "Marching Season" in Northern Ireland; the time when 'parades' are traditionally a source of fear and violence. Nearly all the parades are organized by the Orange Lodges and hence anti-Catholic.
The first adhesive stamp, 1840
"the first adhesive stamps of 1840"
This stamp has come to be called the Penny Black. Wikipedia entry
Penny Black is also the name of a character (p.18)
"immune to Time, [...] neither of them aging"
Cf Oscar Wilde's only novel The Picture of Dorian Gray, in which Dorian Gray remains young while his portrait ages.
Cf Stray's pregnancy, a "dreamy thing" (page 201). The definition of springtide is springtime.
Page 232
Éliphaz Lévi
A/K/A Eliphas Levi, nom de plume of Alphonse Louis Constant (1810-1875), French occultist and writer who pioneered a revival of Magick in the 19th Century, and was an influence on A.E. Waite, the Order of the Golden Dawn, and Aleister Crowley. An acquaintance of novelist Edward ("It was a dark and stormy night") Bulwer-Lytton. Wikipedia entry.
Punter is being used in the sense of someone who bets, someone who is taking a chance. Or more probably in the common extended sense meaning merely "customer"
Greek: things heard. Good information under "A" in the alpha index.
number twenty-four
Or 25? etext (According to a Greek version, number 4 in the etext above is not included in Iamblichus' list. If my source is correct, Pynchon is right.)
(ca. 245 - ca. 325, Greek) was a neoplatonist philosopher who determined the direction taken by later Neoplatonic philosophy, and perhaps western Paganism itself. He is perhaps best known for his compendium on Pythagorean philosophy.Wikipedia
Make-up, cosmetics; the application of make-up (especially in heavy or theatrical fashion).[2]
Page 233
Inflammation of a mucous membrane; usually restricted to that of the nose, throat, and bronchial tubes, causing increased flow of mucus, and often attended with sneezing, cough, and fever; constituting a common ‘cold’.[3]
Collis Brown's Mixture
Contained morphine, chloroform, and caramel, among other things. Full ingredients (Previous link not working. For info try here.)
Xylene abuse is similar to "glue sniffing"-- xylene is a strong solvent able to cause several damages to health, especially to the brain. wikipedia
a thousand pounds a year
Over $100,000 today. cite
Condy's fluid is pink to purple. Methylated spirits is a kind of denatured alcohol: 95% ethyl alcohol, 5% methyl alcohol. "Pinky" would have a variety of effects, very possibly including blindness.
Page 234
Condy's fluid
A disinfectant used to treat and prevent Scarlet Fever, among other things. Wikipedia
tonight's the night
Considering the content here, probable reference to Neil Young's drug-addled album and its title song, "Tonight's The Night" from 1975. Wiki
an important market street in the City of London.
A street originally for stabling; but in modern times often converted into houses/apartments.
Coombs de Bottle
"comes the bottle" ?
Russian duck
Page 235
sensitive flames
Cf GR p.29-32, 715.
extractors . . . distillation columns
Separatory apparatus. An extractor works on differences in solubility, a distillation column differences in volatility.
tremblers and timers
A trembler is a kind of motion detector used in both bombs and alarms; one kind has a flexible stem with a heavy contact on the free end so that disturbing the package it contains causes a trigger circuit to close. A timer uses a clocklike mechanism to bring two contacts together.
proper solvent procedures
Famous 1960s "Anarchist Cookbook" was infamously inaccurate. Amazon w/author's note
Page 236
Breathless hush in the close tonight
Dr. De Bottle quoting from Henry Newbolt's poem "Vitaï Lampada," which makes school games a metaphor and model for martial bravery.
The Gentleman Bomber of Headingly
Cf Hornung's 'Gentleman Thief' and cricket player, Raffles. info
Reminds me of the Krikkit Robots in Douglas Adams' Life, The Universe, and Everything, where a bomb is put in place of a Cricket Ball at a match between Britain and Australia.
Also, acronymically, the GBH=Grievous Bodily Harm, the British term for felonious assault.
Here and elswhere the spelling of the cricket ground should be 'Headingley'.
The Ashes
An international cricket series between England and Australia dating back to 1882. dates A number of references in this chapter relate to this rivalry. For example, on this page the English cricket ball is compared to the Australian "kookaburra". Kookaburra is the brand name of the balls used in Australia, in England it's Duke. The properties of the English ball was one of the keys to England's success in the summer of 2005. Was Pynchon's writing here influenced by the hype in the UK at the time?
A poison gas used in World War I.Wikipedia
Source of red dye. Wikipedia
A helper, assistant. [4]
Misspelling of exhilaration.
Page 237
beige substance
Presumably Cyclomite.
Happy Birthday! . . . Gemini
Ordinarily you would think this tagged the date as 21 May to 20 June Wikipedia. But other evidence in the text points to deepening autumn.
One of two possible explanations:
1. The T.W.I.T. is perhaps using an ascendent or lunar based astrological system rather than the solar-based system commonly used in the West. This resolves the apparent contradiction of a Gemini in autumn since the ascendent travels through all signs every 24 hours and the moon travels through the entire zodiac once a month. For example, Vedic astrology looks primarily to the ascendent, then the moon, and lastly the sun to study respectively the body, the mind and the spirit of the native. Basnight does have a mind that operates on two planes -- hence a moon in Gemini reading.
2. The explosion carried Lew to a place on the other side of the Sun. Deep autumn would then be November 23 to December 21th, our sign of Sagittarius.
get the Ashes back . . . next year
On page 236 the Ashes (Test Matches, cricket competitions between England and Australia) are "in progress." At some time previous to this conversation Mme. Eskimoff said England will regain the trophy "next year" provided they use the young bowler Bosanquet (next entry). Test Matches took place in (a) December 1901 to March 1902, Australia victorious; (b) May to August 1902, Australia again; (c) December 1903 to March 1904, England bringing back the Ashes and Bosanquet figuring as a key bowler.
If Mme. Eskimoff has foreseen aright, "next year" is 1904 and the time of the action is 1903. The conflict in dates is troubling: In a matter of weeks and a few pages, Lew just misses the 1900 Hurricane and gets information that definitely points to 1903. (And he proves to be a Gemini with an autumn birthday!) I don't think there is anything accidental—or negligible—about the discrepancies.
Another Ashes reference. Bernard Bosanquet invented the bosie (or googly), as described here, around 1900. A major factor in England's 2005 Ashes success was reverse swing, another type of delivery whose physical dynamics are poorly understood.
Check out the "Cricket in Against the Day article by Peter Vernon, which is an in-depth look at, well, cricket in Against the Day.
A somewhat derogatory term for a British person, commonly used in Australian English. Also Pommy or Pommie.
Hebrew letter Shin
Obviously a nod to the Vulcan greeting in Star Trek, with the distinctive hand sign and the phrase, "Live long and prosper." Perhaps also to the Jewish faith of Leonard Nimoy, who played Spock. See The Jewish origin of the Vulcan Salute
Pynchon placed one of these in Mason & Dixon, as well:
Dixon discovers "The Rabbi of Prague, headquarters of a Kabbalistick Faith, in Correspondence with the Elect Cohens of Paris, whose private Salute they now greet Dixon with, the Fingers spread two and two, and the Thumb held away from them likewise, said to represent the Hebrew letter Shin and to signify, 'Live long and prosper.'(M&D p.485)
Might there be a further connection between The Cohen of T.W.I.T., the "Cohens of Paris" and these backwoods Kabbalists?
Also, note the hand on the devil tarot card above.
...and if we look back to the Devil tarot card we see the shin hand sign and the inverted pentagram. Thus through Eliphas Levi and then Coleman-Smith/Waite a connection is created between shin and the inverted pentagram. And then we can make connections with the Jeshimonians and the TWITsters.
This might be why "the cure grows right next to the cause" in Jeshimon. They are under the winged protection of God-the-Destroyer.
British term indicating complete ordinariness. Possible Etymology (Dog's Bollocks, British Or German Standard): Wiktionary
Page 238
Second Law of Thermodynamics
The law of entropy... "The entropy of an isolated system not in equilibrium will tend to increase over time, approaching a maximum value at equilibrium." (Rudolf Clausius) [9]
There's no such thing as a perfectly efficient engine, i.e., a box that does work by taking in heat from where there is lots of heat (e.g., combustion chamber) and throwing off heat where there is not much (exhaust pipe). Something always gets lost. Similarly, the transfer of money from where there is plenty (bank) to where there isn't much (Europe) is never perfectly efficient.
"He began then, bewilderingly, to talk about something called entropy. The word bothered him... But it was too technical for her. She did gather that there were two distinct kinds of this entropy. One having to do with heat engines, the other to do with communication... The two fields were entirely unconnected, except at one point: Maxwell's Demon. As the Demon sat and sorted his molecules into hot and cold, the system was said to lose entropy. But somehow the loss was offset by the information the Demon gained about what molecules were where... Entropy is a figure of speech, then, a metaphor. It connects the world of thermodynamics to the world of information flow." The Crying of Lot 49 (Pages 84 - 85)
morsus fundamento
Latin: A bite on the ass?
The meaning is that he wouldn't know metaphysics if it bit him in the ass. Like "octogenarihexation" ("86"-ing) in Vineland--the vulgar faux fancied up.
three-percent consols
British "consolidated" bonds, for many years the conservative investment par excellence. wikipedia
Page 239
Not mental as in "of the mind" but mental as in "mad". "You're mental, you are" is a common british playground taunt.
Colney Hatch
London lunatic asylum. Wikipedia
Out of the dust . . . beam of morning sunlight
I.e., sometimes your horse wins.
An encyclical is a letter circulated by the pope or other figure of high authority in a body of believers. A comprehensive Wikipedia article explains and adds a list of papal encyclicals. An encyclical usually takes its first 2 or 3 words as its title (Multi et Unus in this case).
Of course, the Vatican would strongly protest that McTaggart, an atheist, should send out an encyclical!
Seems to refer to a historical logician joke. explanation Professor McTaggart was, perhaps, the most famous philosopher who argued that Time did not exist as we seem to experience it. W.H. Hardy was a very famous Cambridge mathematician who knew all the famous philosophers in England.
John McTaggart Ellis (J. M. E.) McTaggart (1866-1925), British philosopher. He was born in London and educated at Clifton College, Bristol and Trinity College, Cambridge. He lectured Philosophy at Trinity College from 1897 to 1923. His brilliant commentaries and studies on Hegel's dialectic (1896), cosmology (1901) and logic (1910) were preliminaries to his own constructive system-building in Nature of Existence (3 vols., 1921-1927). In his 1908 essay "The Unreality of Time" he argued that our perception of time is an illusion (Cf page 412: dismissing . . . the existence of Time).
Godfrey Harold Hardy (1877-1947), English mathematician. He was a lecturer at Cambridge (1906-1919), professor at Oxford (1919-31) and Cambridge (1931-47). Concurrently with Wilhelm Weinberg developed Hardy-Weinberg law (1906) describing genetic distribution and dequilibrium in large populations. He was also known for contributions to complex analysis, Diophantine analysis, Fourier series, distribution of prime numbers, etc.
Multi et Unus
Many and One.
Is the grafitti in Cambridge another cricketing reference? Dukes are the balls used in England (cf. p236). Chucking (or bending the arm when bowling) is an emotive topic in cricket that arises from time to time. It first arose around 1900 [10]. In 2005 it caused administrators to change the rules of the game [11].
"Create More Dukes" has a second meaning, suggested by the odd choice of verb. Duke in Britain refers to the highest rank of nobility, and fittingly there are not many of them. At present only about a dozen people hold the title. Since sometime in the 1870s new dukes have been created (by decree of the monarch) only in the royal family. Most recently at the time of the action, Queen Victoria had promoted a run-of-the-mill marquess to the dukedom of Fife to set the stage for his marriage to one of her granddaughters. If some group of activists thought the nation needed to beef up its peerage, they might adopt the slogan found here as a graffito.
Here is a colorful summary of UK dukes today and through history, although it is unsound on coats of arms and such. This site has more names and fewer pictures, listing all the titles (from dukes to lowly barons) created since the year Dot.
the Laplacian, a relatively remote mathematicians' pub
A little Pynchonian joke? The Laplacian operator is a component of the Schrödinger equation, the basis of quantum mechanics. Quantum mechanics was famously rejected by Albert Einstein (many references on the net but see Stephen Hawking), known for his theories of relativity. Moreover, quantum mechanics deals with the very small and relativity with the very large (this is a simplification of course), so the Laplacian is indeed remote from relativity!
No such pub during my stay in Cambridge (1998-2000). Also not today, according to this list.
Obviously more than a little joke. Refers to Pierre-Simon, marquis de Laplace (1749-1827)Wikipedia Entry, aka "the French Newton", probably the greatest mathematician and astronomer of his time. Most of the scientific principles derived from his findings are explored in AtD (from lumineferous ether to the existence of black holes). Laplace was also instrumental in the advancement of the science of probabilities.
The ever quotable Laplace, much loved by atheists worldwide, famously replied to Napoléon, when he asked why there was no mention of God in his treatise on astronomy: "Sir, there was no need for that hypothesis". Also responsible for what is known as the Laplacian principle: "The weight of evidence for an extraordinary claim must be proportioned to its strangeness."
The literal translation of Laplace is "The Place".
The connection of the Laplace operator to the Schrödinger equation and quantum mechanics is a bit of a stretch -- the Laplace operator is ubiquitous, appearing in the heat equation, the wave equation and the Navier-Stokes equations.
Page 240
Worse than Gordon at Khartoum
Refers to Charles George Gordon, British Major-General, whose attempted defense of Khartoum versus Arabi rebels in 1884-85 ended with his beheading. Wikipedia cf. Basil Dearden's 1966 film Khartoum, in which the role of Gordon is played by Charlton Heston.
Page 241
"You recognize him?"
As, presumably, Webb.
How can that be? Webb is dead, there's nothing to suggest he went to England, the costume is not right for him, and—most tellingly—his medium is dynamite, not phosgene.
Who might Lew recognize in the photo? The "suspects" are Neville, Nigel, the Grand Cohen, Dr. Coombs De Bottle, Clive Crouchmas and Professor Renfrew. If Prof. Werfner looks much like Prof. Renfrew, he goes on the list too. If the "Gentleman Bomber" could possibly be female, add Yashmeen and Mme. Eskimoff. We haven't met anyone else (except members of the Icosadyad, who don't have faces).
Suppose we rule out the ladies and Werfner. Neville or Nigel wouldn't be able to hide their identities with a suit of white flannels. Renfrew is sitting right there when Lew sees the picture, but Lew's reaction (his stomach sinks) does not seem Lew-like if it's Renfrew he has recognized, plus Renfrew himself wants to meet the Bomber. That leaves the Cohen, De Bottle and Crouchmas.
Would Lew experience dread on spotting Crouchmas? He doesn't know much about C.C. at this point, so it isn't clear why he would suppress that recognition.
Seeing the Cohen might lead to this gastric reaction: Lew might think he's on the fringe of an anarchist group again (and look where it got him the last time). The Cohen stays on the list.
Dr. De Bottle not only follows cricket but bets on it; he speaks almost with reverence about phosgene; he knows a nonobvious fact about the bombs; and he dresses like a gentleman. None of these points applies to the Cohen. And recognizing De Bottle would give Lew that sinking feeling because D.B. is purportedly fighting against bombers on behalf of the government. De Bottle goes to the top of the short list.
Alternately, there's no clear answer and not enough clues (especially considering the role of time, forces beyond anyone's control, double agents, etc.). This Gentleman Bomber can be any person from Lew's past or a deja vu from the future. The G. Bomber seems to be England's answer to the Kieselguhr Kid, a nebulous personality working against the forces of history. The important thing about this situation is not the Bomber's identity, but the fact that Lew is being thrown into an assignment much like his last one in America (and we know how that ended...) He's obviously not very happy about it, and not inclined to tell anyone what he knows, or might know.
For what it's worth, my take was also Webb, especially in the context of all the bilocation business. It isn't "Webb" but evil alter-land "Webb"! (no dig on ya'll Brit folk intended, although the " stay at Cambridge" bit was just wonderful :)
--There is a suspect that was left off that list, who occurred to me before any of the others--Lew himself. If we're dealing with bilocation, doubles, and the possibility that 'our' Lew was brought here via the explosion in the creek bed, couldn't the G.B.H. be this world's Lew? Recall that just prior to the explosion, Lew had resolved to choose a side in the Anarchist/plutocrat battle, and had come down on the side of the people. Did this world's Lew make the same choice, somehow ending up in Britain... Also conspicuous is that it is Renfrew showing him the photo with a sly expression, Renfrew who's own double, Werfner, is his very own nemesis? That's how I read it anyway. But, much like the Kieselguhr Kid, we the readers never actually know the identity of this renegade bomb-lobber.
A bosie from a beamer
More cricket! A bosie is now more commonly known as a googly (cf. p237). A beamer is a full-pitched delivery that reaches the batsman above waist height.
Page 242
The northern hemisphere
German: uncanny, sinister.
1. From The Secret Teachings of All Ages by Manly P. Hall (1928)
2. The Oxford English Dictionary. 2nd ed. 1989
3. Def.3. The Oxford English Dictionary. 2nd ed. 1989.
4. Def.1. The Oxford English Dictionary. 2nd ed. 1989.
Annotation Index
Part One:
The Light Over the Ranges
Part Two:
Iceland Spar
Part Three:
Part Four:
Against the Day
Part Five:
Rue du Départ
Personal tools |
cbc50d0d5fe5af36 | Ladder Operators
Mathematically, a ladder operator is defined as an operator which, when applied to a state, creates a new state with a raised or lowered eigenvalue [1]. Their utility in quantum mechanics follows from their ability to describe the energy spectrum and associated wavefunctions in a more manageable way, without solving differential equations. We will discuss the most prominent example of the use of these operators; the quantum harmonic oscillator. Their use does not end there, however, as the mathematics of ladder operators can easily be extended to more complicated problems, including angular momentum and many body problems. In the latter case, the operators serve as creation and annihilation operators; adding or subtracting to the number of particles in a given state.
Quantum Harmonic Oscillator
This diagram shows the energy levels and wavefuntions for the harmonic oscillator potential. Image taken from ref [2]
The one dimensional harmonic oscillator is often referred to in quantum mechanical calculations as many systems can be approximated by that potential when close to an equilibrium point [4]. As we know, for the harmonic oscillator, the potential is given by
In class, we discussed the energy spectrum and solutions for the time-independent Schrödinger equation, which in this case is the following:
In this formulation, our operators are defined using the coordinate basis. Notice the first term represents kinetic energy$\frac{P^2}{2m}$, while the second represents the potential. Accordingly, we have operators for momentum and position as follow:
Of course, other bases exist, including the momentum basis or the energy basis, in which the expression of these operators might be different. The true beauty of the ladder operator method is that we can define the Hamiltonian in the energy basis without specifying the form of the operators. All that is needed is knowledge of their commutator, which is independent of basis. We will return to this idea later. For the moment, we can continue by rewriting the above Schrödinger equation to show explicitly the operation on $\psi$.
The ladder operator method is sometimes referred to as the “method of factorization” because the next step involves defining the factor of the term in brackets [3]. If we were dealing with numbers rather than operators, it would be clear that
In the case of operators, we cannot assume that $cd=dc$. However, we can continue the examination by defining two new operators, corresponding to the two sets of parenthesis above.
Or, in terms of the previously defined position and momentum operators,
These are our ladder operators. To facilitate their use, we need to determine their commutation relation. We can easily show $[\bold{X},\bold{P}]=\italic{i} \hbar$. Using the definition of the commutator $[\bold{X},\bold{P}]=\bold{XP}-\bold{PX}$
Dropping our test function f(x), we see the commutator is indeed $\italic{i} \hbar$. Now we compute the products of our ladder operators.
Notice that the first term is simply the sum of the energies, H.
Also, from above
And therefore
Often times, the ladder operators are each defined with a multiple of $\sqrt{\hbar \omega}$ so as to make this commutator equal to one and describe the energies in units of $\hbar \omega$ [3]. We will continue with the present definition for the moment.
Schrödinger Equation in terms of ladder operators
Note the Schrödinger equation becomes
Here is where the ladder operators become especially useful. If $\psi$ is a solution of the equation, we can demonstrate that $a^+ \italic{\psi}$ is also. Keeping the commutator in mind,
In the same manner,
So, $(a^+\italic{\psi})$ is an eigenvector with an energy one unit $\hbar \omega$ greater than $\italic{\psi}$, and $(a\italic{\psi})$ is a solution of the hamiltonian with one $\hbar \omega$ less energy than $\italic{\psi}$. The operators can be said to have created or annihilated one quanta of energy equal to $\hbar \omega$ . For this reason they are also termed creation $(a^+)$ and annihilation $(a)$ operators [5]. Furthermore, starting with any solution, we can simply apply the ladder operators successively to generate any other solution.
We know the harmonic oscillator contains a ground state with minimum energy, below which no state exists. Then, if we apply the annihilation operator, we must get 0 as the result. In other words, a “lowest rung” must exist on our ladder of allowed energies and states [4].
Or, inserting our first definition for the lowering operator, we can solve for $\psi_0$
This can be solved with simple integration
Where $A_0$ is a normalization constant, in this case $(\frac{m \omega}{\pi \hbar})^{\frac{1}{4}}$ . So, assuming a lowest state allowed us to infer its form, if we chose a basis to express the operator in. Even without specifying a formulation, we can find the energy of that level [3], which clearly should not depend on basis.
We can define the number operator as
Where, again, many formulations of ladder operators incorporate the divisor into the operators themselves. The number operator, when acting on a state, simply returns the number of the current energy level. Using ladder operators, then, we have completely defined the harmonic oscillator states and energy levels
Ladder operators are seen in many facets of quantum mechanics. Earlier, we defined the ladder operators in terms of momentum and position operators. With little effort, we could easily define X and P as linear combinations of the ladder operators. Because many of the potentials we are concerned with are functions of position only, ladder operators for other systems can be defined in a similar way. These formulations offer a method of working with such problems without solving the differential equations.
In the theory of quantum fields, the momentum and potential of a region are simultaneously described in space-time by a state field. The mechanism of creation and annihilation operators is essential in this case, allowing us to describe the state as a combination of these operators, thus quantizing the field [6].
We have seen that ladder operators and their commutator relationship are all that are needed to completely solve the quantum harmonic oscillator. We were able to do so without ever actually addressing the choice of basis or solving the differential equations (though I did both in order to write a recognizable form of the ground state, we could leave our work in terms of the operators). Were this the only case where ladder operators proved useful, they would still merit much study. Fortunately, they find wide use in other application of quantum theory, and often make calculations much easier.
3. Shankar, R. Principles of Quantum Mechanics2nd ed. (chp. 7), New Haven:Plenum Press, 1994
4. Griffiths, D. Introduction to Quantum Mechanics(chp. 2), NJ: Prentice Hall, 1995
6. Schiff, L. Quantum Mechanics3rd ed. (chp. 14)/ New York: McGraw-Hill, 1968 |
2d287b0ee151fdb4 | About this Journal Submit a Manuscript Table of Contents
Advances in Astronomy
Volume 2009 (2009), Article ID 632064, 7 pages
Research Article
Quantum Theory, Noncommutative Gravity, and the Cosmological Constant Problem
Department of Astronomy and Astrophysics, Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400 005, India
Received 22 January 2009; Accepted 11 November 2009
Academic Editor: Zdzislaw E. Musielak
Copyright © 2009 T. P. Singh. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
The cosmological constant problem is principally concerned with trying to understand how the zero-point energy of quantum fields contributes to gravity. Here we take the approach that by addressing a fundamental unresolved issue in quantum theory, we can gain a better understanding of the problem. Our starting point is the observation that the notion of classical time is external to quantum mechanics. Hence there must exist an equivalent reformulation of quantum mechanics which does not refer to an external classical time. Such a reformulation is a limiting case of a more general quantum theory which becomes nonlinear on the Planck mass/energy scale. The nonlinearity gives rise to a quantum-classical duality which maps a “strongly quantum, weakly gravitational” dynamics to a “weakly quantum, strongly gravitational” dynamics. This duality predicts the existence of a tiny nonzero cosmological constant of the order of the square of the Hubble constant, which could be a possible source for the observed cosmic acceleration. Such a nonlinearity could also be responsible for the collapse of the wave function during a quantum measurement.
1. Introduction
The observed notion of time, with which we are so familiar, is external to quantum mechanics. It is part of a classical spacetime geometry, which comprises of a spacetime manifold and the metric. The metric is determined by classical matter fields via the field equations of general relativity. In principle, the Universe could be in a state in which there are no classical matter fields, but only quantum fields. In such a situation, the metric of the Universe will in general no longer be classical, but will undergo quantum fluctuations. It is known from the Einstein hole argument that in order for the spacetime manifold to have a physically meaningful point structure, a well-determined classical metric (which is a solution of the Einstein equations) must reside on the manifold. When the metric is undergoing quantum fluctuations, the point structure of the spacetime manifold is destroyed, and one no longer has a classical notion of time [1].
Nonetheless, one should be able to describe the dynamics of a quantum system, even if an external classical time is not available. Such a description should become equivalent to standard quantum mechanics as and when a dominant part of the Universe becomes classical, so that a classical time now exists. In arguing for the existence of such a reformulation, one is led to conclude that standard linear quantum theory is a limiting case of a more general quantum theory which is nonlinear on the Planck mass/energy scale [2]. This conclusion is independent of any specific mathematical structure which one would like to use to develop the reformulation.
What is our most reliable guideline towards the construction of such a reformulation of quantum mechanics? A natural mathematical structure which forgoes the point structure of spacetime is a noncommutative spacetime. We construct the reformulation by pursuing the following proposal: in the reformulation, relativistic quantum mechanics is the same theory as noncommutative special relativity, with a specific set of commutation relations imposed on noncommuting coordinates and momenta. The physical principle is that the basic laws are invariant under “inertial” coordinate transformations of noncommuting coordinates. One is naturally led to attach an antisymmetric part to the Minkowski metric. The theory is supposed to describe dynamics when gravity can be neglected (like in special relativity). In the present context, this amounts to the requirement that the total mass/energy in the system be much smaller than Planck mass/Planck energy. As and when an external time becomes available, this reformulation should become equivalent to standard quantum mechanics. These aspects will be described in Section 2.
The nonlinear generalization of this reformulation, a hitherto unnoticed feature which arises naturally, describes the dynamics of the system when its energy becomes comparable to Planck energy. The Schrödinger equation becomes nonlinear and the gravitational dynamics is now a noncommutative general relativity. The physical principle now is that basic laws are invariant under general coordinate transformations of noncommuting coordinates. This is supposed to generalize general covariance to the noncommutative case. When the mass/energy becomes much larger than the Planck scale, the dynamics is assumed to reduce to classical general relativity, and classical mechanics. This is discussed in Section 3.
The presence of the nonlinearity has two important consequences. Firstly, the antisymmetric part of the gravitational field associated with this nonlinearity suggests the existence of a quantum-classical duality, as a consequence of which one can match a dominantly quantum sector of the theory to a dominantly classical sector. This is the subject matter of Section 4. In turn this helps us understand why the cosmological constant should be nonzero and yet have the very small value it does. This is the main part of the paper, and it will be presented in Section 5.
The second important consequence of the nonlinearity has to do with the nonlinearity in the Schrödinger equation, which becomes relevant in the vicinity of the Planck mass scale. This can lead to a breakdown of quantum superposition, and it could lead to the collapse of the wave-function during a quantum measurement. What is important for us here is that the parameters influencing the collapse of the wave-function are in principle measurable in the laboratory. These are the same parameters which are responsible for the existence of the quantum-classical duality, and for the nonzero value of the cosmological constant. Thus our explanation for the origin of the dark energy is in principle testable experimentally, via the quantum measurement process. This aspect is investigated in Section 6.
The arguments of this paper suggest that a dynamically evolving “cosmological constant-like” term is present throughout the history of the Universe. At any given epoch, such a term is supposedly of the order of the square of the Hubble constant at that epoch. The cosmological viability of such a scenario will be discussed in Section 7.
In this paper we have attempted to keep the discussion compact, so as to provide an essential overview of the arguments. More detailed discussions can be found in [13].
2. Quantum Mechanics as a Noncommutative Special Relativity
The quantum dynamics of a relativistic particle of mass is described here as a noncommutative special relativity. Gravity is neglected in this small mass limit since this approximation is equivalent to setting . We outline here a proposal for the desired reformulation, using the illustrative case of a two-dimensional noncommutative spacetime described by coordinates . It should be said at the outset that our treatment is heuristic, and a rigorous mathematical description remains to be developed. We assume that associated with the 2d noncommutative spacetime, there is a line element:
which has an antisymmetric component. We call such a spacetime a quantum Minkowski spacetime, and the noncommuting coordinates are assumed to obey the commutation relations
We will comment on the function shortly.
We assume that a suitable differential calculus can be defined on this spacetime. Then, in analogy with special relativity, we introduce a velocity and a momentum . It is evident from the form of the line element (1) that the following Casimir relation holds
The specific structure of the commutation relations above is such that the momenta, as well as the coordinates, do not commute with each other. Moreover, while appears in one of the relations, it is which appears in the other relation. This is motivated by the expectation that one should be able to derive the uncertainty relations of quantum theory, and the quantum commutation relation from these underlying relations [2].
The function in (2) has to be chosen so that the momenta commute with the Casimir relation. It is easy to show that in fact there is no nontrivial solution in two dimensions; the only solution is , which is clearly not of interest. However, in dimensions three or higher, there appears to be no constraint that , although the exact form of remains to be found. Our subsequent discussion here does not depend on the form of , and it suffices to use the 2d example to illustrate our ideas.
Dynamics is defined by assuming that the momenta are gradients of a complex action . This converts the Casimir relation into a noncommutative Hamilton-Jacobi equation, which is the equation of motion. This is the theory we call a noncommutative special relativity.
As and when an external classical spacetime becomes available, the Klein-Gordon equation of standard linear quantum mechanics can be recovered from this reformulation by the correspondence rule
The justification for this rule has been discussed in [2]. On the right hand side of this equation, the momenta are again defined as the gradients of a complex action , and the wave-function defined as . Substituting for the wave function on the right-hand side of (4) and equating this expression to lead to the Klein-Gordon equation. In this sense one can recover standard quantum mechanics from an underlying formulation as a noncommutative special relativity.
3. A Noncommutative General Relativity
When the mass of the particle becomes comparable to Planck mass, its self-gravity can no longer be neglected.The noncommutative line element (1) is modified to the curved noncommutative line element
Correspondingly, the Casimir relation (3) is generalized to
and the correspondence rule is generalized (4) to
It is important now to note that if one rewrites this Hamilton-Jacobi equation in terms of the wave-function, one no longer gets the linear Klein-Gordon equation. This is because the metric appears in the equation. In the simplest case, where is a function of , and the diagonal components of the metric are approximated to unity, we get the equation of motion
which is equivalent to a nonlinear Klein-Gordon equation [2].
The noncommutative metric is assumed to obey a noncommutative generalization of Einstein equations, with the property that goes to one for , and to zero for . Also, as one recovers classical mechanics, and in the limit standard linear quantum mechanics is recovered.
In the mesoscopic domain, where is away from these limits and the mass is comparable to Planck mass, both quantum and gravitational features can be defined simultaneously, and new physics arises. The antisymmetric component of the gravitational field plays a crucial role in what follows.
4. A Proposed Quantum-Classical Duality
4.1. Motivation for the Duality
In general relativity, the Schwarzschild radius of a particle of mass can be written in Planck units as , where is Planck length and gm is the Planck mass. If the same particle was to be treated, not according to general relativity, but according to relativistic quantum mechanics, then one half of the Compton wavelength of the particle can be written in Planck units as . The fact that the product is a universal constant cannot be a coincidence; however, it cannot be explained in the existing theoretical framework of general relativity (because herein ) and quantum mechanics (because herein ).
One could attempt to trivialize this observation by saying that in general relativity, the only length scale that can be constructed is proportional to mass, and in relativistic quantum theory, the only length scale that can be constructed is inversely proportional to mass. However, what is non-trivial is that these both length scales have a fundamental physical meaning attached to them. Hence their inverse relation to each other does call for an explanation and is a signal that both general relativity and relativistic quantum theory must be limiting cases of a deeper underlying theory. In fact we have argued for the existence of such a theory in the previous section for entirely different reasons.
The only plausible way to explain this inverse relation is to propose a duality between a pair of solutions of the theory—a duality which maps the Schwarzschild radius for the first solution to the Compton wavelength for the second solution. Hence we propose and justify the following quantum-classical duality: the weakly quantum, strongly gravitational dynamics of a particle of mass is dual to the strongly quantum, weakly gravitational dynamics of a particle of mass .
It follows that the dimensionless Schwarzschild radius of is four times the dimensionless Compton-wavelength of .
The origin of this duality lies in the requirement that there be a reformulation of quantum mechanics which does not refer to an external classical spacetime manifold. The implied nonlinearity leads to a quantum gravity theory of which general relativity and quantum theory are natural approximations, and the duality is inevitable. Its existence does not depend on the use of noncommutative geometry for the mathematical formulation of the theory. The use of noncommutativity serves to illustrate and justify the duality.
The Planck mass demarcates the dominantly quantum domain from the dominantly classical domain and is responsible for the quantum-classical duality. As is evident from (8), the effective Planck constant is , going to zero for large masses, and to for small masses, as expected. Similarly, the effective Newton gravitational constant is expected to be , going to zero for small masses, and to for large masses.
Thus the parameter space is strongly quantum and weakly gravitational, whereas is weakly quantum and strongly gravitational. The Compton wavelength for a particle of mass gets modified to , and the Schwarzschild radius for a mass gets modified to . We propose that the dynamics of a mass is dual to the dynamics of a mass if . This holds if and
If (9) holds, the solution for the dynamics for a particle of mass can be obtained by first finding the solutions of (8) for mass , then replacing by , and finally writing instead of , wherever appears.
We can deduce the functional form of by noting that the contribution of the symmetric part of the metric, , to the curvature, grows as , whereas the contribution of the antisymmetric part must fall with growing . This suggests that grows linearly with ; thus
and implies ; and we set since this simply defines as the scaling mass. Hence we get , which satisfies (9) and thus establishes the duality. The mapping interchanges the two fundamental length scales in the two solutions: Compton wavelength and Schwarzschild radius.
The duality we observe is holographic, by virtue of the abovementioned relation . Thus, the number of degrees of freedom that a quantum field associated with the particle possesses (bulk property) should be of the order of the area of the horizon of the dual black hole in Planck units (boundary property), that is, . This value of could be interpreted as follows: the infinite number of degrees of freedom associated with a quantum field in the flat spacetime continuum limit (when no artificial high-energy cutoff has been imposed) has been replaced by this finite value. More correctly however, the effective number of degrees of freedom is actually of the order , because we have and so the highest energy associated with a mode of the quantum field cannot be more than Planck mass.
In summary, we see here a new picture for the dynamics of a particle. A particle need not be either quantum or classical, but there is a third possible kind of dynamics, mesoscopic dynamics, which interpolates between quantum and classical. This dynamics is described by a nonlinear Schrödinger equation (see (13) below). The nonlinear term depends on the newly introduced parameter , and its nature is such that the nonlinearity vanishes in the small mass limit, , . On the other hand, the nonlinear Schrödinger equation reduces to Newton's classical laws of motion in the limit , . This interpolating behaviour, where one makes a transition from quantum to classical mechanics via an intermediate nonlinear quantum mechanics, is not ruled out by experiment. Its verification or otherwise in the laboratory will constitute a crucial test of these ideas.
5. The Cosmological Constant Problem
The quantum-classical duality helps understand why there should be a cosmological constant of the order of the observed matter density, a possible explanation for the observed cosmic acceleration. If there is a nonzero cosmological constant term in the Einstein equations, of the standard form , it follows from symmetry arguments that in the noncommutative generalization of gravity, a corresponding term of the form should also be present. This latter term vanishes in the macroscopic limit but is present in the microscopic limit .
However, when , the effective gravitational constant goes to zero, so cannot be sourced by ordinary matter. Its only possible source is the zero-point energy associated with the quantum particle . Since this zero-point energy is necessarily nonzero, it follows that is necessarily nonzero. This same manifests itself on cosmological scales, where is nonvanishing, because is non-vanishing, even though goes to zero on cosmological scales, because goes to zero. Essentially we are saying that we have to examine the two limits of : the microscopic limit and the macroscopic limit; the value of arising at one of the limits will clearly be the same as its value at the other limit.
This solves the vexing problem of the cancellation of (i) a bare coming from general relativity and (ii) a coming from the zero-point energy of quantum fields. This problem arises in the first place because we have allowed ourselves to treat general relativity and quantum theory as completely disconnected theories. The nonlinearity of the theory suggested here, the consequent duality, and the introduction of the antisymmetric component of the metric compel us to treat the two theories as limiting cases of an underlying theory, and to conclude that the so-called bare and the “quantum ” are one and the same thing. The question of their mutual cancellation does not arise any longer.
The value of can be estimated by appealing to the deduced quantum-classical duality. The total mass in the observable Universe is , where is the present value of the Hubble constant. The mass dual to this is , and is roughly the magnitude of the zero-point energy in the ground state. In a higher mode, the energy is a multiple of the ground state energy, and we write it as , recalling that the effective Planck constant runs with energy. To obtain a rough estimate, we take to be one for energies up to and zero for energies beyond . We then see that the total contribution to the zero-point energy is
It is remarkable that Planck's constant drops out of the sum! The vacuum energy density, and hence the value of the cosmological constant, is which is of the order of the observed value of .
We note that the ground state energy is being mapped to a total energy , which is an instance of a UV-IR mixing, or equivalently, a quantum-classical duality. As goes to zero, the IR limit goes to zero, whereas the UV limit diverges.
Clearly, nothing in this argument singles out today's epoch; hence, it follows that there is an ever-present , of the order , at any epoch, with being the Hubble constant at that epoch. This solves the cosmic coincidence and fine-tuning problems. However, issues related to an ever-present will have to be addressed—we will return to this aspect in the last section.
5.1. Understanding
The standard quantum field theoretic cosmological constant problem does not arise here because we have brought in a new scale, the Hubble constant. In effect, we are proposing that since is the age of the Universe, there is a fundamental minimum frequency, that is, . All allowed frequencies are discrete multiples of , with the maximum being at Planck frequency. As a result, the net zero-point energy comes out to be . By itself, this is higher than , but we must recall that duality demands this much to be the classical contribution to the cosmological constant. Hence the energy density is found by dividing the total energy by the volume of the observed Universe, giving a value for that matches with observations. Thus although the argument for obtaining the magnitude of given here draws input from quantum theory, our argument is completely different in concept of what is suggested by quantum field theory. For us, duality is playing a crucial role.
6. Testing for Dark Energy Through Quantum Measurement
We would now like to suggest that the above proposal for the origin of the cosmological constant can in principle be tested in the laboratory by examining the quantum mechanics of mesoscopic systems, because the latter is also affected by the nonlinearity of the underlying theory.
Firstly, as discussed above, the effective Planck constant is , and using the form of that we have, we can write
Thus a measurement of the Planck constant for a “mesoscopic particle” with mass approaching of the Planck mass will show a deviation from the standard value. By particle, we mean a composite object in the required mass range whose internal degrees of freedom can be neglected.
Secondly, the nonlinearity can result in a breakdown of quantum superposition during a quantum measurement, leading to collapse of the wave-function and a finite lifetime for superpositions. A great deal has been written about the physics of quantum measurement over the last century or so. It is fair to say that there are essentially only two possibilities: either the wave function collapses during a quantum measurement, or it does not. If it does not, then the many worlds interpretation holds, and the different worlds do not interfere because of decoherence. If the wave function does collapse, then a modification of the Schrödinger equation in the mesoscopic domain is indicated. We have argued that the nonlinearity resulting from removal of external time favors the collapse picture [1].
As we discussed above, on the Planck mass/energy scale, the Klein-Gordon equation becomes nonlinear. In the nonrelativistic limit, it results in the following nonlinear Schrödinger equation:
This equation can be rewritten as
where and . Norm is preserved during evolution, provided the probability density is defined as .
Since nonlinearity is negligible for the quantum system, prior to the onset of a quantum measurement, evolution is described by
thus preserving superposition. The onset of measurement corresponds to mapping the state to the state of the final system as
where is the state the measuring apparatus would be in, had the initial system been in the state .
Evolution is now described by the equation
where and . is the total mass of the final system, which includes the quantum system as well as the measuring apparatus. The states cannot evolve as a superposition because the evolution is now non-linear. However, the initial state at the onset of measurement is a superposition of the . This superposition must thus break down during further evolution, according to the law
Note that the 's have been set to be different for different states. This is to be expected because will be determined by the quantum state, and setting it as a function only of to begin with was a leading order approximation, applied for simplicity. We thus get
and only the state with the largest survives [1]. In this manner, the inclusion of a nonlinear term breaks superposition.
In order to recover the Born probability rule, it is essential that the 's be random variables, with a suitable probability distribution. Only further development in theory can determine wherether the 's are indeed random, and if so, what their probability distribution is. A highly plausible candidate for a random variable is the phase of the quantum state at the onset of measurement. Although the phase evolves in a deterministic manner, it is effectively random, because the time at which the measurement begins is arbitrary.
From (18) we can define the lifetime of a superposition
Since is strictly equal to one in standard linear quantum mechanics, quantum superposition has an infinite lifetime in the linear theory. However, the situation begins to change in an interesting manner as the value of the mass approaches and exceeds . Since we know that in this limit approaches zero, we can neglect , and the superposition lifetime will then essentially be given by
We can get a numerical estimate by noting that we are close to the classical limit, where the phase coincides with the classical action in the Hamilton-Jacobi equation. To leading order, the magnitude of the classical action is given by , where is the time over which we observe the classical trajectory; approximately, this could be taken to be the value of the phase , and is then roughly given by
For a measuring apparatus, if we take the linear dimension to be, say, cm, and the time of observation to be, say, seconds, we get the superposition lifetime to be seconds. We can get a very rough estimate of for a mesoscopic system using (22), and taking cm, gm, and with . This gives seconds. Thus an experimental detection of dependence of superposition lifetime on the mass (equivalently number of degrees of freedom) of the system could be indicative of the nonlinearity.
The third possible way in which a nonlinearity of this nature can be detected is through rapid successive measurements of a quantum observable. Suppose a certain outcome for an observable results from the random variable being in a certain range . Suppose now that a second measurement is made sufficiently quickly with the eigenbasis slightly rotated. Because the random variable will not have changed to a value sufficiently different from the original one, the result of the second measurement will show a correlation with the result of the first measurement, contrary to what standard quantum mechanics predicts.
A more detailed discussion of the physics of measurement described here will be presented elsewhere [4].
7. Can There Be an Ever Present ?
A positive cosmological “constant” which is of the order of at every epoch is obviously not a constant and does not satisfy the standard equation of state . Furthermore, by increasing the rate of expansion in the early Universe, it spoils the consistency between theory and observation with regard to the abundance of light elements. It also makes galaxy formation more difficult later during the evolution of the Universe. It is thus evident that although the cosmic coincidence problem can be solved by an ever-present , one has to ascertain that the resulting cosmological model must be consistent with observation. A way out, as has been suggested by Sorkin, is to have a cosmological constant whose mean value is zero, but which has fluctuations with a typical magnitude of the order of [5]. As a starting point, this seems like a reasonable possibility for us also, considering that in our model, the origin of lies in the zero-point energy contribution coming from quantum theory. However, the development of a cosmological model in the context of our scenario is an issue we have not yet addressed, and we leave this for future investigation.
A phenomenological model for an ever-present has been partially developed in the context of the causal set approach to quantum gravity [6]. In this approach, a fluctuating of the order is predicted because is conjugate to the spacetime four volume, and this volume itself is subject to quantum fluctuations. The phenomenological model is specified by choosing a suitable equation of state for and expressing as a stochastic function of the four volume. A numerical study by the authors shows tracking behavior in , as well as fluctuations. For a suitable choice of a free parameter, a consistent with the present observed value is reproduced. It has however been pointed out by Barrow [7] that the model is very strongly constrained by the magnitude of the CMB anisotropy on the last scattering surface. It remains to be seen whether a way can be found out to overcome this constraint, by constructing an inhomogeneous version of the phenomenological model, or otherwise. An alternative investigation on the origin of based on quantum gravitational fluctuations has been carried out by Padmanabhan [8, 9].
On a more general note, we observe that the theoretical prediction of an ever-present nonzero cosmological “constant” of the order of is independent of the details of the cosmological model. Essentially, all we have assumed is a homogeneous and isotropic cosmology, but we have placed no a priori restrictions on the evolutionary history of the scale factor. Thus although we originally set out to seek an explanation for the observed cosmic acceleration in the framework of the standard Big Bang cosmology, we could turn things around and ask the following question: given an ever-present , does it admit a nonstandard cosmology consistent with observations? To us, the answer to this question is not obvious, and in our view the question merits further careful examination.
8. Concluding Remarks
Our use of noncommutative spacetime has a conceptually different origin as compared to applications based on the seminal work of Doplicher, Fredenhagen, and Roberts [10]. In the latter, spacetime noncommutation relations are deduced as a consequence of the joint application of quantum uncertainty relations and the rules of general relativity on the Planck length scale. One then envisages that quantum field theories exhibit effects induced by these spacetime commutation relations, on the Planck length scale. Also, on these scales general relativity could be assumed to be replaced by a noncommutative gravity theory which should eventually be quantized.
For us, the starting point has been that there should be a reformulation of quantum mechanics which does not refer to a classical time. This leads to the conclusion that linear quantum theory is a limiting case of an underlying theory which becomes nonlinear on the Planck energy scale. This is the principle difference from the theories referred to in the previous paragraph—the latter assume a strict validity of linear quantum theory at all scales. For us, this nonlinearity is responsible for the explanation of the tiny observed cosmological constant, and possibly also the collapse of the wave-function during a quantum measurement. In order to arrive at the proposed reformulation of quantum mechanics, we are led to suggest noncommutativity not only in spacetime, but also in momentum space. While the detailed theory remains to be developed, some consequences of the heuristic discussions given here can be tested in the laboratory.
The author would like to thank Aruna Kesavan, Kinjalk Lochan, and Aseem Paranjape for useful discussions.
1. T. P. Singh, “Quantum mechanics on a noncommutative geometry,” Bulgarian Journal of Physics, vol. 33, no. 3, pp. 217–229, 2006. View at Google Scholar
2. T. P. Singh, “Quantum measurement and quantum gravity: many-worlds or collapse of the wavefunction?” Journal of Physics: Conference Series, vol. 174, Article ID 012024, 19 pages, 2009. View at Publisher · View at Google Scholar
3. T. P. Singh, “Noncommutative gravity, a ‘no strings attached’ quantum-classical duality, and the cosmological constant puzzle,” General Relativity and Gravitation, vol. 40, no. 10, pp. 2037–2042, 2008. View at Publisher · View at Google Scholar · View at Scopus
4. K. Lochan and T. P. Singh, in preparation.
5. R. D. Sorkin, “Is the cosmological “constant” a nonlocal quantum residue of discreteness of the causal set type?” in Proceedings of the 13th International Symposium on Particles, Strings, and Cosmology (PASCOS '07), pp. 142–153, London, UK, November 2007. View at Publisher · View at Google Scholar
6. M. Ahmed, S. Dodelson, P. B. Greene, and R. Sorkin, “Everpresent Λ,” Physical Review D, vol. 69, no. 10, Article ID 103523, 8 pages, 2004. View at Publisher · View at Google Scholar · View at Scopus
7. J. D. Barrow, “Strong constraint on ever-present Λ,” Physical Review D, vol. 75, no. 6, Article ID 067301, 3 pages, 2007. View at Publisher · View at Google Scholar · View at Scopus
8. T. Padmanabhan, “Why do we observe a small but nonzero cosmological constant?” Classical and Quantum Gravity, vol. 19, no. 17, pp. L167–L173, 2002. View at Publisher · View at Google Scholar · View at MathSciNet · View at Scopus
9. T. Padmanabhan, “Vacuum fluctuations of energy density can lead to the observed cosmological constant,” Classical and Quantum Gravity, vol. 22, no. 17, pp. L107–L112, 2005. View at Publisher · View at Google Scholar · View at MathSciNet · View at Scopus
10. S. Doplicher, K. Fredenhagen, and J. E. Roberts, “The quantum structure of spacetime at the Planck scale and quantum fields,” Communications in Mathematical Physics, vol. 172, no. 1, pp. 187–220, 1995. View at Publisher · View at Google Scholar · View at Scopus |
ca0a6afee5c994f9 | Applications of Quantum Chemistry
Image Credits: vchal/
Quantum chemistry, also known as molecular quantum mechanics, is a division of chemistry that employs quantum mechanics to the study of chemical systems to mathematically describe the fundamental properties of atoms and molecules.
At the level of atoms and sub-atomic particles, objects behave very differently to how they might behave normally; quantum theory is an attempt to describe the behavior of matter and energy in this sub-atomic state. Quantum chemistry enables scientists to understand matter at this most fundamental level by using quantum mechanics in physical models and experiments of chemical systems.
Quantum chemistry offers a complete knowledge of the chemical properties of a system and implies the computation of the wave function that describes the electronic structure of atoms and molecules.
There are two aspects of quantum mechanics which makes quantum chemistry different from previous models of matter;
1. Wave-particle duality – the need to think of very small objects such as electrons as having characteristics of both waves and particles.
2. Quantum mechanical models correctly predict that the energy of atoms and molecules is always quantized, in other words, they only have specific amounts of energy.
Quantum chemistry is a powerful tool to study the ground state of individual atoms and molecules, and the excited and transition states that arise during chemical reactions. Quantum chemical theories allow scientists to explain the structure of the Periodic Table and quantum chemical calculations allow them to accurately predict the structures of molecules and the spectroscopic behavior of atoms. It can be employed to understand, model and forecast molecular properties and their reactions, properties of nanometer materials, and reactions and processes occurring in biological systems.
Schrödinger and Theoretical Quantum Chemistry
In 1925, Erwin Schrödinger investigated what an electron might look like as a wave-particle around the nucleus of an atom. The result was an equation for particle waves, which now acts as a starting point for the quantum mechanical study of the properties of atoms and molecules.
Theoretical quantum chemistry aims to calculate predictions of quantum theory as atoms and molecules can only have discrete energies. Chemists employ the Schrödinger equation to determine the allowed energy levels of quantum mechanical systems and solving the equation usually the first phase of solving a quantum chemical problem with the result inferring the chemical properties of the material.
However, a precise solution for the Schrödinger equation can only be calculated for hydrogen; because all other atomic, or molecular systems, involve three or more particles, their Schrödinger equations cannot be solved exactly and so estimated solutions are given.
Quantum Chemistry Methods
There are two commonly used methods to solve Schrödinger’s equation – ab initio and semi-empirical methods.
1. Ab initio: A solution to the equation is obtained from the first principles of quantum chemistry using rigorous mathematical approximations and without using empirical methods. It utilizes two strategies to solve the equation: the first is wavefunction based, and the second is density functional-based, which involves the study of the properties of the system through its electronic density, but avoids the explicit resolve of the electronic wavefunction.
2. Semi-empirical methods: these are less accurate and use experimental results to avoid the solution of some terms that appear in ab initio methods.
Experimental Quantum Chemistry
Experimental quantum chemists rely heavily on spectroscopy – IR spectroscopy, NMR spectroscopy, and scanning probe microscopy – to obtain information about the quantization of energy on a molecular scale. It has great value in supporting and interpreting experimental spectroscopic data. A close collaboration between theoretical calculations and experiments has produced many chances for quantum chemistry calculations to classify species found in spectra and to propose new avenues for experimental study.
Sources and Further Reading
Kerry Taylor-Smith
Written by
Kerry Taylor-Smith
• APA
Taylor-Smith, Kerry. (2019, August 26). Applications of Quantum Chemistry. AZoQuantum. Retrieved on July 03, 2020 from
• MLA
Taylor-Smith, Kerry. "Applications of Quantum Chemistry". AZoQuantum. 03 July 2020. <>.
• Chicago
Taylor-Smith, Kerry. "Applications of Quantum Chemistry". AZoQuantum. (accessed July 03, 2020).
• Harvard
Taylor-Smith, Kerry. 2019. Applications of Quantum Chemistry. AZoQuantum, viewed 03 July 2020,
Tell Us What You Think
Leave your feedback |
75929795d3080d63 | Let's consider the Schroedinger equation \begin{equation} i\hbar\frac{\partial}{\partial t}\psi=-\frac{\hbar}{2m}\nabla^2\psi \end{equation} If I have a wavefunction $\psi$ as a solution, then its complex conjugate $\psi^*$ is not a solution. If I'm not mistaken this means that the Schroedinger equation is NOT invariant under charge conjugation C, am i right? And what does $\psi^*$ represent, if $\psi$ is the wave function describing my particle?
When I move to Klein-Gordon equation my book says that $\psi$ and $\psi^*$ are both solutions of KG equation, where $\psi$ contains the operators for destruction of particle and creation of antiparticle and $\psi^*$ for creation of particle and destruction of antiparticle. Does this mean that KG equation is invariant if I "complex-conjugate" it? And if this is the case what is the physical meaning of S equation being not invariant under charge conjugation, differently from KG equation?
• $\begingroup$ That Schrödinger equation there knows nothing about "charge". Why do you think $\psi\mapsto \psi^\ast$ is "charge conjugation"? $\endgroup$ – ACuriousMind May 30 '16 at 18:12
• $\begingroup$ Found a question here on stackexchange (physics.stackexchange.com/q/102838) where it was said "Having no explicit charge in your equation, the charge conjugate symmetry operation would be simply taking the complex conjugate of the wave function". But I'm new in symmetries concepts and so on. $\endgroup$ – Luthien May 30 '16 at 18:18
• $\begingroup$ Ok I thought about it and you're right, it's totally wrong in this context, i was thinking about "complex conjugation" when I wrote this question. $\endgroup$ – Luthien May 30 '16 at 18:42
Complex conjugation has nothing to do with charge conjugation. Charge conjugation flips quantum numbers, which don't appear at all in the standard Schrodinger equation.
The actual symmetry related to complex conjugation is time reversal. However, to actually perform time reversal, you must also replace $i$ with $-i$, so the time reversed Schrodinger equation is $$-i\hbar \frac{\partial}{\partial t} \psi^* = - \frac{\hbar}{2m} \nabla^2 \psi.$$ This is equivalent to the original equation, so the Schrodinger equation is time symmetric. More generally, this is because the Hamiltonian is Hermitian.
| cite | improve this answer | |
• $\begingroup$ Does this mean that $\psi^*$ is my solution $\psi$ that has been time-reversed? I mean, if $\psi$ is a solution in my "original" world, is $\psi^*$ a solution in a "time-reversed" world? $\endgroup$ – Luthien May 30 '16 at 18:34
• $\begingroup$ @Luthien Basically, yes. $\endgroup$ – tparker May 31 '16 at 3:39
Your Answer
|
126eda6d45f8f150 | Circular Math
Five years ago today I posted, Beautiful Math, which is about Euler’s Identity. In the first part of that post I explored why the Identity is so exquisitely beautiful (to mathematicians, anyway). In the second part, I showed that the Identity is a special case of Euler’s Formula, which relates trigonometry to the complex plane.
Since then I’ve learned how naive that post was! It wasn’t wrong, but the relationship expressed in Euler’s Formula is fundamental and ubiquitous in science and engineering. It’s particularly important in quantum physics with regard to the infamous Schrödinger equation, but it shows up in many wave-based contexts.
It all hinges on the complex unit circle and the exp(i×π×a) function.
As you’ll see, this post brings together many previous posts:
• Beautiful Math, which introduces Euler’s Identity and Formula. (For this post, it’s important that you’ve read this one.)
• Many posts about complex numbers, and in particular the complex plane. (In this post you do need to know what the complex plane is.)
• Fourier Curves, which introduces some Fourier transform ideas, including that it’s used in JPEG compression. (Optional reading.)
An irony about that last post is that, except for the title and image captions, I don’t actually mention Fourier transforms at all. (It was mainly about the pretty pictures.) In the next two posts I plan to show you how Fourier transforms work.
That — and more — is based on the following:
One might see just an opaque bit of math. One might recognize the mathematical operation of exponentiation — something to the power of something. One might even recognize the constants e and i but not have any sense of their appearance, use, or implications, here.
The point is that the expression might look mysterious, but it doesn’t look surprising.
On the other hand, if one is more familiar with the constants and the operation, one might very well find the expression not just mysterious, but surprising. And weird! There may be a sense of WTF? or that it can’t work (or at least how does that work?).
The level of hey wait a minute may only increase with this:
What in the world can it possibly mean to raise the transcendental number, e, to the power of i times π? How is that even possible? And how can that be equal to minus one?
One clue is that we’re using the imaginary unit, i, which means the complex numbers, which means the complex plane, which means there is a geometric aspect to this (and it’s the geometry that’s cool and makes Fourier transforms work).
§ §
To make sense of it, we must extend the definition of exponentiation from the basic one most of us first learned.
That definition saw the exponent as an integer specifying how many times to multiply the base number times itself. For instance:
We multiply the base number, N, times itself five times. Simple, but as we got a little deeper into math, we did encounter other definitions:
Note that all three of the above equalities can be mathematically justified using the hyper-important exponent rule:
There are also logarithms, which use real exponents, which gets us closer to the idea of something like π as an exponent.
(Regarding pi, 10π is just the number 1,385.455731… — as with the logarithm above, pi is just another real exponent with a value a bit more than three. It’s using i as an exponent that’s the weird addition here.)
The point here is that, even on the outskirts of math, exponents are part of a larger picture, some of which you’ve probably already seen.
With that introduction, here’s a very good (and very short) video that introduces the ideas behind what at first seems an impossible notion:
(Note that 3.14 minutes is not the same as 3:14 minutes!)
The Executive Summary: In the extended definition, exponentiation with i becomes rotation on the complex plane. When we choose to use e as the base, a full rotation of 360° is exactly 2π radians, which is the circumference of a circle and which therefore allows a direct mapping to trigonometry.
(As I’ve posted about not long ago, multiplication by i rotates points by 90° on the complex plane. See also: Matrix Rotation)
This mapping to trig is where Euler’s Formula comes in:
eia = cos(a) + i sin(a)
Both sides of the expression are ways to denote a complex number that lies on the complex plane at angle a. As you might imagine, treating a complex number as a single exponential value offers some calculation advantages.
(Euler’s Identity, e = -1, is the specific case where a=π.)
But why e? What’s so special about that number? This (14-minute) video explains:
The short version is that the function ex is its own derivative, which tames certain aspects of the math.
As the video gets into, derivatives are important throughout physics (and other areas of life, such infection rates and your bank account). We encounter derivatives everywhere; velocity, for example, is the derivative of distance over time (as seen in the first video).
In general, the function ax (for some value of a) is proportional to its derivative, but only when a=e is the derivative equal. (Remember that e is just a transcendental constant with the value 2.7182818284590…)
The upshot is that, when e is the base, there are useful manipulations of formulas containing it (as seen in both videos). That it’s the base of the natural logarithm makes it a natural choice in many situations (as, for instance, the unit circle on the complex plane).
As far as how the calculation is actually carried out, the short form is:
The exp(x) function is the same thing as raising e to the power of x. It makes the expression easier to write and typeset. It also makes it more clear that “e-to-the-x” is a function.
That short form expands to:
For however large we make n. (The larger it is, the more accurate the calculation.) Note that x can easily be a complex number in this expression.
Just remember: ex = exp(x)
§ §
Figure 1. Various ax curves.
Let’s start with the general exponential function.
Figure 1 shows a set of exponential curves.
Each curve plots ax for some value of a.
The blue curves have values for a above one, and the purple curves have values below one.
The red curve (flat at one) is a=1.0. (One to the power of anything is just one.)
The green curve is a=e.
The dark-blue curve is a=1.5. I included it to show how the flat a=1.0 curve starts to deflect upwards for positive values of x (and downwards for negative values).
The three blue curves, from outside-in, are a=2.0, a=4.0 & a=8.0. (Notice the green a=e curve is between the 2.0 and 4.0 curves — the value of e is 2.71828…)
The purple curves, from outside-in, are a=0.50, a=0.25 & a=0.125. When values for a increase above 1.0, the curve deflects more and more sharply upwards (after passing through 1.0 at x=0.0). For values of a less than 1.0, the curve deflects more sharply as the value decreases towards zero.
All curves pass through 1.0 at x=0.0, because a0 is always 1.0.
There is a bit of magic we can apply regarding the base value, a:
In other words, ax is the same as exk for some constant k — specifically the natural log of a. So for example, if a=3:
This means we can draw an identical set of curves as Figure 1 using just the exp(x) function (which is exactly how I did it).
We can also leverage that nice derivative property and unify our mathematical approach. So it’s the right-hand version that usually appears in physics formulas. That’s part of why it shows up so often.
§ §
The exp(x) function does much more than create “exponentially rising” curves. Here is another set of curves drawn with the exp(x) function:
Figure 2. Various Gaussian curves calculated with the exp function.
You may recognize these as “bell curves” — their formal name is Gaussian (after mathematician Carl Gauss). They are described by the Gaussian function:
Which, remember, is the same thing as:
The constants A, B & C, control the shape of the curve. The value of A determines the height of the peak; the value of B controls where it is centered, and the value of C (which must be greater than zero) controls its width.
In Figure 2, the value of A is 1.0, the value of B is 0.0, and the various curves reflect different values of C.
(Gaussian curves show up a lot in quantum mechanics as the localization of momentum, energy, or position, to name a few. This is closely tied to what comes next.)
Now consider the following double-chart, which shows two ways of representing the same thing:
Figure 3. A waveform (upper) and its Fourier transform (lower).
Both show a combination (or superposition) of three sine waves at three different frequencies: 13, 27, and 42 cycles per second. The upper chart shows the energy at given times, and the lower chart shows energy at given frequencies.
The upper chart averages three copies of the exponential function, each generating a different frequency sine wave (note the use of i here):
The lower chart averages three copies of the Gaussian function, each centered on one of the frequencies:
Two ways of looking at the same thing. The lower chart represents a Fourier Transform of the waveform. The upper chart can be constructed from frequency information via a Inverse Fourier Transform.
§ §
The Fourier transform is a fundamental tool in wave physics.
Audio Spectrum Analyzer
As one example, audio spectrum analyzers are common among audio engineers and enthusiasts, because they display, in real time, how much energy (sound) exists at different frequencies. (They’re also fun to watch.)
After good old VU meters, they are one of the more common audio displays (see an example image to the right).
Such analyzers use a Fourier transform to break an incoming signal — which is energy varying in time (the upper chart in Figure 3) — into its component frequencies. (The image here is essentially the same thing as the lower chart in Figure 3.)
In quantum mechanics, Fourier transforms are involved in solutions to the Schrödinger equation, and they underpin the Heisenberg Uncertainty Principle.
That’s where I’ll pick up next time.
For this and many other posts I’m deeply indebted to Grant Sanderson and his YouTube channel, 3Blue1Brown. If you have any interest in math, you should to subscribe to this channel. It’s by far the best math illumination channel I’ve encountered.
I owe a lot of my recent progress to this guy and his videos!
Stay exponential, my friends. Go forth and spread light and beauty.
About Wyrd Smythe
8 responses to “Circular Math
• Wyrd Smythe
I mentioned justifying the equalities using the exponent rule:
There is also a rule: N1 = N. (We saw that in how all the simple exponential curves passed through 1.0.)
So for this equality:
It’s just:
N5 = N1+1+1+1+1
N5 = N1 × N1 × N1 × N1 × N1
N5 = N × N × N × N × N
The next two are ever so slightly more involved:
For the first one, first consider that, trivially:
N0+x = Nx
And also:
N0 × Nx = Nx
Therefore (dividing both sides by Nx):
N0 = 1
Justifying that basic identity. With that in mind:
Nx-x = N0 = 1
Nx × N-x = 1
Dividing both sides by Nx:
N-x = 1/Nx
With those examples, for now I’ll leave the last one for the interested reader. This second one was a bit involved — the third one is a lot easier.
• Wyrd Smythe
If you want to get a bit deeper into Euler’s formula, here’s a video Grant did during the Lockdown phase in the first half of this year. It’s a bit long (51 minutes) and oriented mostly towards a high school math level, but it’s worth watching if you have an interest.
• Wyrd Smythe
One more. This one is definitely “optional reading” but it again explores Euler’s ubiquitous formula and also introduces group theory, which is another major part of mathematics (it’s 24 minutes long):
• SelfAwarePatterns
As usual, nothing mathy to say, except I scanned this one a little more carefully since you mentioned the Schrodinger equation. I get more interested in math when it’s about something.
• Wyrd Smythe
Heh! Part of me wants to protest that math is always about something, but I know what you mean. These two posts (one coming tomorrow) have a direct application to quantum mechanics. If you look at the Wiki page for the Schrödinger equation you don’t have to go very far down the page to find the first formula with e-to-the-i, and it reappears constantly throughout the page.
It really is a fundamental building block, and having a grasp of what it means opens a lot of doors in physics. (Not just QM, but optics and sound.) It also open a door to the Fourier transform, which turns out to be another basic building block.
FWIW, I have a major commitment to the idea of foundation knowledge. On a practical level, it’s what allowed me to change modes during a changing career. Good foundations make it easier to build new knowledge. An analogy I like is that, if one knows music and has already learned to play an instrument, learning a new instrument is just a matter of learning how to operate the new thing. The more instruments one plays, the easier new ones become.
Math is one of those foundations that tends to show up in many of my favorite places, so I’ve always wanted to know more, and — fortunately for me — I find math fascinating, so it’s easy and rewarding to pursue.
The downside of foundation knowledge — one reason I think a lot of people lack it — is that the ROI really sucks plus it takes forever to acquire. It’s exactly the reaction you’re expressing to math — what’s this for and what’s the point? It’s not very satisfying to say, “Well, maybe it will come in handy later,…” It just takes being fascinated by something I guess. (Or discipline I lack, since otherwise I’d know a lot more about chemistry and biology.)
I think there are a number of things with shallow learning curves — long stretches of effort with little payoff. Music was that way for me long, long ago. Then one reaches an inflection point in the curve and progress skyrockets. Suddenly the pieces makes sense and fit together.
At least that’s how a lot of things have worked for me. Long shallow curve with a steep slope after the light bulb finally goes on. (I’ve gotten the impression my curve is shallower than many in the beginning, but I make up for it later.)
• SelfAwarePatterns
I’m usually a foundations guy myself. In school, I was always at an disadvantage early on, because while others were just memorizing heuristics, I was trying to grok the concept at a deeper level. Of course, doing so always eventually translates into advantages later on.
I’m gradually listening my way through Lex Fridman’s interview of Scott Aarsonson (Aaronson links to the Youtube version from a recent blog post). One of the things he discusses is how understanding a few key concepts enables a lot in computer science. I found his take interesting.
The problem is that, with math, I’ve just never felt the bug, never been interested in it for its own sake. And when I attempt to dig too much into it, it awakens anxieties from all the struggles I had in school. Math books always seem to be written by someone who thinks it, in and of itself, is inherently beautiful. Well, I’m sorry to say it isn’t beautiful to me. I need to have it relentlessly mapped back to something useful, or it takes enormous discipline for me to continue.
I’ve struggled even getting through a chapter on linear algebra in a quantum computing book, despite knowing that I need it for the rest of the book, because the author switches into pure math mode, shoveling it without mentioning how any of it maps to the quantum subject. (The reading I did do wasn’t totally in vain. I noticed that some of the notation in quantum physics papers suddenly seemed less cryptic.)
• Wyrd Smythe
Sounds like we have similar approaches to learning something. I need to understand — I can’t learn by rules or facts. As you say, it pays off big time later.
I think a lot of people got disenchanted with math in school. It can be a very poorly taught subject, sometimes by teachers who don’t grasp or love it themselves. As you’ve found from books, just loving it isn’t enough — that doesn’t carry over to others. It takes connecting it or illuminating it in a compelling and engaging way. (That’s why I love that 3Blue1Brown channel — that’s what the guy does, and he’s really good at it.)
If you wanted to tackle linear algebra again, I highly recommend his Essence of linear algebra series. It’s 15 videos ranging from the 4-17 minute range; most of them kind of in the middle of that. A lot of working with the Schrödinger equation involves concepts from linear algebra, eigenvectors and eigenvalues key among them. The series involves a visual understanding of matrix transformations that makes them almost obvious.
(Tomorrow’s post involves a video of his that makes the Fourier transform almost obvious. It’s a really cool way to look at it, and this e-to-the-power-of-i business is literally at the heart of it. And as I may have mentioned, the Fourier transform underpins the HUP — position and momentum are conjugate Fourier transforms of each other.)
Exactly true about the math looking less cryptic! More and more that quantum math is starting to look like something I understand, although I’m a long ways from being able to work with it.
• Wyrd Smythe
I’m not kidding about synchronicity permeating my life. Today I wanted to curl up with a book, so I borrowed an immediately available Rex Stout book from the library. I didn’t check my collection of purchases closely enough — I already own the book.
So while reading it seems more and more familiar, and I’m trying to figure out if some TV show borrowed the plot or why this seems so familiar. I finally checked my collection more closely and, sure enough, it’s familiar because own it and I’ve read it. (Within the last couple of years, at that. As I’ve said many times, my memory for fiction plots is practically non-existent.)
But here’s the eerie part. One character is a mathematician and in one scene he writes out a formula for what he calls the “second approximation to the normal distribution”. That formula is in the book, and our friend e-to-the-power-of-something is part of that formula. Specifically, it was a Gaussian. (When I say it’s ubiquitous and appears everywhere, I’m not kidding about that, either.)
See this Wiki article for examples.
And what do you think?
You are commenting using your account. Log Out / Change )
Google photo
Twitter picture
Facebook photo
Connecting to %s
%d bloggers like this: |
6045053f4b8a38a4 | Erwin Schrödinger
Also found in: Dictionary, Thesaurus, Wikipedia.
Related to Erwin Schrödinger: Max Planck
Schrödinger, Erwin
Born Aug. 12,1887, in Vienna; died there Jan. 4, 1961; buried in Alpbach, Tírol. Austrian physicist. One of the founders of quantum mechanics.
Schrödinger received the Ph.D. degree from the University of Vienna in 1910. In 1911 he began working at the Physics Institute of the University of Vienna. In 1920 he was a professor at the Technische Hochschule in Stuttgart, and in 1921, a professor at the Technische Hochschule in Breslau (Wrocław). From 1921 to 1927 he was a professor at the Technische Hochschule in Zürich, and from 1927, a professor at the University of Berlin. From 1933 to 1935 he was a professor at Oxford University, and from 1936 to 1938 at the university in Graz. In 1938–39 he was a professor in Ghent. Beginning in 1940, he was first a professor at the Royal Academy in Dublin, and then director of the Institute for Advanced Studies, which he founded in Dublin. From 1956, he was a professor at the University of Vienna.
Schrödinger’s main works dealt with mathematical physics, the theory of relativity, atomic physics, and biophysics. His early studies were devoted to the theory of the crystal lattice and the creation (1920) of the mathematical theory of color, which became the basis for modern colorimetry. His most important contribution was the creation of wave mechanics (late 1925 and early 1926): proceeding from L. de Broglie’s hypothesis regarding the wave properties of matter, Schrödinger showed that the stationary states of atomic systems may be considered as the self-oscillations of the wave field that corresponds to the given system. Schrödinger discovered the fundamental equation of nonrelativistic quantum mechanics (the Schrödinger equation) and gave its solution for a number of particular problems; he provided a general method of applying the equation in perturbation theory. Schrödinger established the relationship between wave mechanics and the “matrix mechanics” of W. Heisenberg, M. Born, and P. Jordan and proved that they were physically identical. The mathematical formalism developed by Schrödinger and the wave function ψ introduced by him proved to be the most adequate mathematical apparatus of quantum mechanics and its applications.
Schrödinger received a Nobel Prize in 1933. He was a foreign member of the Academy of Sciences of the USSR (1934).
Abhandlungen zur Wellenmechanik, 2nd ed. Leipzig, 1928.
In Russian translation:
Izbrannye trudy po kvantovoi mekhanike. Moscow, 1976. (Series Klassiki nauki.)
Chto takoe zhizn’? S tochki zreniia ftziki, 2nd ed. Moscow, 1972.
|
6fffad28d21de508 | A+ A A-
Chemical Dynamics and Kinetic Modelling
In recent years, quantum chemistry has become truly accurate, with uncertainties comparable to typical uncertainties in many experiments. This should be leading to a complete transformation of chemistry, but so far it has not. A major cause of this failure has been that accurate quantum chemistry calculations of interesting observables (e.g., product mixture composition in organic synthesis and heterogeneous catalysis, kinetic isotope effects and rates of low-temperature reactions) are pretty complicated, and often require the efforts of several professional quantum chemists, each a specialist in a certain step of the calculation. We are working on next-generation algorithms in which many of these calculations will be routinely performed by the scientists interested in the problem, rather than by computational chemists who do not know the real physical system.
Quantum Mechanical Effects in Chemical Dynamics
chemical dynamics quantum mech effects
The inclusion of quantum mechanical nuclear effects (such as zero point energy and tunneling) in the calculation of chemical reaction rates is of particular importance. The role of these effects is well-known from textbooks: changes in zero point energy between the reactants and the transition state are responsible for the observed kinetic isotope effects in a wide variety of reactions, and tunneling can increase the rate of an activated proton transfer reaction at low temperatures by several orders of magnitude.
The exact inclusion of these effects in calculations of chemical reaction rates is one of the most challenging tasks of modern theoretical physical chemistry, because even assuming that a reliable electronic potential energy surface (PES) is available the computational effort that is needed to solve the reactive scattering Schrödinger equation increases exponentially with the number of atoms in the reaction. We are working on developing approximate methods to overcome this problem and to provide a practical way to include quantum mechanical effects in reaction rate calculations.
Advanced Methods for Discovery of Elementary Chemical Reactions and Prediction of Chemical Reaction Networks
chemical dynamics advanced algorithmsWe are working on the development of advanced automated algorithms for discovering important new chemical reactions. The problem of finding unexpected reactions is very challenging because it scales exponentially with the number of atoms in the reactant(s).
The key to significantly improving the scaling is to use evolutionary algorithms which use all the information that is known about the Potential Energy Surface (PES) and chemical bonds to improve the probability that the next search step will be near a saddle point. The algorithm will then use the computed energy, gradients and Hessian at that search point to “learn” more about the PES landscape and provide better informed decisions about which points to search next.
Prognosis of Site-Selective Chemical Reactivity for Organic Molecules
prognosisofsiteselectivechemicalThe goal of this project is to create a collection of fast and reliable algorithms to make a prognosis of the reactivity of organic molecules from their structure employing quantum chemistry calculations. Traditionally, computational analysis of possible reaction pathways requires working with large datasets.
There are some general-purpose workflow engines that allow users to organize and schedule different tasks using a graphical user interface. However, quantum chemistry calculations yield results, which cannot be used by most organic chemists directly. The output of the calculations must be translated into language or formalism to point out their chemical relevance. Although there are billions of reactions involved, only a limited number of factors or reactivity principles exist, which apply to the vast majority of chemical reactions. For example, factors such as “Acidity”, “Basicity”, “Lewis basicity”, are important reactivity descriptors for the largest number of reactions. The main idea of this project approach is to automate analysis of proposed reaction pathways by the calculating their key parameters. We strongly believe that the project will help chemists to understand various reaction mechanisms and to discover new reactions.
Algorithms for Optimization of Heterogeneous Catalysts
algorithmsforoptimazationofheteroThe development of efficient algorithms of computational catalytic design with minimal human intervention and optimal computational expenses represents one of the main challenges of present-day theoretical chemistry and physics. We are working on a novel approach for computational screening heterogeneous catalysts with variable-composition simulations of the material. At the core of our algorithm is an evolutionary algorithm which incorporates “learning from history” done through selection of the low-energy structures and high-catalytic activity to become parents of the new generation. Combining it with automated algorithms for screening catalytic activity of heterogeneous catalysts provides systematic and exhaustive tools for screening a set of chemically varied complex compounds. With the proper choice of the descriptor of catalytic activity, all relevant parameters can be automatically analyzed and the most promising materials identified. The method has several innovative characteristics that allows for its application to probe complex materials as it is automated and requires minimum human intervention. We are working on several interesting applications.
Selected Publications
gender equality |
15a23ede0413a24f | Try Our Apps
The Best Internet Slang
wave mechanics
noun, Physics.
a form of quantum mechanics formulated in terms of a wave equation, as the Schrödinger equation.
Compare matrix mechanics.
Origin of wave mechanics
First recorded in 1925-30 Unabridged
Cite This Source
British Dictionary definitions for wave mechanics
wave mechanics
(functioning as sing) (physics) the formulation of quantum mechanics in which the behaviour of systems, such as atoms, is described in terms of their wave functions
Collins English Dictionary - Complete & Unabridged 2012 Digital Edition
Cite This Source
wave mechanics in Science
wave mechanics
A theory that interprets the behavior of matter (especially subatomic or other small particles) in terms of the properties of waves. A broad range of physical phenomena, from the propagation of earthquakes to the structures of electron orbitals in atoms, have been understood using wave mechanics. Quantum mechanics uses a form of wave mechanics and involves wave equations such as Schrödinger's equation to capture both the wavelike and particlelike properties of matter.
The American Heritage® Science Dictionary
Cite This Source
Word of the Day
Difficulty index for wave mechanics
Few English speakers likely know this word
Word Value for wave
Scrabble Words With Friends
Nearby words for wave mechanics |
e78207609223abfc | Take the 2-minute tour ×
I am, in full generality, confused about perturbation theory in quantum mechanics.
My textbook and Wikipedia have the same general approach to explaining it: given some Hamiltonian $H=H^{(0)} + H^\prime$, we can break down each eigenfunction $\left\vert n \right\rangle$ into a power series in an invented constant $\lambda$ and the eigenenergies likewise:
$\left\vert n \right\rangle = \sum\lambda^i\left\vert n^{(i)}\right\rangle$
$E_n = \sum \lambda^i E_n^{(i)}$
$\left(H^{(0)} + H^\prime\right) \left(\left\vert n^{(0)}\right\rangle + \lambda \left\vert n^{(0)}\right\rangle + \cdots \right) = \left(E^{(0)}+ \lambda E^{(1)} + \cdots\right) \left(\left\vert n^{(0)}\right\rangle + \lambda \left\vert n^{(1)}\right\rangle + \cdots \right)$
... and then they take $\lambda\to1$.
My question is - what's the logic here? Where did this come from? What purpose does $\lambda$ serve, given that the actual size of each contribution will be determined by the $E^{(i)}$'s and $\left\vert n^{(i)}\right\rangle$'s?
share|improve this question
4 Answers 4
up vote 4 down vote accepted
Firstly, I refer you to Prof. Binney's textbook (see below) which covers perturbation theory in quantum mechanics in explicit detail. When doing perturbation theory, we perturb the Hamiltonian $H^{(0)}$ of a system which has been solved analytically, i.e. the eigenstates and eigenvalues are known. Specifically,
$$H^{(0)}\to H^{(0)} + \lambda H'$$
where $H'$ is the perturbation, and $\lambda$ is a coupling constant. Why include such a constant? As Binney says, it provides us a 'slider' which when gradually increased to unity increases the strength of the perturbation. When $\lambda = 0$, the system is unperturbed, and when $\lambda=1$ we 'fully perturb the system.'
Introducing a coupling constant $\lambda$ also provides us with a manner to refer to a particular order of perturbation theory; $\mathcal{O}(\lambda)$ is first order, $\mathcal{O}(\lambda^2)$ is second order, etc. As we increase in powers of the coupling constant, we hope the corrections decrease. (The series may not even converge.)
A caveat: the demand that a coupling $\lambda \ll1$ may not be sufficient or correct to ensure that the coupling is small; this is only the case when the coupling is dimensionless. For example, if the coupling, in units where $c=\hbar=1$, had a mass (or equivalently energy) dimension of $+1$, then to ensure a weak coupling we would need to demand, $\lambda/E \ll 1$, where $E$ had dimensions of energy. Such couplings are known as relevant as at low energies they are high, and at high energies the coupling is low.
share|improve this answer
Which textbook is this? – linkhyrule5 Mar 26 at 11:15
A free PDF of the book is provided by Binney at: www-thphys.physics.ox.ac.uk/people/JamesBinney/QBhome.htm – JamalS Mar 26 at 11:16
For easy reference, can you link that in the question? – Kvothe Mar 26 at 12:55
@Kvothe: Certainly. – JamalS Mar 26 at 12:55
As far as I understand, the logic behind this is the following.
We write down the Hamiltonian for the perturbed system as the Hamiltonian for the unperturbed one plus some perturbation \begin{equation} H = H^{(0)} + H' \, . \end{equation}
Assuming that the perturbation is applied gradually we then introduce $H(\lambda)$ operator \begin{equation} H(\lambda) = H^{(0)} + \lambda H' \, , \end{equation} which is identical to $H^{(0)}$ when $\lambda = 0$ and is identical to $H$ when $\lambda = 1$, thus giving a continuous change from the unperturbed to the perturbed system.
We finally assume that the time-independent Schrödinger equation holds for all $\lambda \in [0, 1]$ \begin{equation} H(\lambda) |n(\lambda)\rangle = E(\lambda) |n(\lambda)\rangle \, , \end{equation} and we introduce the power series expansions for $|n(\lambda)\rangle$ and $E(\lambda)$ you mentioned.
Lately we often set $\lambda$ equal to 1 if we are interested in the fully perturbed system.
share|improve this answer
But what does a "first-order-approximation" really mean in this case, then? Since $\lambda \in [0,1]$, there's no particular reason why a higher-order term should be smaller than a lower-order one that I can see... – linkhyrule5 Mar 26 at 10:50
First-order approximation to energy (state vector) is the coefficient of the first power of $\lambda$ in the expansions of energy (state vector) in powers of $\lambda$, i.e. $E^{(1)}$ ($n^{(1)}$) in the notation you used. – Wildcat Mar 26 at 10:55
Right, but ... I mean, technically it is indeed an approximation in the first order of $\lambda$; what I'm asking is, what makes it necessarily more accurate than the third-order terms (alone)? What makes the contributions of higher-order terms less significant, once you take $\lambda\to1$? – linkhyrule5 Mar 26 at 10:56
@linkhyrule5 hmmm... the higher-order terms are not less significant. Where did you get that? Perturbative series are not even guaranteed to converge. – Wildcat Mar 26 at 11:01
@linkhyrule5 well, you can successively calculate first-order corrections, second-order corrections, and so forth and check the convergence. If perturbative series do converge, you can safely use the theory, but if not, then you are in a trouble. – Wildcat Mar 26 at 11:16
The point of introducing the coupling constant $\lambda$ is that the perturbation series in $\lambda$ might not have radius of convergence $\geq 1$, i.e. the power series might not be convergent at $\lambda=1$, and hence that it might not make sense to substitute $\lambda=1$. In fact, that's typically the case.
Nevertheless, a divergent series still make sense as a formal power series if we have a free parameter $\lambda$. (One may think of $\lambda$ as a convenient bookkeeping device, which keeps track of the perturbative order.) Of course, a formal power series is of limited use if we don't know how to sum it.
However, a divergent formal power series may in turn be an asymptotic series. If we are granted that the system makes sense non-perturbatively (so that we can talk about the correct result), it might still be the case that the first few terms of the perturbative power series expansion in $\lambda$ may constitute an excellent approximation, even if the full perturbation series in $\lambda$ is divergent.
share|improve this answer
In case $H'$ is small in some sense wrt. $H_0$ one usually writes $$H(\lambda)=H_0+{\lambda}H'.$$ If the eigenvalues of $H_0$ are known one then obtains a perturbation series expressing the eigenvalues and eigenvectors of $H$ in terms of those of $H_0$. $\lambda$ is primarily introduced to keep track of terms.
Complications occur in case an eigenvalue of $H_0$ is degenerate and if it is continuum-embedded.
There is a vast literature about this matter. In the volumes of Reed and Simon you can find much about the mathematical background.
share|improve this answer
Your Answer
|
b6c8482729ec8363 | What's new in
Note: Newest contributions are at the top!
Year 2015
Is non-associative physics and language possible only in many-sheeted space-time?
In Thinking Allowed Original there was very interesting link added by Ulla about the possibility of non- associative quantum mechanics.
Also I have been forced to consider this possibility.
1. The 8-D imbedding space of TGD has octonionic tangent space structure and octonions are non-associative. Octonionic quantum theory however has serious mathematical difficulties since the operators of Hilbert space are by definition associative. The representation of say octonionic multiplication table by matrices is possible but is not faithful since it misses the associativity. More concretely, so called associators associated with triplets of representation matrices vanish. One should somehow transcend the standard quantum theory if one wants non-associative physics.
2. Associativity therefore seems to be fundamental in quantum theory as we understand it recently. Associativity is indeed a fundamental and highly non-trivial constraint on the correlation functions of conformal field theories. In TGD framework classical physics is an exact part of quantum theory so that quantum classical correspondence suggests that associativity could play a highly non-trivial role in classical TGD.
The conjecture is that associativity requirement fixes the dynamics of space-time sheets - preferred extremals of Kähler action - more or less uniquely. One can endow the tangent space of 8-D imbedding H=M4× CP2 space at given point with octonionic structure: the 8 tangent vectors of the tangent space basis obey octonionic multiplication table.
Space-time realized as n-D surface in 8-D H must be either associative or co-associative: this depending on whether the tangent space basis or normal space basis is associative. The maximal dimension of space-time surface is predicted to be the observed dimension D=4 and tangent space or normal space allows a quaternionic basis.
3. There are also other conjectures (see this) about what the preferred extremals of Kähler action defining space-time surfaces are.
1. A very general conjecture states that strong form of holography allows to determine space-time surfaces from the knowledge of partonic 2-surfaces and 2-D string world sheets.
2. Second conjecture involves quaternion analyticity and generalization of complex structure to quaternionic structure involving generalization of Cauchy-Riemann conditions.
3. M8-M4× CP2 duality stating that space-time surfaces can be regarded as surface in either M8 or M4× CP2 is a further conjecture.
4. Twistorial considerations select M4× CP2 as a completely unique choice since M4 and CP2 are the only spaces allowing twistor space with Kähler structure. The conjecture is that preferred extremals can be identified base spaces of 6-D sub-manifolds of the product CP3× SU(3)/U(1)× U(1) of twistor spaces of M4 and CP2 having the property that it makes sense to speak about induced twistor structure.
The "super(optimistic)" conjecture is that all these conjectures are equivalent.
One must be of course very cautious in order to not draw too strong conclusions. Above one considers quantum physics at the level of single space-time sheet. What about many-sheeted space-time? Could non-associative physics emerge in TGD via many-sheeted space-time? To answer this question one must first understand what non-associativity means.
1. In non-associative situation brackets matter. A(BC) is different from (AB)C. From schooldays or at least from the first year calculus course one recalls the algorithm: when calculating the expression involving brackets one first finds the innermost brackets and calculates what is inside them, then proceed to the next innermost brackets, etc... In computer programs the realization of the command sequences involving brackets is called parsing and compilers perform it. Parsing involves decomposition of program to modules calling modules calling.... Quite generally, the analysis of linguistic expressions involves parsing. Bells start to ring as one realizes that parsings form a hierarchy as also do the space-time sheets!
2. More concretely, there is hierarchy of brackets and there is also a hierarchy of space-time sheets, perhaps labelled by p-adic primes. B and C inside brackets form (BC), something analogous to a bound state or chemical compound. In TGD this something could correspond to a "glueing" space-time sheets B and C at the same larger space-time sheet. More concretely, (BC) could correspond to braided pair of flux tubes B and C inside larger flux tube, whose presence is expressed as brackets (..). As one forms A(BC) one puts flux tube A and flux tube (BC) containing braided flux tubes B and C inside larger flux tube. For (AB)C flux one puts tube (AB) containing braided flux tubes A and B and tube C inside larger flux tube. The outcomes are obviously different. A
3. Non-associativity in this sense would be a key signature of many-sheeted space-time. It should show itself in say molecular chemistry, where putting on same sheet could mean formation of chemical compound AB from A and B. Another highly interesting possibility is hierarchy of braids formed from flux tubes: braids can form braids, which in turn can form braids,... Flux tubes inside flux tubes inside... Maybe this more refined breaking of associativity could underly the possible non-associativity of biochemistry: biomolecules looking exactly the same would differ in subtle manner.
4. What about quantum theory level? Non-associativity at the level of quantum theory could correspond to the breaking of associativity for the correlation functions of n fields if the fields are not associated with the same space-time sheet but to space-time sheets labelled by different p-adic primes. At QFT limit of TGD giving standard model and GRT the sheets are lumped together to single piece of Minkowski space and all physical effects making possible non-associativity in the proposed sense are lost. Language would be thus possible only in TGD Universe! My nasty alter ego wants to say now something - my sincere apologies: in superstring Universe communication of at least TGD has indeed turned out to be impossible! If superstringy universe allows communications at all, they must be uni-directional!
Non-associativity is an essentially linguistic phenomenon and relates therefore to cognition. p-Adic physics labelled by p-adic primes fusing with real physics to form adelic physics are identified as the physics of cognition in TGD framework.
1. Could many-sheeted space-time of TGD provides the geometric realization of language like structures? Could sentences and more complex structures have many-sheeted space-time structures as geometrical correlates? p-Adic physics as physics of cognition would suggests that p-adic primes label the sheets in the parsing hierarchy. Could bio-chemistry with hierarchy of magnetic flux tubes added, realize the parsing hierarchies?
2. DNA is a language and might provide a key example about parsing hierarchy. The mystery is that human DNA and DNAs of most simplest creatures do not differ much. Our cousins have almost identical DNA with us. Why do we differ so much? Could the number of parsing levels be the reason- p-adic primes labelling space-time sheets? Could our DNA language be much more structured than that of our cousins. At the level of concrete language the linguistic expressions of our cousin are indeed simple signals rather than extremely complex sentences of old-fashioned German professor forming a single lecture each. Could these parsing hierarchies realize themselves as braiding hierarchies of magnetic flux tubes physically and more abstractly as the parsing hierarchies of social structures. Indeed, I have proposed that the presence of collective levels of consciousness having hierarchy of magnetic bodies as a space-time correlates distinguishes us from our cousins so that this explanation is consistent with more quantitative one relying on language.
3. I have also proposed that intronic portion of DNA is crucial for understanding why we differ so much from our cousins (see this and this). How does this view relate to the above proposal? In the simplest model for DNA as topological quantum computer introns would be connected by flux tubes to the lipids of nuclear and cell membranes. This would make possible topological quantum computations with the braiding of flux tubes defining the topological quantum computer program.
Ordinary computer programs rely on computer language. Same should be true about quantum computer programs realized as braidings. Now the hierarchical structure of parsings would correspond to that of braidings: one would have braids, braids of braids, etc... This kind of structure is also directly visible as the multiply coiled structure of DNA. The braids beginning from the intronic portion of DNA would form braided flux tubes inside larger braided flux tubes inside.... defining the parsing of the topological quantum computer program.
The higher the number of parsing levels, the higher the position in the evolutionary hierarchy. Each braiding would define one particular fundamental program module and taking this kind of braided flux tubes and braiding them would give a program calling these programs as sub-programs.
4. The phonemes of language would have no meaning to us (at our level of self hierarchy) but the words formed by phonemes and involving at basic level the braiding of "phoneme flux tubes" would have. Sentences and their substructures would in turn involve braiding of "word flux tubes". Spoken language would correspond to a temporal sequence of braidings of flux tubes at various hierarchy levels.
5. The difference between us and our cousins (or other organisms) would not be at the level of visible DNA but at the level of magnetic body. Magnetic bodies would serve as correlates also for social structures and associated collective levels of consciousness. The degree of braiding would define the level in the evolutionary hierarchy. This is of course the basic vision of TGD inspired quantum biology and quantum bio-chemistry in which the double formed by organism and environment is completed to a triple by adding the magnetic body.
p-Adic hierarchy is not the only hierarchy in TGD Universe: there is also the hierarchy of Planck constants heff=n× h giving rise to a hierarchy of intelligences. What is the relationship between these hierarchies?
1. I have proposed that speech and music are fundamental aspects of conscious intelligence and that DNA realizes what I call bio-harmonies in quite concrete sense (see this and this): DNA codons would correspond to 3-chords. DNA would both talk and sing. Both language and music are highly structured. Could the relation of heff hierarchy to language be same as the relation of music to speech?
2. Are both musical and linguistic parsing hierarchies present? Are they somehow dual? What does parsing mean for music? How musical sounds could combine to form the analog of two braided strand? Depending on situation we hear music both as separate notes and as chords as separate notes fuse in our mind to a larger unit like phonemes fuse to a word.
Could chords played by single instrument correspond to braidings of flux tubes at the same level? Could the duality between linguistic and musical intelligence (analogous to that between function and its Fourier transform) be very concrete and detailed and reflect itself also as the possibility to interpret DNA codons both as three letter words and as 3-chords (see this)?
See the new chapter Is Non-Associative Physics and Language Possible Only in Many-Sheeted Space-Time?.
Does also low Tc superconductivity rely on magnetic flux tubes in TGD Universe?
Discussions with Hans Geesink have inspired sharpening of the TGD view about bio-superconductivity (bio-SC), high Tc superconductivity (SC) and relate the picture to standard descriptions in a more detailed manner. In fact, also standard low temperature super-conductivity modelled using BCS theory could be based on the same universal mechanism involving pairs of magnetic flux tubes possibly forming flattened square like closed flux tubes and members of Cooper pairs residing at them.
A brief summary about strengths and weakness of BCS theory
First I try to summarise some basics about BCS theory.
1. BCS theory is successful in 3-D superconductors and explains a lot: supracurrent, diamagnetism, and thermodynamics of the superconducting state, and it has correlated many experimental data in terms of a few basic parameters.
2. BCS theory has also failures.
1. The dependence on crystal structure and chemistry is not well-understood: it is not possible to predict, which materials are super-conducting and which are not.
2. High-Tc SC is not understood. Antiferromagnetism is known to be important. The quite recent experiment demonstrates conductivity- maybe even conductivity - in topological insulator in presence of magnetic field (see this). This is compete paradox and suggests in TGD framework that the flux tubes of external magnetic field serve as the wires (see previous posting).
3. BCS model based on crystalline long range order and k-space (Fermi sphere). BCS-difficult materials have short range structural order: amorphous alloys, SC metal particles 0-down to 50 Angstroms (lipid layer of cell membrane) transition metals, alloys, compounds. Real space description rather than k-space description based on crystalline order seems to be more natural. Could it be that the description of electrons of Cooper pair is not correct? If so, k-space and Fermi sphere would be only appropriate description of ordinary electrons needed to model the transition to to super-conductivity? Super-conducting electrons could require different description.
4. Local chemical bonding/real molecular description has been proposed. This is of course very natural in standard physics framework since the standard view about magnetic fields does not provide any ideas about Cooper pairing and magnetic fields are only a nuisance rather than something making SC possible. In TGD framework the situation is different.
TGD based view about SC
TGD proposal for high Tc SC and bio-SC relies on many-sheeted space-time and TGD based view about dark matter as heff=n× h phase of ordinary matter emerging at quantum criticality (see this).
Pairs of dark magnetic flux tubes would be the wires carrying dark Cooper pairs with members of the pair at the tubes of the pair. If the members of flux tube pair carry opposite B:s, Cooper pairs have spin 0. The magnetic interaction energy with the flux tube is what determines the critical temperature. High Tc superconductivity, in particular the presence of two critical temperatures can be understood. The role of anti-ferromagnetism can be understood.
TGD model is clearly x-space model: dark flux tubes are the x-space concept. Momentum space and the notion of Fermi sphere are certainly useful in understanding the transformation ordinary lattice electrons to dark electrons at flux tubes but the super conducting electron pairs at flux tubes would have different description.
Now come the heretic questions.
1. Do the crystal structure and chemistry define the (only) fundamental parameters in SC? Could the notion of magnetic body - which of course can correlate with crystal structure and chemistry - equally important or even more important notion?
2. Could also ordinary BCS SC be based on magnetic flux tubes? Is the value of heff=n× h only considerably smaller so that low temperatures are required since energy scale is cyclotron energy scale given by E= heff=n× fc, fc = eB/me. High Tc SC would only have larger heff and bio-superconductivity even larger heff!
3. Could it be that also in low Tc SC there are dark flux tube pairs carrying dark magnetic fields in opposite directions and Cooper pairs flow along these pairs? The pairs could actually form closed loops: kind of flattened O:s or flattened squares.
One must be able to understand Meissner effect. Why dark SC would prevent the penetration of the ordinary magnetic field inside superconductor?
1. Could Bext actually penetrate SC at its own space-time sheet. Could opposite field Bind at its own space-time sheet effectively interfere it to zero? In TGD this would mean generation of space-time sheet with Bind=-Bext so that test particle experiences vanishing B. This is obviously new. Fields do not superpose: only the effects caused by them superpose.
Could dark or ordinary flux tube pairs carrying Bind be created such that the first flux tube portion Bind in the interior cancels the effect of Bext on charge carriers. The return flux of the closed flux tube of Bind would run outside SC and amplify the detected field Bext outside SC. Just as observed.
2. What happens, when Bext penetrates to SC? heff→ h must take place for dark flux tubes whose cross-sectional area and perhaps also length scale down by heff and field strength increases by heff. If also the flux tubes of Bind are dark they would reduce in size in the transition heff→ h by 1/heff factor and would remain inside SC! Bext would not be screened anymore inside superconductor and amplified outside it! The critical value of Bext would mean criticality for this heff → h phase transition.
3. Why and how the phase transition destroying SC takes place? Is it energetically impossible to build too strong Bind? So that effective field Beff=Bdark+ Bind+Bext experienced by electrons is reduced so that also the binding energy of Cooper pair is reduced and it becomes thermally unstable. This in turn would mean that Cooper pairs generating the dark Bdark disappear and also Bdark disappears. SC disappears.
See the chapter Quantum model for bio-superconductivity: II
What music can teach about consciousness?
Recently I have been reading the of Oliver Sacks titled "Musicophilia" dealing with various aspects of music experience. Humans as a species indeed have a very special relation to music. But is it really genuine characteristic of human consciousness? One can even ask whether consciousness emerges only in higher species or whether it could be in some form a characteric of any living or even inanimate system? I am not the only quantum consciousness theorists forced to consider panpsychism in some form. In this framework one can ask whether music like aspects of conscious experience could be universal and only especially highly developed in humans?
In this chapter I restrict the consideration to those stories of Musicophilia, which I find of special interest from the point of view of TGD inspired theory of consciousness. The outcome is a more precise formulation for the general TGD inspired vision about brain based on basic ideas of quantum TGD.
Zero Energy Ontology (ZEO) implies a new view about the relation between geometric and experienced time and allowing to generalize quantum measurement theory to a theory of consciousness.
Strong form of holography implies the analog of AdS/CFT duality between 2-D representation of physics based on string world sheets and partonic 2-surfaces and 4-D space-time representations. This duality is not tautology and this inspires the idea that these two representations correspond to two modes for consciousness motivating "Left brain talks, right brain sings" metaphor.
1. Language and music could relate to two dual representations of conscious information - local and holistic, cognitive and sensory. Discretization of function/its Fourier transform as a collection of its values at discrete set values of time/frequencies would correspond local/holistic approximations of function. In principle any conscious entity - self- could utilize these two representational modes at appropriate quantum criticality.
2. The holistic "musical consciousness" is assignable to right brain hemisphere and according to the stories of Sacks seems to characterized by episodal sensory memories. TGD based view about memories relies on ZEO: the memories would be mental images with sensory input from geometric past, genuine sensory experiences of time reversed sub-selves! This picture simplifies considerably and one can see all memories - sensory, cognitive, or emotional - as analogs of phantom pain, which would be also a sensory memory and even more a genuine sensory experience. It is even possible that our biological bodies are used by two selves: right brain hemisphere sleeps when we are awake and vice versa. Even the experiences of epileptics about having double consciousness could be understood.
3. A more concrete realization of "Left brain talks, right brain sings" metaphor relies on the assumption that "magneto-anatomy" is universal. Only the "magneto-physiology" characterized by the values of heff characterizing quantum criticality and defining a kind of intelligence quotient dictating the span of long term memory and planned action varies.
heff would differ for the magnetic bodies of various brain areas, and the spectrum of heff for right and left brain would differ and characterize their specializations. For instance, the value of heff would be large (small) for the cognitive areas of left (right) brain and small (large) for some higher sensory areas of right (left) brain. Magnetic bodies form a fractal hierarchy and one can characterize even individual cells and neurons by the value of heff associated with them. The spectrum for heff allows also to distinguish between members of the same species since it defines the skill profile. This obviously goes far beyond the genetic determinism.
See the chapter What music can teach about consciousness? or the article What music can teach about consciousness?
A new control mechanism of TGD inspired quantum biology
The idea that TGD Universe is quantum critical, is the key idea of quantum TGD and fixes the theory more or less uniquely since the only coupling constant parameter of the theory - Kähler coupling strength - is analogous to critical temperature. Also more than one basic parameters are in principle possible - maximal quantum criticality fixes the values of all of them - but it seems that only Kähler coupling strength is needed. TGD Universe is a quantum critical fractal: like a ball at the top of hill at the top of hill at.... Quantum criticality allows to avoid the fine tuning problems plaguing as a rule various unified theories.
Quantum criticality
The meaning of quantum criticality at the level of dynamics has become only gradually clearer. The development of several apparently independent ideas generated for about decade ago have led to the realization that quantum criticality is behind all of them. Behind quantum criticality are in turn number theoretic vision and strong forms of general coordinate invariance and holography.
1. The hierarchy of Planck constants defining hierarchy of dark phases of ordinary matter corresponds to a hierarchy of quantum criticalities assignable to a fractal hierarchy of sub-algebras of super-symplectic algebra for which conformal weights are n-ples of those for the entire algebra, n corresponds to the value of effective Planck constant heff/h=n. These algebras are isomorphic to the full algebra and act as gauge conformal algebras so that a broken super-conformal invariance is in question.
2. Quantum criticality in turn reduces to the number theoretic vision about strong form of holography. String world sheets carrying fermions and partonic 2-surfaces are the basic objects as far as pure quantum description is considered. Also space-time picture is needed in order to test the theory since quantum measurements always involve also the classical physics, which in TGD is an exact part of quantum theory.
Space-time surfaces are continuations of collections of string world sheets and partonic 2-surfaces to preferred extremals of Kähler action for which Noether charges in the sub-algebra of super-symplectic algebra vanish. This condition is the counterpart for the reduction of the 2-D criticality to conformal invariance. This eliminates huge number of degrees of freedom and makes the strong form of holography possible.
3. The hierarchy of algebraic extensions of rationals defines the values of the parameters characterizing the 2-surfaces, and one obtains a number theoretical realization of an evolutionary hierarchy. One can also algebraically continue the space-time surfaces to various number fields - reals and the algebraic extensions of p-adic number fields. Physics becomes adelic. p-Adic sectors serve as correlates for cognition and imagination. One can indeed have string world sheets and partonic 2-surfaces, which can be algebraically continued to preferred extremals in p-adic sectors by utilizing p-adic pseudo constants giving huge flexibility. If this is not possible in the real sector, figment of imagination is in question! It can also happen that only part of real space-time surface can be generated: this might relate to the fact that imaginations can be seen as partially realized motor actions and sensory perceptions.
Quantum criticality and TGD inspired quantum biology
In TGD inspired quantum biology quantum criticality is in crucial role. First some background.
1. Quantum measurement theory as a theory of consciousness is formulated in zero energy ontology (ZEO) and defines an important aspect of quantum criticality. Strong form of NMP states that the negentropy gain in the state function reduction at either boundary of causal diamond (CD) is maximal. Weak form of NMP allows also quantum jumps for which negentropic entanglement is not generated: this makes possible ethics (good and evil) and morally responsible free will: good means basically increase of negentropy resources.
2. Self corresponds to a sequence state function reductions to the same boundary of CD and heff does not change during that period. The increase of heff (and thus evolution!) tends to occur spontaneously, and can be assigned to the state function reduction to the opposite boundary of CD in zero energy ontology (ZEO). The reduction to the opposite boundary means death of self and living matter is fighting in order to avoid this even. To me the only manner to make sense about basic myth of Christianity is that death of self generates negentropy.
3. Metabolism provides negentropy resources for self and hopefully prevents NMP to force the fatal reduction to the opposite boundary of CD. Also homeostasis does the same. In this process self makes possible evolution of sub-selves (mental images dying and re-incarnating) state function by state function reduction so that the negentropic resources of the Universe increase.
A new mechanism of quantum criticality
Consider now the mechanisms of quantum criticality. The TGD based model (see this) for the recent paradoxical looking finding (see this) that topological insulators can behave like conductors in external magnetic field led to a discovery of a highly interesting mechanism of criticality, which could play a key role in living matter.
1. The key observation is that magnetic field is present. In TGD framework the obvious guess is that its flux tubes carry dark electrons giving rise to anomalous currents running in about million times longer time scales and with velocity, which is about million times higher than expected. Also supra-currents can be considered.
The currents can be formed of the cyclotron energies of electrons are such that they correspond to energies near the surface of the Fermi sphere: recall that Fermi energy for electrons is determined by the density of conduction electrons and is about 1 eV. Interestingly, this energy is at the lower end of bio-photon energy spectrum. In the field of 10 Tesla the cyclotron energy of electron is .1 mV so that the integer characterizing cyclotron orbit must be n≅ 105 if conduction electron is to be transferred to the cyclotron orbit.
2. The assumption is that external magnetic field is realized as flux tubes of fixed radius, which correspond to space-time quanta in TGD framework. As the intensity of magnetic field is varied, one observes so called de Haas-van Alphen effect used to deduce the shape of the Fermi sphere: magnetization and some other observables vary periodically as function of 1/B.
This can be understood in the following manner. As B increases, cyclotron orbits contract. For certain increments of 1/B n+1:th orbit is contracted to n:th orbit so that the sets of the orbits are identical for the values of 1/B, which appear periodically. This causes the periodic oscillation of say magnetization.
3. For some critical values of the magnetic field strength a new orbit emerges at the boundary of the flux tube. If the energy of this orbit is in the vicinity of Fermi surface, an electron can be transferred to the new orbit. This situation is clearly quantum critical.
If the quantum criticality hypothesis holds true, heff/h=n dark electron phase can generated for the critical value of magnetic fields. This would give rise to the anomalous conductivity perhaps involving spin current due to the spontaneous magnetization of the dark electrons at the flux tube. Even super-conductivity based on the formation of parallel flux tube pairs with either opposite or parallel directions of the magnetic flux such that the members of the pair are at parallel flux tubes, can be considered and I have proposed this a mechanism of bio-superconductivity and also high Tc super-conductivity
A new mechanism of quantum criticality and bio-control
The quantum criticality of the process in which new electron orbit emerges near Fermi surface suggests a new mechanism of quantum bio-control by generation of super currents or its reversal.
1. In TGD inspired quantum biology magnetic body uses biological body as motor instrument and sensory receptor and EEG and its fractal variants with dark photons with frequencies in EEG range but energy E=hefff in the range of bio-photon energies make the necessary signalling possible.
2. Flux tubes can become braided and this makes possible quantum computation like processes. Also so called 2-braids - defined by knotted 2-surfaces imbedded in 4-D space-time surface - are possible for the string world sheets defined by flux tubes identified to be infinitely thin, are possible. As a matter fact, also genuine string world sheets accompany the flux tubes. 2-braids and knots are purely TGD based phenomenon and not possible in superstring theory or M-theory.
3. It is natural to speak about motor actions of the magnetic body. It is assumed that the flux tubes of the magnetic body connect biomolecules to form a kind of Indra's web explaining the gel like character of living matter. heff reducing phase transitions contract flux tubes connecting biomolecules so that they can find each other by this process and bio-catalysis becomes possible. This explains the mysterious looking ability of bio-molecules to find each other in the dense molecular soup. In fact the dark matter part is far from being soup! The hierarchy of Planck constants and heff=hgr hypothesis imply that dark variants of various particles with magnetic moment are neatly at their own flux tubes like books in shelf.
Reconnection of the U-shaped flux tubes emanating from two subsystems generates a flux tube pair between them and gives rise to supracurrents flowing between them. Also cyclotron radiation propagating along flux tubes and inducing resonant transitions is present. This would be the fundamental mechanism of attention.
4. I have proposed that the variation of the thickness of the flux tubes could serve as a control mechanism since it induces a variation of cyclotron frequencies allowing to get in resonance or out of it. For instance, two molecules could get in flux tube contact when the cyclotron frequencies are identical and this can be achieved if they are able to vary their flux tube thickness. The molecules of immune system are masters in identifying alien molecules and the underlying mechanism could be based on cyclotron frequency spectrum and molecular attention. This would be also the mechanism behind water memory and homeopathy (see this), which still is regarded as a taboo by mainstreamers.
5. Finally comes the promised new mechanism of bio-control! The variation of the magnetic field induced by that of flux tube thickness allows also to control whether there is quantum criticality for the generation of dark electron supra currents of electrons. The Fermi energy of the conduction electrons at the top of Fermi sphere is the key quantity and dictated by the density of these electrons. This allows to estimate the order of magnitude of the integers N characterizing cyclotron energy for ordinary Planck constant and the maximal value of heff/h=n cannot be larger than N.
See the chapter Quantum Model for Bio-Superconductivity: II or the article A new control mechanism of TGD inspired quantum biology
The group of Suchitra Sebastian has discovered very unconventional condensed matter system which seems to be simultaneously both insulator and conductor of electricity in presence of magnetic field. Science article is entitled "Unconventional Fermi surface in an insulating state". There is also a popular article "Paradoxical Crystal Baffles Physicists" in Quanta Magazine summarizing the findings. I learned about the finding first from the blog posting of Lubos (I want to make absolutely clear that I do not share the racistic attitudes of Lubos towards Greeks. I find the discussions between Lubos and same minded blog visitor barbarians about the situation in Greece disgusting).
The crystal studied at superlow temperatures was Samarium hexaboride - briefly SmB6. The high resistance implies that electron cannot move more that one atom's width in any direction. Sebastian et al however observed electrons traversing over a distance of millions of atoms- a distance of orde 10-4 m, the size of a large neuron. So high mobility is expected only in conductors. SmB6 is neither metal or insulator or is both of them! The finding is described by Sebastian as a "big schock and as a "magnificent paradox by condensed matter theorists Jan Zaanen. Theoreticians have started to make guesses about what might be involved but according to Zaanen there is no even remotely credible hypothesis has appeared yet.
On basis of its electronic structure SmB6 should be a conductor of electricity and it indeed is at room temperature: the average number conduction electrons per SmB6 is one half. At low temperatures situation however changes. At low temperatures electrons behave collectivly. In superconductors resistance drops to zero as a consequence. In SmB6 just the opposite happens. Each Sm nucleus has the average 5.5 electrons bound to it at tight orbits. Below 223 degrees of Celsius the conduction electrons of SmB6 are thought to "hybridize" around samarium nuclei so that the system becomes an insulator. Various signatures demonstrate that SmB6 indeed behaves like an insulator.
During last five years it has been learned that SmB6 is not only an insulator but also so called topological insulator. The interior of SmB6 is insulator but the surface acts as a conductor. In their experiments Sebastian et al hoped to find additional evidence for the topological insulator property and attempted to measure quantum oscillations in the electrical resistance of their crystal sample. The variation of quantum oscillations as sample is rotated can be used to map out the Fermi surface of the crystal. No quantum oscillations were seen. The next step was to add magnetic field and just see whether something interesting happens and could save the project. Suddenly the expected signal was there! It was possible to detect quantum oscillations deep in the interior of the sample and map the Fermi surface! The electrons in the interior travelled 1 million times faster than the electrical resistance would suggest. Fermi surface was like that in copper, silver or gold. A further surprise was that the growth of the amplitude of quantum oscillations as temperature was decreased, was very different from the predictions of the universal Lifshitz-Kosevich formula for the conventional metals.
Could TGD help to understand the strange behavior of SmB6?
There are several indications that the paradoxical effect might reveal the underlying dynamics of quantum TGD. The mechanism of conduction must represent new physics and magnetic field must play a key role by making conductivity possible by somehow providing the "current wires". How? The TGD based answer is completely obvious: magnetic flux tubes.
One should also understand topological insulator property at deeper level, that is the conduction along the boundaries of topological insulator. One should understand why the current runs along 2-D surfaces. In fact, many exotic condensed matter systems are 2-dimensional in good approximation. In the models of integer and fractional quantum Hall effect electrons form a 2-D system with braid statistics possible only in 2-D system. High temperature super-conductivity is also an effectively 2-D phenomenon.One should also understand topological insulator property at deeper level, that is the conduction along the boundaries of topological insulator.
1. Many-sheeted space-time is second fundamental prediction TGD. The dynamics of single sheet of many-sheeted space-time should be very simple by the strong form of holography implying effective 2-dimensionality. The standard model description of this dynamics masks this simplicity since the sheets of many-sheeted space-time are replaced with single region of slightly curved Minkowski space with gauge potentials sums of induced gauge potentials for sheets and deviation of metric from Minkowski metric by the sum of corresponding deviations for space-time sheets. Could the dynamics of exotic condensed matter systems give a glimpse about the dynamics of single sheet? Could topological insulator and anyonic systems provide examples of this kind of systems?
2. Second basic prediction of TGD is strong form of holography: string world sheets and partonic 2-surfaces serve as kind of "space-time genes" and the dynamics of fermions is 2-D at fundamental level. It must be however made clear that at QFT limit the spinor fields of imbedding space replace these fundamental spinor fields localized at 2-surface. One might argue that the fundamental spinor fields do not make them directly visible in condensed matter physics. Nothing however prevents from asking whether in some circumstances the fundamental level could make itself visible.
In particular, for large heff dark matter systems (, whose existence can be deduced from the quantum criticality of quantum TGD) the partonic 2-surfaces with CP2 size could be scaled up to nano-scopic and even longer size scales. I have proposed this kind of surfaces as carriers of electrons with non-standard value of heff in QHE and FQHE.
The long range quantum fluctuations associated with large, heff=n× h phase would be quantum fluctuations rather than thermal ones. In the case of ordinary conductivity thermal energy makes it possible for electrons to jump between atoms and conductivity becomes very small at low temperatures. In the case of large scale quantum coherence just the opposite happens as observed. One therefore expects that Lifshitz-Kosevich formula for the temperature dependence of the amplitude does not hold true.
The generalization of Lifschitz-Kosevich formula to quantum critical case deduced from quantum holographic correspondence by Hartnoll and Hofman might hold true qualitatively also for quantum criticality in TGD sense but one must be very cautious.
The first guess is that by underlying super-conformal invariance scaling laws typical for critical systems hold true so that the dependence on temperature is via a power of dimensionless parameter x=T/mu;, where μ is chemical potential for electron system. As a matter fact, exponent of power of x appears and reduces to first power for Lifshitz-Konsevich formula. Since magnetic field is important, one also expects that the ratio of cyclotron energy scale Ec∝ ℏeff eB/me to Fermi energy appears in the formula. One can even make an order of magnitude guess for the value of heff/h≅ 106 from the facts that the scale of conduction and conduction velocity were millions times higher than expected.
Strings are 1-D systems and strong form of holography implies that fermionic strings connecting partonic 2-surfaces and accompanied by magnetic flux tubes are fundamental. At light-like 3-surfaces fermion lines can give rise to braids. In TGD framework AdS/CFT correspondence generalizes since the conformal symmetries are extended. This is possible only in 4-D space-time and for the imbedding space H=M4× CP2 making possible to generalize twistor approach.
3. Topological insulator property means from the perspective of modelling that the action reduces to a non-abelian Chern-Simons term. The quantum dynamics of TGD at space-time level is dictated by Kähler action. Space-time surfaces are preferred extremals of Kähler action and for them Kähler action reduces to Chern-Simons terms associated with the ends of space-time surface opposite boundaries of causal diamond and possibly to the 3-D light-like orbits of partonic 2-surfaces. Now the Chern-Simons term is Abelian but the induced gauge fields are non-Abelian. One might say that single sheeted physics resembles that of topological insulator.
4. The effect appears only in magnetic field. I have been talking a lot about magnetic flux tubes carrying dark matter identified as large heff phases: topological quantization distinguishes TGD from Maxwell's theory: any system can be said to possess "magnetic body, whose flux tubes can serve as current wires. I have predicted the possibility of high temperature super-conductivity based on pairs of parallel magnetic flux tubes with the members of Cooper pairs at the neighboring flux tubes forming spin singlet or triplet depending on whether the fluxes are have same or opposite direction.
Also spin and electric currents assignable to the analogs of spontaneously magnetized states at single flux tube are possible. The obvious guess is that the conductivity in question is along the flux tubes of the external magnetic field. Could this kind of conductivity explains the strange behavior of SmB6. The critical temperature would be that in which the parallel flux tubes are stable. The interaction energy of spin with the magnetic field serves as a possible criterion for the stability if the presence of dark electrons stabilizes the flux tubes.
The following represents an extremely childish attempt of a non-specialist to understand how the conductivity might be understood. The current carrying electrons at flux tubes near the top of Fermi surface are current carriers. heff=n×h and magnetic flux tubes as current wires bring in the new elements. Also in the standard situation one considers cylinder symmetric solutions of Schrödinger equation in external magnetic field and introduces maximal radius for the orbits so that formally the two situations seem to be rather near to each other. Physically the large heff and associated many-sheeted covering of space-time surface providing the current wire makes the situation different since the collisions of electrons could be absent in good approximation so that the velocity of charge carriers could be much higher than expected as experiments indeed demonstrate.
Quantum criticality is the crucial aspect and corresponds to the situation in which the magnetic field attains a value for which a new orbit emerges/disappears at the surface of the flux tube: in this situation dark electron phase with non-standard value of heff can be generated. This mechanism is expected to apply also in bio- superconductivity and to provide a general control tool for magnetic body.
1. Let us assume that flux tubes cover the whole transversal area of the crystal and there is no overlap. Assume also that the total number of conduction electrons is fixed, and depending on the value of heff is shared differently between transversal and longitudinal degrees of freedom. Large value of heff squeezes the electrons from transversal to longitudinal flux tube degrees of freedom and gives rise to conductivity.
2. Consider first Schrödinger equation. In radial direction one has harmonic oscillator and the orbits are Landau orbits. The cross sectional area behaves like πR2= nTheff/2mωc giving nT∝1/heff. Increase of the Planck constant scales up the radii of the orbits so that the number of states in cylinder of given radius is reduced.
Angular momentum degeneracy implies that the number of transversal states is NT= nT2∝ 1/heff2. In longitudinal direction one has free motion in a box of length L with states labelled by integer nL. The number of states is given by the maximum value NL of nL.
3. If the total number of states is fixed to N = NLNT is fixed and thus does not depend on heff, one has NL ∝ heff2. Quanta from transversal degrees of freedom are squeezed to longitudinal degrees of freedom, which makes possible conductivity.
4. The conducting electrons are at the surface of the 1-D "Fermi-sphere", and the number of conduction electrons is Ncond≅ dN/dε × δ ε≅dN/dε T= NT/2εF ∝ 1/heff4. The dependence on heff does not favor too large values of heff. On the other hand, the scattering of electrons at flux tubes could be absent. The assumption L∝heff increases the range over which current can flow.
5. To get a non-vanishing net current one must assume that only the electrons at the second end of the 1-D Fermi sphere are current carriers. The situation would resemble that in semiconductor. The direction of electric field would induce symmetry breaking at the level of quantum states. The situation would be like that for a mass in Earth's gravitational field treated quantally and electrons would accelerate freely. Schrödinger equation would give rise to Airy functions as its solution.
What about quantum oscillations in TGD framework?
1. Quantum oscillation refers to de Haas-van Alphen effect - an oscillation of the induced magnetic moment as a function of 1/B with period τ= 2πe/ℏS, where S is the momentum space area of the extremal orbit of the Fermi surface, in the direction of the applied field. The effect is explained to be due to the Landau quantization of the electron energy. I failed to really understand the explanation of this source and in my humble opinion the following arguments provide a clearer view about what happens.
2. If external magnetic field corresponds to flux tubes Fermi surface decomposes into cylinders parallel to the magnetic field since the motion in transversal degrees of freedom is along circles. In the above thought experiment also a quantization in the longitudinal direction occurs if the flux tube has finite length so that Fermi surface in longitudinal direction has finite length. One expects on basis of Uncertainty Principle that the area of the cross section in momentum space is given by S∝ heff2/πR2, where S is the cross sectional area of the flux tube. This follows also from the equation of motion of electron in magnetic field. As the external magnetic field B is increased, the radii of the orbits decrease inside the flux tube, and in momentum space the radii increase.
3. Why does the induced magnetic moment (magnetization) and other observables oscillate?
1. The simplest manner to understand this is to look at the situation at space-time level. Classical orbits are harmonic oscillator orbits in radial degree of freedom. Suppose that that the area of flux tube is fixed and B is increased. The orbits have radius rn2= (n+1/2) × hbar/eB and shrink. For certain field values the flux eBA =n×hbar corresponds to an integer multiple of the elementary flux quantum - a new orbit at the boundary of the flux tube emerges if the new orbit is near the boundary of Fermi sphere providing the electrons. This is clearly a critical situation.
2. In de Haas- van Alphen effect the orbit n+1 for B has same radius as the orbit n for 1/B+Δ (1/B): rn+1(1/B) =rn(1/B+Δ (1/B)). This gives approximate differential equation with respect to n and one obtains (1/B)(n)= (n+1/2)× Δ (1/B) . Δ (1/B) is fixed from the condition the flux quantization. When largest orbit is at the surface of the flux, tube the orbits are same for B(n) and B(n+1), and this gives rise to the de Haas - van Alphen effect.
3. It is not necessary to assume finite radius for the flux tube, and the exact value of the radius of the flux tube does not play an important role. The value of flux tube radius can be estimated from the ratio of the Fermi energy of electron to the cyclotron energy. Fermi energy about .1 eV depending only on the density of electrons in the lowest approximation and only very weakly on temperature. For a magnetic field of 1 Tesla cyclotron energy is .1 meV. The number of cylinders defined by orbits is about n=104.
4. What happens in TGD Universe in which the areas of flux tubes identifiable as space-time quanta are finite? Could quantum criticality of the transition in which a new orbit emerges at the boundary of flux tube lead to a large heff dark electron phase at flux tubes giving rise to conduction?
1. The above argument makes sense also in TGD Universe for the ordinary value of Planck constant. What about non-standard values of Planck constant? For heff/h =n the value of flux quantum is n-fold so that the period of the oscillation in de Haas - van Alphen effect becomes n times shorter. The values of the magnetic field for which the orbit is at the surface of the flux tube are however critical since new orbit emerges assuming that the cyclotron energy corresponds is near Fermi energy. This quantum criticality could give rise to a phase transition generating non-standard value of Planck constant.
What about the period for Δ (1/B)? For heff/h=n? Modified flux quantization for extremal orbits implies that the area of flux quantum is scaled up by n. The flux changes by n units for the same increment of Δ (1/B) as for ordinary Planck constant so that de Haas -van Alphen effect does not detect the phase transition.
2. If the size scale of the orbits is scaled up by n1/2 as the semiclassical formula suggests the number of classical orbits is reduced by a factor 1/n if the radius of the flux tube is not changed in the transition h→ heff to dark phase. n-sheetedness of the covering however compensates this reduction.
3. What about possible values of heff/h? The total value of flux seems to give the upper bound of heff/h=nmax, where nmax is the value of magnetic flux for ordinary value of Planck constant. For electron and magnetic field for B=10 Tesla and has n≤ 105. This value is of the same order as the rough estimate from the length scale for which anomalous conduction occurs.
Clearly, the mechanism leading to anomalously high conductivity might be the transformation of the flux tubes to dark ones so that they carry dark electrons currents. The observed effect would be dark, quantum critical variant of de Haas-van Alphen effect!
Also bio-superconductivity is quantum critical phenomenon and this observation would suggests sharpening of the existing TGD based model of bio-super-conductivity. Super-conductivity would occur for critical magnetic fields for which largest cyclotron orbit is at the surface of the flux tube so that the system is quantum critical. Quantization of magnetic fluxes would quantify the quantum criticality. The variation of magnetic field strength would serve as control tool generating or eliminating supra currents. This conforms with the general vision about the role of dark magnetic fields in living matter.
To sum up, a breaktrough of TGD is taking place. I have written about thirty articles during this year - more than one article per week. There is huge garden there and trees contain fruits hanging low! It is very easy to pick them: just shatter and let them drop to the basket! New experimental anomalies having a nice explanation using TGD based concepts appear on weekly basis and the mathematical and physical understanding of TGD is taking place with great leaps. It is a pity that I must do all alone. I would like to share. I can only hope that colleagues could take the difficult step: admit what has happened and make a fresh start.
See the chapter Quantum Model for Bio-Superconductivity: II or the article Does the physics of SmB6 make the fundamental dynamics of TGD directly visible?
Aromatic rings as the lowest level in the molecular self hierarchy?
For details see the chapter Quantum model for nerve pulse or the article Impressions created by TSC 2015 conference.
TGD based model for anesthetic action
There are two energies involved.
For details see the chapter Quantum model for bio-superconductivity: II or the article Quantitative model of high Tc super-conductivity and bio-super-conductivity.
To the index page |
1f6a9ab965413988 |
in Buddhadharma
On the Deity
Contents SiteMap of Philosophy SiteMap of Ancient Egyptian Sapience SiteMap
The Deity & Deities in the Vedânta & Classical Yoga
The Deities in the Buddhadharma
The Deities in the Ancient Egyptian Religion
The God of Monotheism
God in the "Western Tradition"
Deity Yoga in Buddhist Tantra and the God of Process.
The question can and should be posed : How do the teachings of the Buddha, the Buddhadharma, relate to the images of the Divine as found in non-Dharmic religions such as Hinduism, Kemetism, Hermetism & Abrahamism ? Is it possible, on the basis of philosophy, in particular Process Philosophy, to erect a bridge between God and the Dharma ? Is a spiritual philosophy, i.e. a view allowing the birth of the Divine in the consciousness of individual, rationally possible ?
In matters of the spirit, touching the various interactions between humans and the Divine, a crucial divide is to be identified. On the one hand, the Divine may be viewed as a process, implying dynamism & change, i.e. as "Dharmic", while on the other hand, this radical otherness ("totaliter aliter"), transcending the frontiers of the mundane, conventional, nominal world, may appear as a "substance of substances", implying statism ("causa sui") & inherent existence from its own side ("svabhâva")
The latter, "non-Dharmic" or essentialist view, prevails in the West, where it was initiated by Ancient Egyptian religion & sapience. In the theologies of the three "religions of the book" (Judaism, Christianity & Islam), based upon the Platonic & Peripatetic traditions, this substantialist view identified the Divine as an independent object-God unaffected by His creation ! Such an essentialism can also be found in Hinduism, particularly in the Vedânta (the notion of "Brahman") and Classical Yoga (with "Îśvara").
Before addressing the issue, let a few definitions bring the necessary clarity and enable us to stabilize the discourse.
supernatural, radical otherness
In general, the word "Divine" refers to "supernatural", meta-nominal phenomena. In non-Dharmic contexts, these are deemed as either part of nature (pantheism), as transcending nature (theism), or as encompassing both nature and its beyond (pan-en-theism). In Dharmic contexts, the Divine refers to unbounded wholeness (as in Buddhism & Taoism), touching pan-sacralism.
The direct experience of the "supernatural" introduces mysticism. This is a non-conceptual, non-dual, ineffable prehension (or special apprehension) of the Divine.
DEITY : Supreme Being deemed Divine
DEITIES : Supreme Beings deemed Divine
From the Latin "Deus", or "God".
In non-Dharmic contexts, this term denotes all things belonging to God, viewed as numerically one, as in monotheism, qualitatively one, as in henotheism, or numerically plural, as in polytheism. In this sense, the word "Deity" has a more personal connotation (cf. "nobiscum Deus" or "God-with-us") than the abstract word "Divine", although it remains neutral as to whether God is singular, one or plural.
In Dharmic spiritualities such as Buddhism & Taoism, "Deity" refers to ideal, non-substantial (pure) manifestations of the wisdom-mind of the Buddhas (the "Dharmakâya") or to the Divine self-determinations of the absolute Tao, the so-called "immortals" ("hsien"). It does not refer to a Supreme Being existing from its own side. This invokes the crucial difference !
GOD : the Supreme Being
GODS & GODDESSES : the Supreme Beings
A precise definition of "God" is impossible. The word points to an absolute category beyond affirmation & denial, i.e. beyond any possible conceptualization (cf. the Via Negativa of ps.-Dionysius the Areopagite). This is the apophatic view.
In all non-Dharmic contexts, a singular "Supreme Being" is denoted, and so God = the Deity. "Infinite, eternal, absolute, etc." are positive limit-concepts attributed to this Supreme Being, also deemed Good, Wise, Beautiful, Omniscient, Omnipotent etc. This Supreme Being called "God" is the Creator of the universe. This is the katapathic view.
Insofar as this Supreme Being is considered to be an eternal, self-identical "substance of substances" (either singular and/or one) it is the God of monotheism or henotheism. Insofar as a plurality of these Supreme Beings is envisaged, polytheism is at hand.
The question remains whether a Supreme Being (and by extension Supreme Beings) can be thought without any reference to substantialism ? Can the notion God (or Deities) exist independently and inherently be relinquished ? This is the core question. Process theology answers this affirmatively.
RELIGION : to unite
From the Latin verb "religare", religion.
In sensu lato, religion is the joining of the part with the larger whole. The latter may be nature viewed as a totality, or a comprehensive perception as in orgasm or religious experience (cf. "yoga" from the Sanskrit root "yug", meaning "yoke").
In sensu stricto, the word points to the "totaliter aliter", i.e. a radical otherness called "Divine". In the context of theism, this involves a Supreme Being or Beings, either transcending nature or representing, for example, a subtle, fiery, logoic "pneuma" at the head of the natural order (as the Stoics assumed). In the Dharma religions, the ultimate truth (reality) is the radical otherness with which unification is sought.
At the core of religion is the religious or mystical experience, the direct, individual prehension of the Divine.
the social organization of religious experience
Organized worship according to a "canon" established by a founder (plus a founding message and/or text) and/or his or her followers (plus a tradition or "magister"). As soon as a spiritual group is formed, a rule of order is called for (cf. the rise of Christianity or monasticism). At some point, this group-form becomes quasi independent and a goal on its own. Religions are therefore defined by two pillars : the original teachings + the traditions (the so-called "magister fidei"). To evidence the authentic core besides the dross, both need critical deconstruction.
direct, immediate experience of the Divine
From the Greek "mustikos", hidden, secret.
The "visio Dei experimentalis" is the authentic core of all religious experience and hence of all religions. It is the "secret" in the heart of faith and the living soul of human spirituality. Without it, religion is a dry and unrewarding experience. With it, a direct experience of the Divine becomes possible ... In the Buddhadharma, this refers to the nondual experience of absolute truth, the ultimate nature of phenomena.
a Divine reality exists !
From the Greek "theos", God.
The existence and continuity of creation is owed to a single, inherently existing Supreme Being or Creator (monotheism), a single unity of Supreme Beings (henotheism) or a plurality of Supreme Beings (polytheism), distinct from creation (but not necessarily transcending it, as the Stoic "pneuma" testifies). The order of the Deity or Deities is both omnipotent & omniscient. In this definition, theism exceeds monotheism to encompass polytheism and henotheism.
many Supreme Beings
There are many Supreme Beings. This manifold causes the created order to come into being, sustains it and participates in its creativity and enfoldment. These Beings, transcending and/or coinciding with the natural order, are not interconnected, do not spring from a common source, are co-eternal from the beginning, form an atomized Divine order, are mutually exclusive, while each has its own specific, irreducible domain or field of activity.
Insofar as these Divine beings are headed by an absent "Most High" Deity (a "Deus absconditus"), a mild form of polytheism is defined. Insofar the role of this "Most High" can be assumed by various Deities, monolatry is defined. Insofar as such a "Supreme of the supreme" is absent, archaic or primitive polytheism is indicated. This construction works well in mythical and pre-rational modes of cognition. It is already difficult to maintain its stability in proto-rational conceptualizations and it is in direct conflict with the principles of reason.
One Supreme Being exists, but reversibly so.
From the Greek "monos" and "latreia", service.
A "Most High" is acknowledged, but not universally or irreversibly. In Ancient Egypt, especially in the Old Kingdom, various Supreme Beings were called "the Great" ("wr" or "aA"), and worshipped as such : Atum-Re and Osiris are strong examples (but any "god of the city" was also "the Great"). Only in the New Kingdom is a New Solar Theology at work, focusing, in the Late Ramesside Era, on the Greatest God before and within all beings (Amun). Then the provisional nature of oneness and greatness looses ground (although, to the affects, it was never lost).
Monolatry is consistent with mythical & pre-rational thought. It may be seen as a stage between polytheism and henotheism.
One in all Divine Beings & all Divine Beings as One
From the Greek "hen" and "theos", The One God.
Divine Beings or Powers cause the created order to come into being. They are expressions, Self-manifestations or epiphanies of one and the same Great God. These Supreme Beings, transcending and/or coinciding with the natural order, are interconnected, spring from a common source (before or simultaneous with creation), are not co-eternal from the beginning, do form a concerted Divine order, are not necessarily mutually exclusive. Although each has its own specific, irreducible domain, cooperation, interchanges and adjustments between these remains possible, although not necessary.
Insofar as some of these Beings transcend the natural order, pan-en-theist henotheism is defined. Insofar as all of these Beings coincide with the natural order (the source of Them is simultaneous with creation), pantheist henotheism is being defined.
This is the approach of the New Kingdom theologies of the Aten, of Amun and of Ptah.
only "1" Supreme Being
There is only a single Supreme Being, the sole, single God. This God is Alone and causes the created order to come into being, sustains it and participates in its creativity and enfoldment. This solitary, singular Being, transcending the natural order, does not share its Divine nature with anything else, has no "second" and so is Absolutely Alone. All other beings to whom some "Divine status" or "perfection" may be attributed (like prophets & saints) are essentially powerless and derive their status from this sole, unique One.
Insofar as the single God may be worshipped in multiple ways, theomonism is defined (as in Judaism). Insofar as this sole God allows for Divine mediation, mild monotheism is at hand (as in Christianity). Insofar as this unique God dictates only one way of worship (namely of that One Alone), we speak of strict monotheism (as in Islam).
Strict monotheism proclaims a dualistic, unilateral relation between God and the world, wherein God is a Being who absolutely controls events from outside (omnipotence, predestination). Emphasis is put on the numerical "firstness" of God, or (God = {1}).
only One Supreme Natural Being
From the Greek "pan" and "theos", the universe = God.
There is only One Supreme Being, the sole, single God, a "logos" who does not transcend the natural order ; the One and the world coincide. Everything part of the natural order is therefore in essence Divine and subsummated by this God, the supreme "God of nature". There is no transcendent essence outside nature, and therefore creation is not caused by anything outside the natural order. Naturalistic auto-creation (auto-generation) is effectuated with nothing except nature or the universe is conceived as uncreated and eternal.
This view was prevalent in Stoicism.
All in The All and The All in all
From the Greek "pan en theos", all in God.
God (singular, as in monotheism or plural, as in henotheism) is truly different from the natural order, but existentially present in every element of creation as a manifold of Self-manifestations of Divine Names, Attributes, Gods & Goddesses, the abstract differentials of nature, of the world in action (creationism). There is nothing outside God, who is both transcendent (theism) and immanent (pantheism). Creation happens in, by and for The All. God encompasses creation in all directions, but transcends it. All in The All and The All in all.
This view is at work in Hermetism and in various mysticisms.
God exists indifferently !
The existence and continuance of creation is owed to One Supreme Single Being. Transcending creation, God does not interfere with the natural order of creation. The natural laws are defined from the beginning and God does not alter them (miracles are impossible). There is no "revealed" religion. God is absent, except in the laws of nature. The experience of God is only possible within these laws.
the Divine does not exist !
There is no Divine Being, nor Divine Beings. There is nothing Divine in ontology (no theo-ontology). There is nothing transcendent, supernatural nor "pneumatic" in the natural order. The latter is the only existing order. There may be a natural hierarchy, but not to accommodate a Supreme Being, nor an indifferent deist God.
the Divine exists, but not as the Deity or as Deities.
the Divine is Nameless.
There is no Supreme Being (God), nor Supreme Beings (Gods & Goddesses), but there is something Divine in ontology. There is a transcendent level (beyond the natural order) or a supernatural layer (within the natural order). There is a natural hierarchy, but it does not accommodate a Supreme Being, nor Supreme Beings.
The Divine exists and cannot be given any name. The Divine exceeds all possible categories. This view is extant in Buddhism & Taoism.
maybe the Divine exists or maybe not ...
There may be a Divine Being or Divine Beings. There may be a transcendent, supernatural or "pneumatic" natural stratum. The latter may be the only existing order. If a natural hierarchy exist, it may imply a Supreme Being, possibly omnipotent & omniscient. But, these propositions may also all be untrue.
Insofar as it can never be decided whether these propositions are true or not, radical agnosticism is defined. If a decision about them is postponed to the future, prospective agnosticism sees the light.
the Divine exists merely as an interdependent phenomenon
lacking inherent existence.
To help the reader grasp the crucial distinctions amidst these complex characterizations, the difference between "Dharmic" and "non-Dharmic" is crucial.
As soon as the Divine is (a) characterized conceptually and (b) given a substantial status, i.e. defined in terms of a self-powered, independent, non-relational, self-identical entity, the non-Dharmic view prevails. Singular, plural, transcendent, immanent, impersonal, personal etc. are merely further demarcations drawn on this substantialism.
The Dharmic view accepts the existence of the Divine, called "ultimate reality" or "absolute truth" and also approaches It conceptually. The difference here is given with the radical and consistent lack of attributing substantial status to It. Designating only process, the Divine is deemed an interdependent phenomenon existing "conventionally". Instead of a pantheism (keeping the notion of a "God" intact), a pan-sacralism ensues. There is no radical ontological distinction between "worldly" phenomena and the "otherworldly" Deity. All phenomena lack inherent existence or "substance", and so merely exist functionally in terms of an overall interdependence between all possible events, entities, happenings, occasions etc. Dharma is strictly nominalist, whereas non-Dharma imputes universals.
In the Dharmic view, the Divine points to the special nature of ultimate reality. While it lacks inherent existence and depends on determinations & conditions like all other phenomena, it is a unique kind of change or process, a perfect movement or flawless symmetry-transformation (a holomovement). Just like a swimmer performs a swimming style while in the process of changing movements, the Divine manifests as the perfect continuity of a form of perfection. Because always changing, it is impermanent, but because these changes are constantly perfectly, flawlessly and harmoniously patterned, the Divine also exhibits permanency, albeit co-emergent with process !
This is the view of Buddhism and Taoism.
The Deity & Deities in the Vedânta & Classical Yoga
revealed henotheist pan-en-theism
"This Brahman is without an earlier and without a later, without an inside and without and outside. This Self (Âtman) is Brahman, the all-perceiving."
Brhadâranyaka Upanisad, 2.5.19.
"That Self (Âtman) is not this, it is not that (neti, neti). It is unseizable, for it cannot be seized. It is indestructible, for it cannot be destroyed. It is unattached, for it does not attach itself. It is unbound. It does not tremble. It is not injured."
Brhadâranyaka Upanisad, 4.4.22.
"Îsvara is a special Purusa (Self) untouched by the causes of sorrow, karman (actions) & its fruition and the deposit in the depth-memory."
Patañjali : Yoga Sûtra, I.24.
"... Brahman does exist as a well-known entity - eternal, pure, intelligent, free by nature, and all-knowing and all-powerful. For from the very derivation of the word Brahman, the ideas of eternality, purity, etc. become obvious, this being in accord with the root brmh. Besides, the existence of Brahman is well known from the fact of Its being the Self of all ; for everyone feels that this Self exists, and he never feels, 'I do not exist'. Had there been no general recognition of the existence of the Self, everyone would have felt, 'I do not exist'. And that Self is Brahman."
Śankara : Brahma-Sûtra-Bhâsya, I.i.1, my italics.
"The realization of Brahman results from the firm conviction arising from the deliberation on the Vedic texts and their meanings, but not from other means of knowledge like inference etc."
Śankara : Brahma-Sûtra-Bhâsya, I.i.2, my italics.
"... like the effulgence of the sun, Brahman has eternal consciousness by Its very nature, so that It has no dependence on the means of knowledge."
Śankara : Brahma-Sûtra-Bhâsya, I.i.5, my italics.
"... even for a single god there is the possibility of assuming many bodies simultaneously. (...) it is understood that in the case of Prajâpati also, when He was intent on creation, the Vedic words flashed in His mind before creation and then He created the things according to these."
Śankara : Brahma-Sûtra-Bhâsya, I.iii.26.
To understand the view in Vedânta & Classical Yoga, let us first clarify a few pivotal concepts :
Brahman : is the eternal, imperishable absolute in its absoluteness, the supreme nondual One Reality of Vedânta. According to the Vedas, Being can only be attributed to Brahman ("Kham Brahm" or "All is Brahman"). Brahman (not to be confused with Brahmâ) is an abstract not accessible to the conceptual mind, and hence totally impersonal and untouched by the material world ("nirguna Brahman"). This is "the One without a second", or "satchidânanda", or "being-consciousness-bliss" ; without Brahman nothing would exist ("sat"), without consciousness ("citta") nothing would be perceived and without bliss ("ânanda") nothing would be realized. These are merely conceptual approximations. Rendering It more concrete, Brahman becomes Îśvara. Brahman is Îśvara in Its relation to the manifest world and as an object of worship. Brahman is not "one", but beyond all numbers and thus unmeasured, without form, without plan, lay out, categories or characteristics ;
Îśvara : or "Lord of the Universe" refers to the more concrete concept of a personal universal Deity, creating, sustaining and destroying the universe (cf. the "trimûrti"). This is Brahman again, but then insofar as creation goes. Here Brahman manifests. Îśvara is the triple unity of Brahmâ (creator), Viśnu (sustainer) & Śiva (destroyer). Îśvara is therefore a personalized Brahman ("saguna Brahman") ;
Brahmâ : is the first Deity of the "trimûrti", the Hindu trinity of Brahmâ, Viśnu and Śiva. Brahmâ is the Absolute (Brahman or God) in His aspect of Creator of the universe (Viśnu is its Sustainer and Śiva is its Destroyer). Brahmâ is often depicted as having four faces, representing the four Vedas as well as the cardinal directions of space. He belongs to the realm of "mâyâ", and is therefore related to Îśvara, being the creative aspect or modality of the latter ;
Mâyâ : or "deception, illusion, appearance", but also "measurement, form, plan, lay out" is the force or "śakti" of Brahman, inseparably united with It and hence co-eternal. But Brahman is not "śakti". Mâyâ and Brahman together are named Îśvara. Mâyâ draws a veil over Brahman and so we experience the diversity of the universe rather than the One Absolute Reality, the absolute in its absoluteness, beyond all affirmation & denial. Mâyâ has two aspects : "avidyâ" (ignorance), leading to identification with the world of materiality and away from Brahman and "vidyâ" (knowledge), leading the realization of Brahman. Both are active in time & space and so relative. Both are transcended by actually realizing Brahman, the Absolute or God ;
Prajâpati : or "Lord of Creatures", a title used in the Vedas to refer to Indra, Savitri, and other Deities. Also used for Brahmâ and the "seers" ("riśis"), the spiritual sons of Brahmâ (the ten Prajâpatis) ;
Purusa : or "man, person", refers to the original, eternal person, soul, pure consciousness, Self or "âtman" (Vedânta). This Self, as a witness, observes the changes taking place in "prakrti", the material world and is itself ontologically identical with Brahman ;
Deva : or "shining one", is a name for mortal divinities inhabiting a realm higher than human beings, but also Brahman in the form of a personal God. This therefore refers to all possible heavens of the invisible world ;
Considering the Absolute in its Absoluteness, i.e. Brahman, the Vedânta is consistent with what in monotheism is called the "essence of God", or God as He Is for Himself Alone. That God is a Supreme Being can be known (by the heart and by the mind), but what this Being of God actually is cannot possibly be known in conceptual terms. His essence is ineffable and remains for ever veiled. God and Brahman are the One Alone. This is the pre-creational, pre-existent Supreme Being, creating the world "ex nihilo". The pivotal difference between Hindu henotheism and the monotheisms is the idea the innermost "soul" or "âtman" is ontologically identical with Brahman, whereas in the West no creature is able to deify to the point of total, absolute identity with God. In the West, God is ontologically different from His creatures. He is a Caesar, a king and supreme spiritual master.
Considering the Absolute in its Self-manifestations, Hindu thought makes way for henotheism, for Brahman manifests as Îśvara and the latter is grasped as a multiple variety of Deities, all epiphanies of Brahman, or aspects of "mâyâ", the force of Brahman. Brahman is a magician and involved in creation, fashioning, sustaining & destroying it. Îśvara is the personal face of Brahman, but this face is never singular, but involved with the world in terms of an endless variety of epiphanies. Although Brahman is "without a second", Its personal dimension ("saguna Brahman" or Îśvara) is, as the theology of Amun has it, "one and millions".
In Classical Yoga, Îśvara is the archetypal yogi, a special Self ("purusa") detached from "prakrti", the world-order. The yogi does not seek identification with Îśvara, but moves beyond creation. The notion of fusing or identifying with "nirguna" Brahman, the impersonal Absolute (cf. "dharma-megha-samâdhi"), is discarded, for the yogi only has to stop the fluctuations of consciousness to abide in his own, true form. Hence, not identification, but restriction is the "via Regia" to enlightenment.
These theologies are clearly non-Dharmic, and so presuppose the existence of the Supreme Being, the impersonal God called "Brahman", existing inherently, independent from anything else, Alone, singular and without connections. At some point, Brahman magically assuming the "persona" of Îśvara, a mere force-field of appearance, creates the world (Brahmâ), sustains it (Viśnu) and eventually destroys it (Śiva). As a personalized God, Îśvara can be worshipped and, generating a multitude of epiphanies, inspires "seers" to compose the sacred texts of the Vedas. These texts, in particular their sonic, magical qualities, suffice to acquire the firm conviction Brahman exists ! Indeed, it is interesting to note the Vedânta does not point to direct experience or inference (reason) as valid ways to realize Brahman, but to
"the firm conviction arising from the deliberation on the Vedic texts and their meanings", turning Brahmanism into a revealed religion. Just as in the Western monotheisms, God, both impersonal & persona, i.e. pan-en-theist, is posited on the basis of a fundamental theology based on scriptures. Not experience or reason are the preferred ways, but faith. As the latter is mediated by a multitude of epiphanies of "saguna Brahman" (the personal Brahman or Îśvara), the theism in question is a form of revealed henotheist pan-en-theism ...
Relatedness is deemed a lesser form of existence, akin to illusion or mere appearance ("mâyâ"). In the Vedânta, realization is the removal of the superimposition of the illusionary forms of "mâyâ" on Brahman. In Classical Yoga, enlightenment or "samâdhi" is the elimination ("nirodha") of the last element of flux ("vriti") from consciousness ("citta"). In both forms, the mystic returns to the original, inherently existing station-of-no-station of the Absolute in its absoluteness. It pre-existed, exists and will continue to exist. It is absolutely removed from anything except Itself, completely independent, eternal, imperishable, permanent and a "substance of substances".
The Deities in the Ancient Egyptian Religion
magical henotheist pan-en-theism
"You created the sky far away in order to ascend to it, to witness everything You created. You are alone, shining in your form of the living Aten. Risen, radiant, distant and near."
Akhenaten : Hymn to the Aten, 72 - 74.
"The One who initiated existence on the first occasion,
Amun, who developed in the beginning, whose origin is unknown.
No god came into being prior to Him.
No other god was with Him who could say what He looked like.
He had no mother who created His name.
He had no father to beget Him or to say : "This belongs to me."
Who formed His own egg.
Power of secret birth, who created His (own) beauty.
Most Divine God, who came into being Alone.
Every god came into being since He began Himself."
Hymns to Amun, 100th Chapter, my italics.
"Lo, every word of the god (Ptah) came into being through the thoughts of the heart & the command by the tongue."
Memphis Theology, 53.
The use of capitals in words as "God", "Deity", "Deities" or "Divine", points to a rational context, i.e. how these appear in a theology conducted in the rational mode of thought. Hence, when these words are used in the context of Ancient Egyptian ante-rational thought (which, as a cultural form, was mythical, pre-rational & proto-rational), this restriction is lifted. Hence, words such as "god", "the god", "gods", "goddesses", "pantheon" or "divine" are not capitalized.
spoken word
written word
Predynastic - Prehistorical dynastic - historical
realm of sacred myth realm of divine rule
primeval mother goddess
Great Sorceress
Great Magician
mind (Sia), speech (Hu)
and effect (Heka)
image-words as
offerings to Maat
In the Predynastic age of Ancient Egypt (ca. 4000 - 3000 BCE), the Great Sorceress ruled. This primeval goddess was the guardian of the sacred, the hidden, namely fertility, birth, creation, death and resurrection. The Great Sorceress was at work during the night. She was Lunar. Her craft belonged to the Earth, to the dreamworld, to the underworld ("duat"), to hypnosis, trance, divination and the feminine. Sorcery is always "Lunar" and accompanies ancestor worship, family ties, local traditions, dark secrets and love stories. The hurt & pain caused by one's personal past will often become the object of these witchcrafts & sorceries. Needless to say destruction, hate, suffering & annihilation also belong to the initiations of the night, the knowledge & practice of the "lower" mysteries.
In those Predynastic, mythical times, the Great Sorceress guaranteed, due to affiliation, the cosmic legitimacy of the kings & rulers of a divided Egypt. This would continue to be the case after Egypt got united under Menes. The sacred magic she represented was related with the processes of nature and required no writing to be effective. It was the sorcery of the lasting, antagonistic continuity of the Moon, the night and the feminine. The latter keeps the sacred hidden, for the essence of the processes behind fertility, gestation, growth, healing, death & resurrection are invisible.
With the rise of Pharaonic Egypt (ca. 3000 BCE), the situation dramatically altered. The sacred male ruler placed his throne & feet on the body of the feminine Earth of the Great Mother Goddess, the Great Sorceress, while his supernatural powers also depended on his affiliation with her. Without her sacred power he was unable to hunt in safety and keep the "good" order of his domain.
The origin of his magic was totally different and developed in two stages :
• as a "Follower of Horus" he was the incarnation of the overseeing plane witnessed by the piercing eyes of the Horus ("Heru") Hawk high up in the sky, the height of heaven ("pet"). "Heru-ur", "Horus the Elder", was the son of Re and Hathor (of Qesqeset), indicative of the first phase of the assimilation of the sacred by Pharaoh : Horus is the outcome of a merge between the (rising) Solar religion and the sacred, Lunar powers of the Great Sorceress ;
• as the "son of Re", Pharaoh finalized the assimilation by causing his own birth, life, death and resurrection as a god among the deities of the sky. Pharaoh creates himself and everything through the command on his tongue and out of his mouth. His magic lets divine presence shine so bright so all darkness is transformed into luminous matter. He himself goes through the cycle of birth, growth, decay, death & resurrection (rejuvenation) as do all deities, but Pharaoh is nevertheless different. He is the only god on Earth and his divinity implies a magic which is all-encompassing, perceiving both night & day, extending from the pre-creational to the first time ("zep tepy") and its eternal recreation in the future. Pharaoh is actual divine presence moving ahead and embracing the future. A light bringing order, peace & justice in a chaotic, unpredictable world with "blows" coming from the everpresent chaos ("isefet").
The presence of Re and his son Pharaoh were the foundation of the vertical obelisk or petrified light-beam resting on the horizontal, continuous and imperfect movement of the sacred feminine and its sorcery (the shamans and their Earth). Pharaoh brings perfection and what he does is discontinuous, unique, always new, forever rejuvenating. This is Solar magic. The feminine (the Earth) is conquered with the spirit of heaven and light. The magic of Pharaoh is just, pure, true, white. This inner necessity is not present in sorcery. This dual soteriology will remain active throughout the entire period of Ancient Egyptian Pharaonic civilization (ca. 3000 - 30 BCE) :
1. VIA THE MOON : the (lower) sky of Osiris : the ultimate state of human blessedness is to live the life of an "Osiris NN", with a court, humbling servants and a kingdom situated in the vast darkness of the Duat (like creation is a bubble of moist air suspended in chaos). Even the smallest offer made with a sincere heart during earthly life might be enough to be helped by Isis or Osiris, and so the commoners made sure the holy family would notice them. This economy is inclusive of everyman, but conditional, except for Pharaoh - the Eye of Horus ;
2. ENDING IN THE SUN : the (upper) sky of Re : the sky of Osiris and the sky of Re are proximate, and after the highest spirituality of servitude has been fulfilled, the "Ba" of the deceased is transformed, in the horizon, into an "Akh" of Re, sailing, among the other pure beings of light, on the Bark of Re, illuminating the beings of day and night, including the deities and the justified blessed dead of Osiris (who otherwise sleep). The sacred knowledge regarding this spiritual evolution was for the very few and, when first written down, portrayed in the tomb of kings only. This economy is exclusive of everyman, reserved to the deities (as the king and his high priests) and unconditional - the Eye of Re.
Moreover, the divine king was the unity of the Two Lands. He prepares himself in isolation (self-creation and absolute self-steering) and this to unify the antagonistic divisions left untouched by the passive feminine (magnetic) force-field. To do this, he creates in himself an active, masculine action ahead (cf. like a free electron), fed by the continuous rejuvenation caused by the daily Solar transformations, navigating with Re and joining him in this self-eternalizing light-feast of beings of light, all in perfect peace, justice, unity & truth, the eternal concert of the immortal ones praising the Great One (cf. the Amduat).
As a Magician, Pharaoh was stronger than Khepri (the self-creative aspect of Re) ! To him belonged everything before any deity had come into being. This can only mean Pharaoh was before Atum. He is the son of Her who bore Atum, namely the Sky-goddess (or "Nut", the feminine consort of Nun). Pharaoh is the son of the Great Sorceress herself ! So how could his magic fail ? Her sacred powers were incorporated in Pharaoh, for he is the son of both Re and the sky-goddess (Hathor). He is Horus and he is the son of Re.
Nevertheless, Pharaoh's magic remained "Solar" and its strongest implication is a perfect protection in all action. The higher "mysteries" teach the aspirant to be silent and to bow (for the deities). Through silence, magical speech is acquired. Then the just Great Word can be spoken and this magical speech conferred. Through service, mastership is continuously perfected and refined. But there is much more. The Pyramid Texts teach the possibility of deification. Pharaoh's magic is ascending, transformative, dynamical. The healing powers of his light & presence make Pharaoh's magic stand firm against destructive sorcery. In principle, Pharaoh rebuilds what he destroys. His magic is boundless and no god, spirit, demon or fiend could resist the power of the sacred words spoken with authority and written down in the divine script.
This brings us to the fact of magic in Ancient Egypt. The magician is a scholar and a priest. He knows how to read and write hieroglyphs, knows the ancient books and their powerful formula. He is a magician because he knows. Hence, his official function is symbolized by a papyrus scroll, determinative of writing, abstraction and esoterical knowledge. He is able to travel in the realm of the dead protected by Horus (cf. Coffin Texts, spell 572). In this magic the mouth is essential, for it is with it the Great Word is spoken.
"My lips are the Two Enneads. I am the Great Word."
Pyramid Texts, utterance 506 (§ 1100).
If his mouth is closed, nothing can be said and no magic ensue. The magician knows the names and knows what exists in his heart (mind) and so is able to utter whatever he likes. However, the way the Great Word is pronounced, its intonation, rhythm and psalmody are also very important. To repeat a formula four times made it powerful in all quarters of creation. Magic is a powerful tool to realize spiritual & material ends.
"My tongue is the pilot in charge of the Bark of Righteousness (Maat). (...) The soles of my feet are the two Barks of Righteousness."
Pyramid Texts, utterance 539 (§§ 1306 & 1315).
The Masters Magicians (the Bulls of the Sky) judge the aspirant on the basis of his esoterical knowledge, more important than practical aptitudes to be developed later. They judge him using what they know of him. The magician speaks and the divine speaks through him. He is before Atum, before the Ennead, before all other deities. His knowledge extends to the pre-creational realm and so the Great Magician is the father of the gods !
Although the distinction between Pharaonic magic and popular sorcery stands, in practice the division was less pronounced. Egyptian Solar Magic was founded on the principle of the assimilation of the power of the sacred feminine, and the greatest magician was he who was able to extend his power beyond creation and the pantheon. He was the child of the Great Sorceress and only through Her was he all-encompassing.
"O you who are content with what you have done -four times- and who send Maat to Re daily, the liver of Re is flourishing daily because of Maat, and he partakes of the meal of the Great Goddess."
Coffin Texts, spell 165, III 6.
So all Solar Magic was rooted in the Lunar approach but transcended this through the medium of light. Because of the pure clarity established by the panorama of Horus & Re (their "height of heaven"), Pharaoh's magic was at work day & night. Furthermore, his magic was mental, verbal and scriptural.
In Heliopolitanism, the Great Word had four aspects :
Heliopolitan schema
Sia : thought thought in the heart
Hu : speech Hu : word on the tongue
Heka : protection inherent in Hu
Maat : truth inherent in Hu
In the New Kingdom Memphite view, the original Heliopolitan fourfold characterizing the Great Word in the Old Kingdom, namely born in thought, expressed with authority, manifesting without any resistance and restoring the balance, is reduced to what is formed in the heart (on the mind) and what is said by the tongue (spoken out).
Shabaka Stone : LINE 53
(hieroglyphs in red are reconstructed)
(53) There comes into being in the heart. There comes into being by the tongue. (It is) as the image of Atum. Ptah is the very great, who gives life to all the gods and their Kas. It all in this heart and by this tongue."
"Heart" is "mind" and "tongue" equals "speech". The simultaneity of the mental (subjective) and material (objective) sides of the cognitive process is indicated by the use of symmetrical writing.
The "heart" of Ptah is not a "nous" devoid of context, i.e. an abstract, rational Divine (Platonic) Mind. It is too early for that. Rather, the contents of mind (the divine words) simultaneously move Ptah's tongue. Formal and material poles come together in Ptah's continuous actions.
The mental process suggested is proto-rational, and aims at establishing a solid case for ongoing creative speech and the ontic supremacy of Ptah as "very great" (while allowing, consistent with henotheism, other deities to exist as such "in" Ptah).
"His Ennead (Ptah's) is before him as heart, authoritative utterance, teeth and lips. They are the semen and hands of Atum."
Memphis Theology, line 55.
So although, in pre-dynastic times, Ancient Egyptian religion probably began as a polytheism, Pharaonism displayed an increasing tendency to find means to transcend divisions and bring about greater organization. Although strict monotheism never saw the light (Atenism is a mild form of monotheism), from the start henotheism was clearly intended. As the supreme magician and high priest of the five major divine families (Re, Ptah, Thoth, Osiris & Amun), the divine king played a pivotal, unifying role. In the Old Kingdom, the deities were indeed organized in constellations, but in the New Kingdom, in its New Solar Theology, they were deemed as epiphanies of the Great One. The theology of Amun is the clearest example of this tendency to introduce this One God, hidden & millions.
Amun is not singular (in a quantitative sense, as in monotheism), but unified and of one spirit (in a qualitative sense). He was deemed both transcendent (before creation and above all deities), but also immanent (present in the "holy of holies" of every temple of Egypt), hearing the pleas of both high priests and the most humble of commoners. This pan-en-theism was not based on revelations, for the "holy" books of Kemet were magical formulae enabling one to satisfy the deities and safeguard one's passage in the afterlife.
This magic was wholly "natural", based on an organic worldview integrating both order & chaos, both light & darkness. The supernatural was the manifestation of the heavenly divine on Earth. And while the spirits ("akhu") of the deities remained in heaven ("pet"), descending on Earth by way of their dynamic souls ("ba") and vital energies ("kas"), the divine king was the only deity incarnate on Earth (i.e. his spirit was the only one living in Egypt, "ta meri", the "land of love").
Indeed, at the head of this complex spiritual hierarchy of divine constellations stood Pharaoh, the "great house", the Great Magician able to pronounce the Great Word causing the "good Nile" keeping Egypt prosperous and therefore unified !
The essence or spirit ("akh") of the deities ("netjeru") were imperishable and abided for ever in heaven ("pet"). They manifested in spatiotemporal forms ("bas" & "kas") resembling the differential states of nature.
Creation cannot exist without the quasi-permanent, eternally recurrent Ennead and so in Heliopolitan thought, the forces of nature (starting with Atum creating Atum) and their harmonious concert (represented by Maat and the balance) represent the first stirring of the substantialist intention to fixate objects from their own side. The deities are projected "outside" and represent the luminous constants of creation. To return to these Polar "Imperishables" is the goal of Pharaoh's transformation, who tries to escape the Lunar vicissitudes of the Osirian realm, the Duat. Although truly African, and rooted in Shamanism and its awareness of the ongoing processes of nature, Egyptian spirituality tries to isolate and exalt the "fixed stars" in the various constellations of nature, while remaining aware of the constant unpredictable change undergone by the latter (cf. the strange attractor ruling the flood of the Nile).
The Deities in the Buddhadharma
non-theist & trans-theist dharmism
"All beings in the world,
Will finally lay the body down,
Since such a one as the Teacher,
The peerless person in the world,
The Tathâgata endowed with the powers,
The Buddha, has attained final Nirvâna."
spoken by Brahmâ Sahampatti in the Brahmasamjutta, 608.
The pervasiveness of impermanence is the core Buddhist argument raised against any conception of an inherently existing Absolute (Brahman), a Creator-God (Brahmâ") or His epiphanies. A firm conviction regarding this is not reached by way of the Vedas, but by reasoning (listening & studying), contemplation (reflection) and meditation.
"As the wise test gold by burning, cutting and rubbing it (on a piece of touchstone), so are You to accept my words after examining them and not merely out of regard for me."
- Jñânasara-samuccaya, 31.
The Buddhadharma is not a faith in the sense of an unexamined acceptance of certain dogmas. The "axioms" of its system are the result of a close investigation of the nature of mind and the nature of phenomena. Nothing is taken for granted, and the three marks of existence are found everywhere : impermanence, suffering & emptiness (or selflessness, absence of own form, Self or "svabhâva"). Insofar as spiritual people (those seeking to generate the Divine in their own consciousness) do not wish to blindly follow a path based on genuine renunciation of the world, but want to bring all their faculties, reason included, into this path, they are bound to discover nothing inherently existing can be found anywhere. This means one cannot point to a single object without also finding the facts of arising, abiding & ceasing.
The Selflessness of persons implies the "I" is merely imputed on the body, on one's volitions, feelings, thoughts & conscious reflections. Without these, the "I" would not manifest. The "I" is not equal to the body or the mind, nor is it different from them. It cannot be found to exist as a solid, permanent, inherently existing, independent entity forcefully identified as possessing an essence or "own form" ("svabhâva"). While it has no substantial existence, it has a function and exists to perform certain conventional tasks. But besides this pragmatical use of the first person, this perspective is void of substance. The "I" is not a universal. The Selflessness of phenomena implies there is not a single object "out there" that is not merely imputed on a sensoric or mental base of designation and a conceptual designation or imputation. Objects do exist functionally, and the sensoric base of designation is valid but mistaken. It is valid because sense objects do have a conventional functionality and exist as extra-mental determinations & conditions, but they are mistaken because they appear as inherently existing while they are merely transient functional states. Hence, Brahman cannot be found, nor can any other absolute principle, Deity or Deities deemed permanent and self-subsisting (self-powered and not other-powered). Brahman exists, but cannot be found as He would like ...
Even "nirvâna" and the wisdom-mind of the Buddhas do not escape this fundamental feature of the Dharma : emptiness. Buddhas arise, abide and cease insofar as they are holomovements, i.e. the continuity of perfect forms manifesting in countless perfect moments of consciousness. Each Buddha-continuum is a garland of such moments, a bead held together by the form of the Divine dynamism itself. Their "permanence" being the consistency of the perfect form-in-movement, not the self-identity presupposed in a substance existing from its own side, independent from anything else. Like a differential equation, a Buddha is merely a "form" of perfect continuity of sublime change interconnected with all other phenomena. So while also Buddhas are dependent-arisings, they are special insofar as their arising, abiding and ceasing is the perfect expression of a perfect form, each moment being a different solution of an identical differential equation or set of equations. Buddhas are therefore comparable to quantum states or strange attractors operating chaotic phase-spaces. They are like verbs, not nouns.
In the Buddhist classification of "samsâra", there are six types of Deities ("devas") of the Desire Realm. They are the Four Great Royal Lineages, The Thirty-Three, the Joyous (where Maitreya lives), Without Combat, Enjoying Emanation & Controlling Others' Emanation. Above these six are the seventeen divisions within the Form Realm. These Devas are free from the type of desire ruling the Desire Realm, but they still have desire for visible form (color, shape), sounds and objects of touch. There are no odors or tastes. The four main areas of the Form Realm correspond to the four concentrations or absorptions causing rebirth there. Above the Form Realm is the Formless Realm, separated from attachment to both Desire & Form. There are also four levels in the Formless Realm. In this realm there is no case of one being seeing another being or conversing with another. Here Divine aloneness is complete.
However, the heavens of the Devas represent a temporal realm of bliss achieved by good deeds. These Divine beings constantly enjoy inexpressible joy and much indulge in this. Being distracted by the result of their good actions, they create for themselves the illusion of the eternity of their paradisiacal state. It is this illusion of grandeur making them believe their are creators, sustainers & destroyers ! In fact, the cosmic continuum of all universes has been there since beginningless time, and not unlike cosmic breathing, this universe is followed by another and so forth (the Big Bang being merely a single out-breath).
Because of their strong & extremely pleasurable condition, there is little to no reason for the Devas to look beyond their comfortable and carefree existence and undertake spiritual training. Intoxicated by pleasure, they ignore harsh realities. Considering themselves imperishable, they develop vanity, haughtiness & pride. However, when their good karma is exhausted, which is inevitable, they too are forced out of this state of heavenly joy to be reborn again, and this per definition in less favorable circumstances. Hence, the Devas die a terrible death.
Their suffering resides in eventually realizing this situation, i.e. understanding their error in thinking their condition as permanent. Seeing their own demise, they are left behind by the others and suffer terribly.
Because theism is the view of an inherently existing Supreme Being (or Beings), the Buddhadharma is non-theist and not atheist, for although such a substantial (inherently existing) Being or Beings do not exist, there are Divine entities with incredible storehouses of power resulting from good past actions. Because the state of Buddhahood is altogether beyond cyclic existence, ineffable & nameless (nonconceptual & nondual), it exceeds the condition of the Devas. Hence, the Buddhadharma is transtheist.
The God of Monotheism
revealed monotheism
The three Mediterranean religions "of the book" (Judaism, Christianity & Islam), all three rooted in Abraham, inspired -in various meandering courses- by Heliopolitanism and the Ancient Egyptian heritage, worked out an onto-theology, i.e. an ontology of an objective, self-subsisting, substantial Supreme Being, conceptualizing it (a) in terms of the (neo)Platonic tradition, i.e. as a "summum bonum" (cf. Philo of Alexandria, Al-Kindi, Augustine) or (b) in tune with the Peripatetic emphasis on empirical reality (cf. Maimonides, Averroes, Thomas Aquinas).
This ultimate God-as-substance created the world "ex nihilo", and is believed to be the ontological "imperial" root of all possible existence. Only in the more mystical traditions of these faiths do we find another, less positive affirmation of this substance-God's necessary supremacy : the negative veils "Ain", "Ain Soph" and "Ain Soph Aur" in Qabalah (Luria), the ineffable hyper-existence of God in negative theology (ps.-Dionysius the Areopagite, Marguerite Porete) and the unknowability of the Divine essence in Sufism (Ibn Arabi). But these refined mystical "apophatic" speculations were muted by the overall "katapathic" noise produced by the theologians, as always preoccupied by apologetic concerns and manipulative, power-based mass-indoctrination.
In their view, God is a Caesar ! This singular omnipotent Dictator is the sole Supreme Being, the substantial absolute of absoluteness creating a plural creation ex nihilo. As the "summum bonum", God does not tolerate evil, considered as the mere absence of goodness ("privatio boni"). In these religions, the focus is not on truth & ontology, but on salvation, the restoration of the link with this sole God. But in the process of erecting the salvic model, a theology was invented build upon Greek concept-realism. This superstructuring of religious experience using "heathen" intellectual constructs would prove to be detrimental to the survival of their fundamental theologies.
Vainly these religious philosophies tried to bring faith and reason together. By identifying the mind of God with Plato's world of ideas, the Platonists had to exchange Divine grace for intuitive reason. The Peripatetics introduced perception as a valid source of knowledge and so prepared the end of Christian theology, the rational explanation of the "facts" of revelation. There seemed to be little or no facts after all !
When Peripatetic metaphysics got integrated in monotheist theology, the end of fundamental theology could not be far off. Indeed, how to assimilate the more empirical approach of Aristotle without harming the God of revelation ? As soon as the natural world became focus of attention, the "facts" of revelation could no longer be believed at their face value. The most clear example of this is geocentrism. All three faiths claimed the Earth to be at the center of the universe. Embracing Copernicanism threw humanity off its self-proclaimed pedestal and paved the way to prove most "facts" of scripture were manmade literary fictions. Not only were the so-called "scientific miracles" found in the holy books explained in a secular way, but literary criticism proved how these texts themselves are merely historical compositions adapted to the circumstances of their time. Moreover, only few authentic (original) texts could be identified ! How to erect a strong conviction or faith on nothing more than stories and remain a sane, rational human being ? A good example is the book Q.
Moreover, Aristotle's concept of the "Unmoved Mover" reaffirmed the general Greek prejudice against relationality, identifying objects entertaining relationships with other objects as of "lower rank" compared to objects removed from empirical actuality, looking down at the world from their unmoved Olympic heights.
Indeed, for Thomas Aquinas
(1225 - 1274), the relation between God and the world is a "relatio rationis", not a real or mutual bond. This scholastic notion can be explained by taking the example of a subject apprehending an object. From the side of the object only a logical, rational relationship persists. The object is not affected by the subject apprehending it. From the side of the subject however, a real relationship is at hand, for the subject is really affected by the perception of the object. According to Thomism, God is not affected by the world, and so God is like an object, not a subject ! The world however is affected by this object-God, clearly not "Emmanuel", God-with-us. Hence, the relationship between God and the world cannot be reciprocal. If so, the world only contributes to the glory of God ("gloria externa Dei"). The finite is nothing more than a necessary "explicatio Dei". This is the only way the world can contribute to God.
In the line of this reasoning, the monotheist God, like a Caesar of sorts, is omnipotent and omniscient. This means God knows what is possible as possible, what is presently real as real and also the future of what is real (predestination). Moreover, God can do what He likes and so is directly responsible for all events (cf. "insh'Allah"). These views make it however impossible not to attribute all possible evil, like the slaying of the innocent, to God ! Such a theology turns the good God into a brutal monster or proves the point He cannot exist (cf. Sartre). Finally, free will cannot be combined with this view of God as the sufficient condition of all things, for freedom only harmonizes with a view of God as merely the necessary condition.
In a philosophical discourse on the Divine influenced by the data of science, no longer a priori -as a handmaiden- forced to take sides with the dogma's of revelation, these inconsistencies in monotheist theology can no longer be maintained. Fundamental theology is shipwrecked, and the distinction between the discourse of faith and -since the Renaissance- the reasons of metaphysics became more pertinent (cf. deism). The Age of Enlightenment would eliminate the more "scientific" pretensions of the revelations (like the story of creation, geocentrism, the position of woman, slavery and other contra-factual & immoral views), and by the beginning of the XXth century, relativity & quantum mechanics introduced a new, post-Newtonian view on spatio-temporality and the physical categories of determination (replacing efficient causality with neo-causality, interaction, statistical probabilism, teleological determination, etc.). The Judeo-Christian socio-political grip on humanity was incapacitated. In Islam, the revolution of "an age of enlightened reason" is still on its way and its first stirrings can today be felt in the so-called "European Islam".
God in the "Western Tradition"
occult henotheist pan-en-theism
Because of the dictatorial views of Roman Catholicism embracing, from the IVth century onward, imperial tenets, "Paganism" was removed from the public domain and became occult, i.e. concealed or hidden from view. So the successors of Alexandrian Hermetism developed a Western form of spirituality at the fringes of mainstream religious activity.
Let it suffice to say this movement embraced Late Hellenistic Paganism, Hermetism, the Jewish Qabalah, outlawed Christian Gnosticism and a wrong allegorical interpretation of Ancient Egyptian lore. The result was a strange mix of astrology, sorcery, magic, alchemy, eschatological mysticism and the like. The ontology underlining these practices embraced henotheism (the One God manifest in various sublime natural states or Deities) and although some occultists developed pantheist views, their onto-theology was overall pan-en-theist ; their One God was present in Nature but simultaneously absolutely removed from His Creation (cf. the negative veils in Qabalah). Eventually, by assimilating Hermetism from the Moon Deity of Harran and other Arab occult tenets, the Crusaders would remix this early occultism with their Christianity, leading to Templar spirituality, Grail mysteries and over time to Rosicrucianism, Enochianism, Freemasonry, Theosophy, Witchcraft, Satanism and the various magical societies of the late XIXth (Golden Dawn), early XXth century (OTO, AMORC). Finally, in the 1970s, these various currents were brought together in the Los Angeles based "New Age" movement.
Although strict monotheism is not present here, theism is never abolished. The Supreme Being is "one, hidden & millions", and when moving through the various occult terminologies, one grasps the tenets of Late Paganism are not superseded, quite on the contrary. The occult view on Ancient Egypt is Late Hellenistic, and the various founding histories (for example of Freemasonry) are fictitious. As the Egyptian language has only been recently deciphered, the view on the religion of Kemet merely serves the purpose of creating an illusion of antiquity. Indeed, the tenets of the Western Tradition probe no deeper than Late Hellenistic lore. Even their views on Greek philosophy are outdated and in conflict with recent textual criticism. As formal rationality has been integrated, the Western Tradition does little more than rationalize ante-rational Paganism, turning the mysteries into an activity mystifying the mysterious, feeding an ever-present anti-Catholic sentiment. Precisely because of rationalizing mixed views conjunct with being forced, for more than 15 centuries, into the shadow realm of Western civilization, this so-called "Western Tradition" is an outstanding example of irrationalism.
Its amalgam of various views turns the Western Tradition into a
miscellaneous collection of articles of sentimental value, a bric-a-brac of Western flirtations with unwholesome mysteries, turning spiritual mastery into knowing the highest secret is an empty box, a misinterpreted spirituality. Alas !
Deity Yoga in Buddhist Tantra and the God of Process
rational pansacralism
To the Buddhadharma, the absence of inherent existence, or strict nominalism, is fundamental. All phenomena are other-powered and none is substantial. Absence of inherent existence is the mode in which wisdom-mind experiences all possible realities, Buddhahood included. Space is the common metaphor used to clarify this experience, for "space" is uninterrupted. This experience of space-like emptiness is simultaneous with illusion-like dependent arising. Ultimate existence exists conventionally, for every phenomenon has two isolates : viewed from the angle of wisdom-mind is lacks substance or essence and viewed from the perspective of the conventional mind it depends on outer determinations and conditions. The illusion-like aspect of every phenomenon means it is valid to logically identify and functionally describe its conventional nature, but mistaken insofar as it appears as self-powered, independent and inherently existing, while ultimate analysis shows it is not. That's why it is called "illusion-like".
Contrary to its ultimate, genuine nature, its conventional, apparent nature appears differently than it truly is, i.e. conceals its ultimate truth : non-substantiality. This illusion-like, non-essentiality of the apparent, conventional nature of things has been approached by several similes :
a dream : in this state the five senses seem active but are not ;
a magic show : things are made to appear using circumstances and hidden connections, but are merely tricks ;
an optical aberration : there appear to be relationships between phenomena, but this is not the case ;
a mirage : complete visual entities like cities or oceans appear when there is nothing ;
an echo : what is heard is merely a repetition of a previous sound, but seems to be generated anew ;
a hallucination : voices and images are seen, but they are not there ;
as a reflection : appearances are there, but not really so ;
as a magical city : appearances are conjured, but exist dependent of specific states of mind, etc.
While phenomena appear full & solid, they are truly empty, i.e. lack inherent, self-powered existence. But while all phenomena are truly empty, they nevertheless apparently appear ! The ultimate truth of this appearances is the fact they lack substance while concealing this, for they occur as self-powered (independent, substantial, inherently existing, etc). Insofar as they appear they are not non-existent, but merely logically & functionally instantiated, i.e. examples of logical designation & functional operation (cf. ultimate logic). This last fact is crucial to apprehend. Although all sensoric & mental objects are conceptually designated, sensoric objects do possess a base of designation and this, so must we assume, is extra-mental. Suppose we negate this, then the whole universe is merely a projection of mind and apparent objects are invalid and mistaken. This position is idealist and can be refuted by epistemological inquiry (cf. Criticosynthesis, 2008). Likewise, the Mind Only school is in error, reducing the Two Truths to the single truth of genuine existence. As Je Tsongkhapa (1357 - 1419) has shown, apparent objects are valid insofar as conventionality in at hand, while at the same time mistaken insofar as their appearance goes (cf. emptiness).
The question before us is this : how to conceive the appearance of a Buddha ? In other words : how to apprehend the emergence of a Buddha out of emptiness ? If we can answer this question adequately, then perhaps we may address the question of how to think the "God of process", i.e. an other-powered Supreme Being apprehending the Harmony and Unity of All Possibilities, untainted by the metaphysical compliment of omnipotence ?
In Sûtra meditations on emptiness, a direct realization of emptiness is deemed possible. Non-conceptual and nondual, it nevertheless involves a direct intuitive cognition of genuine reality. The meditator remains with this vacuity or non-affirming negation, appreciating its implications and allowing the ramifications of the analytical unfindability of inherent existence to affect the mind. This immersion may, in the stage of seeing, lead to a direct experience of emptiness or genuine reality. When this meditation ends, the meditator returns to the world of appearances, and at this point all objects dawn as a magician's illusions ; seeming to exist in their own right but known to be empty of inherent, self-powered existence. From what this apparent reality appears from is not addressed.
Although the view of emptiness is the same as in Sûtra, Tantra proceeds, after having meditated on the lack of inherent existence, to reflect on the sameness of the meditator and the ideal being, Deity or Form Body ("Rûpakâya") of a Buddha, both mixing like "water and milk". Where Sûtra causes the effect (Buddhahood), Tantra brings this result into the path. It deliberately reflects on the final result of the spiritual path as being the stuff of which the meditator will appear. It develops ways to actually bring this final result into the path. Instead of exclusively concentrating on the exclusive, non-affirmative negation (resulting in a direct realization of wisdom-mind or "Dharmakâya", the Truth Body of a Buddha), the tantric yogi uses this direct realization as a basis of imaginative appearance, imitating a Buddha's ability to do this in fact. The Deity is therefore an affirming, choice negation, eliminating everything except the anticipated form-aspect of the final result (the Form Body of a Buddha, constituted by an Enjoyment Body or "Sambhogakâya" and a Emanation Body or "Nirmânakâya"). Instead of a Sûtra practitioner, who merely lets objects re-appear after reflecting on emptiness, the Secret Mantra practitioner allows the mind of wisdom to appear in an ideal compassionately active form. In this way, the meditator learns to bridge emptiness and appearances, and the Deity is a binding phenomenon, allowing the "truth" aspect of genuine reality to connect with the "form" aspect, allowing one to apprehend how appearances emerge from emptiness (how the Truth Body and the appearing Form Body connect). Because all appearances are made part of the Form Body, all phenomena are "pure".
The link made is one between the (Sûtric) realization of emptiness (the direct cognition of genuine reality by merely affirming the absence of self-powered phenomena) and the as yet unrealized potentiality of Buddhahood. As this contains the possibility for genuine reality to appear in an ideal form, a cross-over is made between emptiness (genuine reality) and ideal appearances. Hence, not all appearances are merely apparent, i.e. valid but mistaken. There is a class of valid and unmistaken appearing phenomena ! With the identification of this class, the possibility of ideal appearances is affirmed.
This ideal is represented by the Truth Body and Form Body of a Buddha. Of these Bodies, the Truth Body or "Dharmakâya" is the suchness aspect. Traditionally, the Truth Body of a Buddha is divided in a Nature Body and a Wisdom Body, or the ultimate true cessation and the ultimate true path.
The Nature Body is of two types. On the one hand, there is a naturally pure Nature Body, or the absence of inherent existence since beginningless time in the sphere of Buddhahood. It is called a "non-product" because it lacks production, duration, disintegration, beginning, middle and end. On the other hand, there is the adventitiously pure Nature Body, or the absence of adventitious stains. It is called "spontaneous" because -having utterly eliminated the subtle motivational efforts initiating deeds of body & speech- it allows for the spontaneity of the Enjoyment and Emanation Bodies. The Nature Body is not knowable as being limited to any measure and so vast. It is innumerable, non-conceptual, unequal and completely pure. It has five qualities : non-production, non-difference, non-perversity (free from all extremes), purity and Clear Light.
The Wisdom Body is the final, perfect mind of wisdom, cognizing the genuine mode of existence of all phenomena. It also cognizes the varieties of phenomena insofar as they are conventional, apparent realities. This is a Buddha's omniscient consciousness, with omniscient eye, ear, nose, tongue, body & mental consciousnesses. In a single moment, any of these cognize all phenomena. This Wisdom Body is omnipresent, cognizing the emptiness of everything in a nondual way.
Can the class of ideal phenomena, the class containing individual Buddhas, be extended to integrate all ideals under one single ideal ? Insofar as each ideal phenomenon defines the Form Body of a single Buddha, an extension would imply the Truth & Form Bodies of all possible Buddhas. In fact, Buddhist Tantra has already made this extension. In the Vajrayâna, the experiential content of the "Dharmakâya" is called the primordial Buddha or "Âdi-Buddha", also called "Samantabhadra", "He Who Is All-Pervadingly Good" or "He Whose Beneficence is Everywhere" or "Vajradhara", "the Dharma-Holder". This ultimate Buddha of Buddhas represents the wisdom of suchness taught by all Buddhas, i.e. the universal insight into the unity of sameness & difference, the unity of ultimate (genuine) truth (reality) and conventional (apparent) truth (reality).
The Âdi-Buddha represents the "Dharmakâya" as such.
The differences between this Âdi-Buddha and the abstract concept of a "God of process" are merely terminological & cultural. This concept of God is part of "Process Philosophy", a system developed by Alfred North Whitehead (1861 - 1947). He is often associated with Charles Hartshorne (1987 - 2003), who, during one semester, was his assistant, and who focused on God. Process Philosophy is the culmination of a speculative movement concerning God starting in the Renaissance. The "God of the philosophers" does not satisfy the conditions of faith (strong conviction on the basis of a revealed text), but seeks to rationally understand the Supreme Being and find forms of worship avoiding, as much as possible, mystification and paradox. Process Philosophy optimalizes this search precisely because it seeks to integrate the post-Newtonian sciences of non-Euclidian geometry, relativity, cosmology, and quantum mechanics. In doing so, the concept of God becomes far more transparant than anything realized before by the religions.
Its basic intuitions are :
• we live in a universe, not a pluriverse : it is a philosophy of organicism, thinking the unity of all what happens ;
• part of this unity evidenced by the universe can be grasped by reason, allowing for science. Not a single generalization would be possible if the universe were totally random & chaotic ;
• the universe appears to be a dynamic whole, and so growth and becoming are fundamental to it ;
• the displayed dynamism implies novelty and this means an event is never completely determined by what happened before it, for otherwise nothing would truly "happen". The universe is always an incomplete abiding synthesis and must be "remade" every time. This is "creative synthesis" or "creative advance" ;
• this creative becoming is from the inside aimed at the realization of esthetic value or harmony. This beauty is the result of multiple adaptations of multiple elements to each other. Harmony is the result of this multiplicity brought under unity.
For Whitehead, actual entities are the basic category of his system. Events are a nexus of actual entities. Everything existing is an actual entity. When something is real, it is a happening, and occasion. Hence, there is a plurality of nodes of activity. Actual entities are like Leibniz' monads, with the exception they do have "windows", i.e. they enter each other's selfbecoming or "concrescence".
This idea is merely another way to articulate the fact of dependent arising ("pratîtya-samutpâda") : all events are linked with all other events, or, in other words, there is not a single event isolated, independent or self-powered. Thinking interdependence is thinking emptiness or lack of inherent existence. While apparent, interconnected reality appears, it lacks self-power, own-nature or a "self" ("svabhava"). While it lacks a substantial essence ("eidos"), it appears interrelated.
In Process Philosophy, God is not self-powered and so not omnipotent. God is not an impassible super-object, a Caesar disconnected from and looking down on the world, but, on the contrary, changed and touched by what happens.
Besides spatiotemporal actual entities, reality is also characterized by three formative abstract elements escaping space & time : creativity, eternal objects & God. Creativity is formless and eternal objects are pure possibilities. These two formative elements are not actual, mere potential. God however, is actual but nevertheless escapes the spatio-temporal order.
Basic Categories of Process Ontology
the Real
actual world real actual
God abstract actual
eternal objects pure possibilities
This scheme makes clear God is a non-temporal & non-spatial actual entity giving relevance to the realm of pure possibility in the becoming of the actual world, encompassing non-temporal eternity & temporal everlastingness. God, both potential & actual, is the meeting ground of the actual world & pure possibilities. Together, the realm of abstract possibilities and the actual world form reality or the Real.
Among the formative elements, God is an actual entity, while the eternal objects are not. The latter are therefore not part of the real, actual world, but merely elements contributing to the form of definiteness of the former. God is the anterior ground guaranteeing a fraction of all possibilities may enter into the factual becoming of the spatiotemporal world. Without God, nothing of what is possible, can become some thing, change and create. The universe, its order and creativity are the result of a certain valuation of possibilities. However, God is not the universe, nor its order (derived from eternal objects) or the creativity at work in actual entities. The latter are concrete actual entities, while God is an abstract actual entity, while creativity & eternal objects are non-actual formative elements. This bring in the following subtle nuances :
1. concrete actual entities (the actual world) : all what exists in the world of facts and events. This is the only world there is. There is no realm of other-worldly events (the supernatural is necessarily part of the world - cf. hylic pluralism) ;
2. abstract actual entity (the abstract) : God "the organ of novelty, aiming at intensification" is the Artist who makes a beautiful world more likely. Out of the formless possibilities, God makes the choice of harmony and brings it about by arranging propensities ;
3. potential eternal objects (the potential Realm of Possibilities) : selfsame, "pure" forms outside the stream of actual entities, organizing them ;
4. creativity : the formless "matrix" of all things, the principle of the continuous becoming of novel unity and creative advance out of multiplicity.
God is the instance grounding the permanence and continuous novelty characterizing the universe. This primordial nature of God is completely separated from the actual world. For although an actual entity, God's activity is "abstract", namely in the esthetic (artistic) process of valuating possibilities, which are no fictions. But God is engaged in the factual becoming of the actual entities, but cannot be conceived as a concrete actual entity, a fact among the facts. God is the only "abstract" actual entity possible. Besides being an abstract Godhead, God is also a Divine consciousness prehending all events. This is his consequent nature. In these two ways, God is related to the realm of actualities. Let us look at these two ways in more detail.
"Viewed as primordial, he is the unlimited conceptual realization of the absolute wealth of potentiality. In this aspect, he is not before all creation, but with all creation. But, as primordial, so far is he from 'eminent reality', that in this abstraction he is 'deficiently actual' - and this in two ways. His feelings are only conceptual and so lack the fullness of actuality. Secondly, conceptual feelings, apart from complex integration with physical feelings, are devoid of consciousness in their subjective forms."
Whitehead, A.N. : PR, §§ 521.
God's primordial nature is transcendent and does not touch the universe, the actual world. This aspect of Deity is God as the "Lord of All Possibilities". It offers all events the possibility to constitute themselves. If not, nothing would happen. Possibilities, although highly abstract, are no fictions, and enter concrete entities (cf. Popper's propensity-interpretation of the Schrödinger equation). Although there is no imaginary heavenly (Platonic) museum displaying the statue of David before Michelangelo fashioned it, the latter did not invent the material, the possibility allowing him to do so. So the fact formless creativity received definite form is attributed to God as Principle of Definiteness. By way of conceptual valuation, God imposes harmony on all possibilities, for actuality implies choice & limitation. But as all order is contingent, lots of things always remain possible. Whitehead never speaks of God as the "Creator of the Universe" (too suggestive of the total dependence of the world). The "ideal harmony" is only realized as an abstract virtually, and God is the actual entity bringing this beauty into actuality, turning potential harmony into actual esthetic value.
Taking into account everything given in the field of existence of all actual events, God's highest purpose for each is for it to contribute to the realization of the purpose of the whole, namely the unity of harmony in diversity.
God does not decide, but lures, i.e. makes beauty more likely. There is no efficient causality at work here, but a teleological pull inviting creative advance. Given the circumstances, a tender pressure is present to achieve the highest possible harmony. God is the necessary condition, but not the sufficient condition for events. Classical omnipotence & omniscience are thus eliminated. God knows all actual events as actual and all possible (future) events as possible. He does not know all future events as actual. This is a category mistake. He cannot hamper creativity. Giving metaphysical complements to God is relinquished.
God's purpose for each and every event, given all determining conditions determining, is it contributing to the realization of the purpose of the whole universe, the unity of harmony in diversity. God is the unique abstract actual entity making it possible for the multiplicity of events to end up in harmony. This aspect of God is permanent, eternal and not linked to time & space. It is a permanent property of reality, resulting in a uni-verse. Call this aspect of Deity "Godhead".
"Love neither rules, nor is it unmoved ; also it is a little oblivious as to morals. It does not look to the future, for it finds its own reward in the immediate present."
Whitehead, A.N. : PR, §§ 520 - 521.
God's consequent nature is God's concrete, super-conscious presence in the universe, actually being near all possible events and valorizing them to bring out harmony and the purpose of the whole. God, with infinite care, is a tenderness loosing nothing that can and wants to be saved. Hence, God's experience of the world changes. It always grows and can never be given as a whole. God is loyal and will never forsake any event.
The two natures of God are not two parts or elements, but two ways of dealing with the world. Primordially, God is always offering possibilities and realizing unity and order, and this in all possible worlds. Consequentially, God takes the self-creation of all actual events in this concrete universe into account, considering what they realize of what is made possible. These two ways, initiating & responding, permanent & alternating are God's bi-polar, pan-en-theist approaches of the actual world.
Process Theology is another way to present the three Bodies of the Âdi-Buddha, the primordial Buddha representing the class of all awakened events or phenomena.
The Truth Body of the Âdi-Buddha, the "Dharmakâya" is a formless, undifferentiated, nondual field of creativity, out of which all possibilities may arise. But in itself this Body has no motivational factors to allow the Form Bodies to arise. The latter are "spontaneous" emergences. Likewise, the creative field and God are not causally related. God does not create this field, nor is this field defined by what God wants. Since beginningless time, the Truth Body is given, just as the unlimited field of creativity.
The Form Body, in particular the Enjoyment Body ("Sambhogakâya") is an ideal form emerging out of the Truth Body for the sake of compassionate activity. God makes certain definite forms possible by valuating the endless field of creativity using the key of unity & beauty. In Process Philosophy, compassion is subsumed under beauty, for how can ugliness and disorder be compassionate ? The Form Bodies are the two ways the Âdi-Buddha relates to ordinary, apparent events ("samsâra") : the Enjoyment Body is the ideal "form" with which the endless possibilities are given definiteness (God as primordial), while the Emanation Body is the ideal "event" bringing this form down to the plane of physicality and concrete "luring" Divine consciousness (God as consequent).
Many more very subtle correspondences between Process Theology and Buddhist Tantra can be identified, but from what has been pointed out, it follows the core of the Dharmic approach and the heart of Process Philosophy, namely overall non-substantiality gave rise to very similar views on how process and ideal form can be understood. Ideal form rises out of the genuine reality of continuous process (creativity) to allow apparent reality to emerge with characteristic definiteness. Without these ideal forms, formless creativity would never receive shape and not a single actual (apparent) event would happen.
Insofar as apparent reality is grasped as a gigantic interdependent, organic world, both Buddhism and Process Philosophy agree with the best of science. Insofar as the presence of ideal phenomena is recognized by those wishing to bring the Divine in their consciousness, the best religious view is laid bare. These ideal phenomena represent the ultimate truth of non-substantiality and the way forms emerge to give definiteness to this undifferentiated field by way of the ultimate keys by which the universe functions : unity, beauty and compassion.
© Wim van den Dungen, Antwerp - 2017
philo@sofiatopia.org l Acknowledgments l SiteMap l Bibliography
May all who encounter the Dharma accumulate compassion & wisdom.
initiated : 29 XI 2008 - last update : 28 XI 2011 - version n°1 |
290e98bff7d00b78 | Why does the rotational constant B decrease and transition spacings decrease as the mass of a particle increase?
I understand from a purely equation perspective that since
$$B = \frac{h} {8\pi ^2 cI}$$
that as $I$ increases the denominator increases and so $B$ decreases. But what is the physical reasoning behind this? Why or in what way is the rotational constant dependent on mass?
• $\begingroup$ I is the moment of inertia of the molecule, which is given by $$I = \mu R^2$$ R is the distance between the two atoms and $\mu$ is the reduced mass of a bimolecular system, given by $$\mu = \frac {m_1 m_2} {m_1 + m_2}$$ If the mass of either particle increases, then the reduced mass increases, causing $I$ to increase, which then causes $B$ to decrease. $\endgroup$ Dec 5 '19 at 8:41
• $\begingroup$ thank for simply restating my question and not answering it at all. $\endgroup$ Dec 5 '19 at 14:29
• $\begingroup$ How is the energy related to B? $\endgroup$
– Buck Thorn
Dec 5 '19 at 17:54
• 3
$\begingroup$ See, it is pretty much the same with any quantum system (think of PIB, think of HO). A heavier particle means more classic-like behavior, which means "less discrete" energy spectrum, which means smaller transition spacings. $\endgroup$ Dec 5 '19 at 21:13
First, take a look at classical physics. The angular momentum of a particle rotating in a plane is defined as $$L = I \omega$$ and its kinetic energy is $$E = \frac{1}{2} I \omega^2 = \frac{L^2}{2I}.$$
So if you formulate your energy in terms of the angular momentum of your rotating particle, you arrive at the inverse relation.
In analogy to the classical picture, the eigenvalues of the rotational Schrödinger equation, $$ E = hcBJ(J+1),$$ likewise depend quadratically on the angular momentum quantum number, and thus have a similar inverse dependence on the moment of inertia through $B$.
Your Answer
|
dc85cdd70cd8e0f3 | About states, observables and the wave functional interpretation in QFT with gauge fields | PhysicsOverflow
• Register
Please help promote PhysicsOverflow ads elsewhere if you like it.
New printer friendly PO pages!
Migration to Bielefeld University was successful!
Please vote for this year's PhysicsOverflow ads!
... see more
Tools for paper authors
Submit paper
Claim Paper Authorship
Tools for SE users
Search User
Reclaim SE Account
Request Account Merger
Nativise imported posts
Claim post (deleted users)
Import SE post
Public \(\beta\) tools
Report a bug with a feature
Request a new functionality
404 page design
Send feedback
(propose a free ad)
Site Statistics
195 submissions , 153 unreviewed
4,845 questions , 2,033 unanswered
5,295 answers , 22,497 comments
1,470 users with positive rep
788 active unimported users
More ...
+ 4 like - 0 dislike
First of all, I'm a mathematician, so forgive me for my possible trivial mistakes and poor knowledge of physics.
In a QFT, we just start with a field (scalar, vectorial, sponsorial, gauge etc), so I would like to know what are the observables and the states in this context.
In QFT, the general approach would be by using the Fock space (for the free field case, since I don't really know if this would be true for the interacting one) and getting down, by using the particles associated to the operators $a$ and $a^{\dagger}$, to QM particles (I don't really know if this is true, because the number of particles is not constant and depends on the observer) or by using the wave functional interpretation (a functional on the space of field configurations satisfying Schrödinger equation), though I've heard that this functional is not Lorentz covariant (by the way, any proof?). However, according to this article (http://core.ac.uk/download/pdf/11921990.pdf) the wave functional interpretation is equivalent to the Fock space, so, in any case, this interpretation is not physically reasonable.
In AQFT, in contrast, the operators are already given (so we already have the observables). Furthermore, if the Lorentzian manifold is globally hyperbolic, a Cauchy hyper surface would be a possible interpretation for a state.
In other aspect, are the quantized fields of a given QFT really observables in the sense that they measure?
Now, adding gauge fields, everything will be grupoid valued and observables would be defined on quotients by the gauge group. In this context, I haven't really seen anything written about states and I have no idea on how the Fock space would be. The naive approach would be to consider the wave functional interpretation with domain in a grupoid.
Furthermore, if we restrict ourselves to TQFT, CFT or other specific class of field theories, would all this problem be solved?
Thanks in advance.
This post imported from StackExchange Physics at 2015-05-01 12:37 (UTC), posted by SE-user user40276
asked Apr 17, 2015 in Theoretical Physics by user40276 (140 points) [ revision history ]
edited May 1, 2015 by Dilaton
An historical remark: the pdf reference you cite seems to be quite out of date concerning the references...the interpretations of QFT provided there and the related discussions/problems are known since the end of the fifties of the last century ;-)
2 Answers
+ 4 like - 0 dislike
The algebraic approach gives the better idea of what the states and observables of a quantum theory are, and this holds in infinite dimensional systems as well.
In the modern mathematical terminology, observables of quantum mechanics are the elements of a topological $*$-algebra, and states are objects of its topological dual that are positive and have norm one. The most usual case is to take the $*$-algebra to be a $C^*$ or $W^*$ (von Neumann) algebra; however with such choice unbounded operators are not, strictly speaking, observables (but they can be "affiliated" to the algebra if their spectral projections are in the algebra). The advantage of this abstract approach is that, by the GNS construction, one can immediately associate an Hilbert space to the given $*$-algebra (and a particular state), where the elements of the algebra act as linear operators, and the given state as the average w.r.t. a specific Hilbert space vector.
In usual physical terms, only self-adjoint operators are considered to be observables, for an observable should have real spectrum (and could be associated to a strongly continuous group of unitary operators). The quantum field is, usually, considered to be an observable in a QFT (it is self-adjoint but unbounded, so often it would be affiliated to the $W^*$ algebra generated by its family of exponentials, the Weyl operators); and it is perfectly possible, theoretically, to measure its average value on states (to do it really in experiments, that is all another problem).
Quantum field theories are almost always represented in Fock spaces. However, since the Heisenberg group associated with an infinite dimensional symplectic space is not locally compact, the Stone-von Neumann theorem does not hold and there are infinitely many irreducible inequivalent representations of the Weyl relations, the Fock space being only one of them. To complicate things more, the Haag's theorem states that, roughly speaking, the free and interacting Fock representations are unitarily inequivalent (but that is a problem mostly for scattering theory, not at a fundamental level).
The "wave functional interpretation" (never heard this terminology) is just the functorial nature of the second quantization procedure that can associate to each Hilbert space the corresponding Fock space. This is due to Segal and you may also consult Nelson. The idea is that to each Hilbert space $\mathscr{H}$ one can associate a Gaussian probability space $(\Omega,\mu)$ such that the Fock space $\Gamma(\mathscr{H})$ is unitarily equivalent to $L^2(\Omega,\mu)$, and the map between $\mathscr{H}$ and $\Gamma(\mathscr{H})$($L^2(\Omega,\mu)$) is a functor in the category of Hilbert spaces with self-adjoint and unitary maps as morphisms. The $L^2(\Omega,\mu)$ point of view becomes very natural if one is interested to study QFTs by means of the stochastic integral approach (Feynman-Kac formulas) in euclidean time.
answered Apr 17, 2015 by yuggib (360 points) [ no revision ]
Most voted comments show all comments
Thanks for your answer. I've never heard about interacting Fock space, is there any reference? About the wave functional, I don't really know how can I get an Hamiltonian to construct a Schrödinger equation to this functional. Furthermore, in the case of gauge fields, do you know how observables and states would be defined? Actually, I've never seen Wightman axioms for the case of a gauge fields (any reference?), so I don't really know what's a QFT with gauge fields.
The interacting Fock space cannot be rigorously constructed in most interesting QFTs; however you may take a look to the second book of Bratteli-Robinson to get an idea (applied on a different context) of the Haag's theorem and the inequivalent vacuum/ground-state representations associated to different QFTs. Also the book by Derezinski and Gerard gives some detail (in the end) on quantization of interacting theories. Finally, you may also try to take a direct look at the original works by Haag himself.
Concerning the wave functional, the Hamiltonian in that case would be, roughly speaking, the same as in the Fock representation but with the field replaced by the multiplication by the gaussian functional, and the momentum replaced by the derivative w.r.t. to the aforementioned functional. In general the Hamiltonian has to be a self-adjoint operator on the $L^2(\Omega,\mu)$ space. Anyways I am not completely familiar with this type of description, so take these informations with benefit of the doubt ;-)
Finally, gauge theories are not different, in principle, to other field theories. I am not an expert on this context either, but I suggest you to take again a look at the second volume of the Bratteli-Robinson where gauge fields are studied in the language of AQFT, even if the application they have in mind are mostly in statistical mechanics (anyways this should be not so different from what you look for).
Sorry, but what do you mean by the Fock representation. There is no sympletic space at the beginning of the construction, so ,given a QFT, how can you associate a Fock representation?
Most recent comments show all comments
Sorry, but, again, I can't see what do you mean by the Hamiltonian in the Fock representation.
Let us continue this discussion in chat.
+ 3 like - 0 dislike
From the rigorous point of view, the observable vacuum sector of a relativistic quantum field theory (QFT) on flat Minkowski space is defined by the Wightman axioms. (There are also variations of these in terms of nets of local algebras, but the Wightman axioms are considered most basic; they are also the criterion to be met for a solution of the Clay Millenium problem to construct a QFT for Yang-Mills. There you can also see how the vacuum sector of a gauge theory fits in conceptually. The unsolved conceptual problems that you allude to concern the charged sectors only.)
Given the Wightman axioms, the observables (in the sense of potentially measurable operators) are the smeared fields obtained by integrating the distribution-operator valued fields with an arbitrary Schwartz test function, their products, the linear combinations of these, and their weak limits, as far as they exist.
The state vectors are the products $\psi=A|0\rangle$ where $A$ is an observable and $|0\rangle$ is the vacuum state. (Of course, many different $A$ produce the same $\psi$; e.g., for a free QFT, one can change $A$ by adding any operator of the form $Ba(f)$ where $a(f)$ is a smeared annihilation operator, without changing the state.)
The dynamics is dependent on the choice of a time direction along positive multiples of a timelike vector $v$, and is given by $\psi(t):=A(t)|0\rangle$, where $A(t)$ is obtained from $A$ by replacing all arguments $x$ of field operators in the expression defining $A$ by $x-tv$. The latter operation is an algebra automorphism believed to be always inner, i.e., induced by conjugation with a strongly continuous 1-parameter group generated by a $v$-dependent Hamiltonian $H$ with $H|0\rangle=0$. Assuming this, the Schroedinger equation holds.
To get a more concrete view of the Hilbert space and the dynamics one must either consider exactly solvable QFTs (of which nontrivial examples currently are known only in spacetime dimensions $<4$, and indeed, in 2-dimensional conformal field theory one can give a much more specific picture.), or sacrifice rigor and consider renormalized perturbation theory. In 4 dimensions, the latter builds the Hilbert space as a formal deformation of a Fock space and the fields as formal power series in $\hbar$ or a renormalized coupling constant, although to get physical results one hopes that these formal power series can be evaluated numerically by appropriate trickery. In case of QED this works exceedingly well, but less so in other QFTs.
Alternatively, one discretizes the QFT on a finite lattice, and reduces the problem in this way to one of ordinary quantum mechanics, hoping that for a fine enough and large enough lattice, the results close to the continuum results.
One can also use the functional Schroedinger representation, though this is not mathematically well-defined. Note that contrary to the false claim unlike the functional field equation discussed in the article cited by the OP (which is a philosophical, not a physics paper), the functional Schroedinger equation is in general not equivalent to the Fock representation. In particular, unlike the Fock representation, the functional Schroedinger equation is able to explain many nonperturbative features of interacting QFT. See the discussion of Jackiw's work.
For nonrelativistic QFTs, the situation is somewhat simpler, as particle number is conserved. In the vacuum representation, the Hilbert space is a proper Fock space, and splits into a direct sum of $N$-particle spaces to which standard quantum mechanics applies. However other representations such as those relevant for equilibrium thermodynamics, some of the problems from the relativistic case recur, since the appropriate Hilbert space is no longer a Fock space.
In curved space, no good system of axioms is known, and one generally uses a Fock space perturbation approach with all its limitations.
answered May 1, 2015 by Arnold Neumaier (15,737 points) [ revision history ]
edited May 2, 2015 by Arnold Neumaier
Most voted comments show all comments
The perturbation theory is an approach based on the Fock representation, but it is not Fock representation by itself (which is just one of the infinitely many possible unitarily inequivalent irreducible representations of the CCR). So if you say that the perturbative approach has its limitations I agree with you, but if you say that the limitations are proper of the Fock representation in its generality I don't agree.
@yuggib: Of the infinitely many inequivalent representations only one is a Fock representations, according to standard terminology.
Yes there is only a representation called like that, fixed what CCR (CAR) you are representing: i.e. fixed the complex Hilbert space (one-particle space) on which the Weyl's relations are written. Nevertheless, as I said, it is not tied to perturbation theory, is just a representation of the CCR (CAR).
@yuggib: If you still think the functional Schroedinger representation is not more powerful than Fock space and CCR, please tell me how you find instantons or theta angles in a Fock representation. The point is that many nontrivial field theories (and maybe all) need a Hilbert space that is strictly larger than the limiting Fock space that is obtained when the renormalized coupling constant tends to zero. Perturbation theory cannot see the missing part.
Concerning 1), my intuition (but it may be also wrong) is that if there are other non-fock representations of the free dynamics they should satisfy the Wightman axioms as well, for these axioms seem "representation-independent", at least to me.
Concerning 3), I think that even if you cannot say for sure that the interacting representations are non-Fock, this may indeed be the case for many (or maybe most) interesting theories. Looking a little bit around, I found for example that the representation associated to the renormalized $\phi^4_3$ Hamiltonian given by Glimm is non-Fock (link to where I found the assertion: https://projecteuclid.org/euclid.cmp/1103857837, in page 2); however Glimm's hamiltonian has still the volume cutoff, so this seems unrelated to Haag's theorem. I think that the possibilities are many, and it is difficult to know a priori in which type of representation one may end up after renormalization.
Most recent comments show all comments
@yuggib: I couldn't find an appropriate reference; so I retract my claims about 1) and 3). They reflected my intuition rather than definite knowledge, and after the present discussion I am no longer convinced that my intuition was correct.
I now found a weak reference; Arthur Jaffe, shorthly after minute 04:00 in http://media.scgp.stonybrook.edu/video/video.php?f=20120117_1_qtp.mp4 says that Fock space is not appropriate for interacting fields.
Your answer
Live preview (may slow down editor) Preview
Your name to display (optional):
Anti-spam verification:
user contributions licensed under cc by-sa 3.0 with attribution required
Your rights |
c9d623302a858962 | Quantum Bayesianism
Each point in the Bloch ball is a possible quantum state for a qubit. In QBism, all quantum states are representations of personal probabilities.
In physics and the philosophy of physics, quantum Bayesianism is a collection of related approaches to the interpretation of quantum mechanics, of which the most prominent is QBism (pronounced "cubism"). QBism is an interpretation that takes an agent's actions and experiences as the central concerns of the theory. QBism deals with common questions in the interpretation of quantum theory about the nature of wavefunction superposition, quantum measurement, and entanglement.[1][2] According to QBism, many, but not all, aspects of the quantum formalism are subjective in nature. For example, in this interpretation, a quantum state is not an element of reality—instead it represents the degrees of belief an agent has about the possible outcomes of measurements. For this reason, some philosophers of science have deemed QBism a form of anti-realism.[3][4] The originators of the interpretation disagree with this characterization, proposing instead that the theory more properly aligns with a kind of realism they call "participatory realism", wherein reality consists of more than can be captured by any putative third-person account of it.[5][6]
This interpretation is distinguished by its use of a subjective Bayesian account of probabilities to understand the quantum mechanical Born rule as a normative addition to good decision-making. Rooted in the prior work of Carlton Caves, Christopher Fuchs, and Rüdiger Schack during the early 2000s, QBism itself is primarily associated with Fuchs and Schack and has more recently been adopted by David Mermin.[7] QBism draws from the fields of quantum information and Bayesian probability and aims to eliminate the interpretational conundrums that have beset quantum theory. The QBist interpretation is historically derivative of the views of the various physicists that are often grouped together as "the" Copenhagen interpretation,[8][9] but is itself distinct from them.[9][10] Theodor Hänsch has characterized QBism as sharpening those older views and making them more consistent.[11]
More generally, any work that uses a Bayesian or personalist (a.k.a. "subjective") treatment of the probabilities that appear in quantum theory is also sometimes called quantum Bayesian. QBism, in particular, has been referred to as "the radical Bayesian interpretation".[12]
In addition to presenting an interpretation of the existing mathematical structure of quantum theory, some QBists have advocated a research program of reconstructing quantum theory from basic physical principles whose QBist character is manifest. The ultimate goal of this research is to identify what aspects of the ontology of the physical world make quantum theory a good tool for agents to use.[13] However, the QBist interpretation itself, as described in the Core positions section, does not depend on any particular reconstruction.
History and development
British philosopher, mathematician, and economist Frank Ramsey, whose interpretation of probability theory closely matches the one adopted by QBism.[14]
E. T. Jaynes, a promoter of the use of Bayesian probability in statistical physics, once suggested that quantum theory is "[a] peculiar mixture describing in part realities of Nature, in part incomplete human information about Nature—all scrambled up by Heisenberg and Bohr into an omelette that nobody has seen how to unscramble."[15] QBism developed out of efforts to separate these parts using the tools of quantum information theory and personalist Bayesian probability theory.
There are many interpretations of probability theory. Broadly speaking, these interpretations fall into one of three categories: those which assert that a probability is an objective property of reality (the propensity school), those who assert that probability is an objective property of the measuring process (frequentists), and those which assert that a probability is a cognitive construct which an agent may use to quantify their ignorance or degree of belief in a proposition (Bayesians). QBism begins by asserting that all probabilities, even those appearing in quantum theory, are most properly viewed as members of the latter category. Specifically, QBism adopts a personalist Bayesian interpretation along the lines of Italian mathematician Bruno de Finetti[16] and English philosopher Frank Ramsey.[17][18]
According to QBists, the advantages of adopting this view of probability are twofold. First, for QBists the role of quantum states, such as the wavefunctions of particles, is to efficiently encode probabilities; so quantum states are ultimately degrees of belief themselves. (If one considers any single measurement that is a minimal, informationally complete POVM, this is especially clear: A quantum state is mathematically equivalent to a single probability distribution, the distribution over the possible outcomes of that measurement.[19]) Regarding quantum states as degrees of belief implies that the event of a quantum state changing when a measurement occurs—the "collapse of the wave function"—is simply the agent updating her beliefs in response to a new experience.[13] Second, it suggests that quantum mechanics can be thought of as a local theory, because the Einstein–Podolsky–Rosen (EPR) criterion of reality can be rejected. The EPR criterion states, "If, without in any way disturbing a system, we can predict with certainty (i.e., with probability equal to unity) the value of a physical quantity, then there exists an element of reality corresponding to that quantity."[20] Arguments that quantum mechanics should be considered a nonlocal theory depend upon this principle, but to a QBist, it is invalid, because a personalist Bayesian considers all probabilities, even those equal to unity, to be degrees of belief.[21][22] Therefore, while many interpretations of quantum theory conclude that quantum mechanics is a nonlocal theory, QBists do not.[23]
Fuchs introduced the term "QBism" and outlined the interpretation in more or less its present form in 2010,[24] carrying further and demanding consistency of ideas broached earlier, notably in publications from 2002.[25][26] Several subsequent papers have expanded and elaborated upon these foundations, notably a Reviews of Modern Physics article by Fuchs and Schack;[19] an American Journal of Physics article by Fuchs, Mermin, and Schack;[23] and Enrico Fermi Summer School[27] lecture notes by Fuchs and Stacey.[22]
Prior to the 2010 paper, the term "quantum Bayesianism" was used to describe the developments which have since led to QBism in its present form. However, as noted above, QBism subscribes to a particular kind of Bayesianism which does not suit everyone who might apply Bayesian reasoning to quantum theory (see, for example, the Other uses of Bayesian probability in quantum physics section below). Consequently, Fuchs chose to call the interpretation "QBism," pronounced "cubism," preserving the Bayesian spirit via the CamelCase in the first two letters, but distancing it from Bayesianism more broadly. As this neologism is a homophone of Cubism the art movement, it has motivated conceptual comparisons between the two,[28] and media coverage of QBism has been illustrated with art by Picasso[7] and Gris.[29] However, QBism itself was not influenced or motivated by Cubism and has no lineage to a potential connection between Cubist art and Bohr's views on quantum theory.[30]
Core positions
According to QBism, quantum theory is a tool which an agent may use to help manage his or her expectations, more like probability theory than a conventional physical theory.[13] Quantum theory, QBism claims, is fundamentally a guide for decision making which has been shaped by some aspects of physical reality. Chief among the tenets of QBism are the following:[31]
1. All probabilities, including those equal to zero or one, are valuations that an agent ascribes to his or her degrees of belief in possible outcomes. As they define and update probabilities, quantum states (density operators), channels (completely positive trace-preserving maps), and measurements (positive operator-valued measures) are also the personal judgements of an agent.
2. The Born rule is normative, not descriptive. It is a relation to which an agent should strive to adhere in his or her probability and quantum state assignments.
3. Quantum measurement outcomes are personal experiences for the agent gambling on them. Different agents may confer and agree upon the consequences of a measurement, but the outcome is the experience each of them individually has.
4. A measurement apparatus is conceptually an extension of the agent. It should be considered analogous to a sense organ or prosthetic limb—simultaneously a tool and a part of the individual.
Reception and criticism
Jean Metzinger, 1912, Danseuse au café. One advocate of QBism, physicist David Mermin, describes his rationale for choosing that term over the older and more general "quantum Bayesianism": "I prefer [the] term 'QBist' because [this] view of quantum mechanics differs from others as radically as cubism differs from renaissance painting ..."[28]
Reactions to the QBist interpretation have ranged from enthusiastic[13][28] to strongly negative.[32] Some who have criticized QBism claim that it fails to meet the goal of resolving paradoxes in quantum theory. Bacciagaluppi argues that QBism's treatment of measurement outcomes does not ultimately resolve the issue of nonlocality,[33] and Jaeger finds QBism's supposition that the interpretation of probability is key for the resolution to be unnatural and unconvincing.[12] Norsen[34] has accused QBism of solipsism, and Wallace[35] identifies QBism as an instance of instrumentalism; QBists have argued insistently that these characterizations are misunderstandings, and that QBism is neither solipsist nor instrumentalist.[17][36] A critical article by Nauenberg[32] in the American Journal of Physics prompted a reply by Fuchs, Mermin, and Schack.[37] Some assert that there may be inconsistencies; for example, Stairs argues that when a probability assignment equals one, it cannot be a degree of belief as QBists say.[38] Further, while also raising concerns about the treatment of probability-one assignments, Timpson suggests that QBism may result in a reduction of explanatory power as compared to other interpretations.[1] Fuchs and Schack replied to these concerns in a later article.[39] Mermin advocated QBism in a 2012 Physics Today article,[2] which prompted considerable discussion. Several further critiques of QBism which arose in response to Mermin's article, and Mermin's replies to these comments, may be found in the Physics Today readers' forum.[40][41] Section 2 of the Stanford Encyclopedia of Philosophy entry on QBism also contains a summary of objections to the interpretation, and some replies.[42] Others are opposed to QBism on more general philosophical grounds; for example, Mohrhoff criticizes QBism from the standpoint of Kantian philosophy.[43]
Certain authors find QBism internally self-consistent, but do not subscribe to the interpretation.[44] For example, Marchildon finds QBism well-defined in a way that, to him, many-worlds interpretations are not, but he ultimately prefers a Bohmian interpretation.[45] Similarly, Schlosshauer and Claringbold state that QBism is a consistent interpretation of quantum mechanics, but do not offer a verdict on whether it should be preferred.[46] In addition, some agree with most, but perhaps not all, of the core tenets of QBism; Barnum's position,[47] as well as Appleby's,[48] are examples.
Popularized or semi-popularized media coverage of QBism has appeared in New Scientist,[49] Scientific American,[50] Nature,[51] Science News,[52] the FQXi Community,[53] the Frankfurter Allgemeine Zeitung,[29] Quanta Magazine,[16] Aeon,[54] and Discover.[55] In 2018, two popular-science books about the interpretation of quantum mechanics, Ball's Beyond Weird and Ananthaswamy's Through Two Doors at Once, devoted sections to QBism.[56][57] Furthermore, Harvard University Press published a popularized treatment of the subject, QBism: The Future of Quantum Physics, in 2016.[13]
The philosophy literature has also discussed QBism from the viewpoints of structural realism and of phenomenology.[58][59][60]
Relation to other interpretations
Group photo from the 2005 University of Konstanz conference Being Bayesian in a Quantum World.
Copenhagen interpretations
The views of many physicists (Bohr, Heisenberg, Rosenfeld, von Weizsäcker, Peres, etc.) are often grouped together as the "Copenhagen interpretation" of quantum mechanics. Several authors have deprecated this terminology, claiming that it is historically misleading and obscures differences between physicists that are as important as their similarities.[14][61] QBism shares many characteristics in common with the ideas often labeled as "the Copenhagen interpretation", but the differences are important; to conflate them or to regard QBism as a minor modification of the points of view of Bohr or Heisenberg, for instance, would be a substantial misrepresentation.[10][31]
QBism takes probabilities to be personal judgments of the individual agent who is using quantum mechanics. This contrasts with older Copenhagen-type views, which hold that probabilities are given by quantum states that are in turn fixed by objective facts about preparation procedures.[13][62] QBism considers a measurement to be any action that an agent takes to elicit a response from the world and the outcome of that measurement to be the experience the world's response induces back on that agent. As a consequence, communication between agents is the only means by which different agents can attempt to compare their internal experiences. Most variants of the Copenhagen interpretation, however, hold that the outcomes of experiments are agent-independent pieces of reality for anyone to access.[10] QBism claims that these points on which it differs from previous Copenhagen-type interpretations resolve the obscurities that many critics have found in the latter, by changing the role that quantum theory plays (even though QBism does not yet provide a specific underlying ontology). Specifically, QBism posits that quantum theory is a normative tool which an agent may use to better navigate reality, rather than a set of mechanics governing it.[22][42]
Other epistemic interpretations
Approaches to quantum theory, like QBism,[63] which treat quantum states as expressions of information, knowledge, belief, or expectation are called "epistemic" interpretations.[6] These approaches differ from each other in what they consider quantum states to be information or expectations "about", as well as in the technical features of the mathematics they employ. Furthermore, not all authors who advocate views of this type propose an answer to the question of what the information represented in quantum states concerns. In the words of the paper that introduced the Spekkens Toy Model,
if a quantum state is a state of knowledge, and it is not knowledge of local and noncontextual hidden variables, then what is it knowledge about? We do not at present have a good answer to this question. We shall therefore remain completely agnostic about the nature of the reality to which the knowledge represented by quantum states pertains. This is not to say that the question is not important. Rather, we see the epistemic approach as an unfinished project, and this question as the central obstacle to its completion. Nonetheless, we argue that even in the absence of an answer to this question, a case can be made for the epistemic view. The key is that one can hope to identify phenomena that are characteristic of states of incomplete knowledge regardless of what this knowledge is about.[64]
Leifer and Spekkens propose a way of treating quantum probabilities as Bayesian probabilities, thereby considering quantum states as epistemic, which they state is "closely aligned in its philosophical starting point" with QBism.[65] However, they remain deliberately agnostic about what physical properties or entities quantum states are information (or beliefs) about, as opposed to QBism, which offers an answer to that question.[65] Another approach, advocated by Bub and Pitowsky, argues that quantum states are information about propositions within event spaces that form non-Boolean lattices.[66] On occasion, the proposals of Bub and Pitowsky are also called "quantum Bayesianism".[67]
Zeilinger and Brukner have also proposed an interpretation of quantum mechanics in which "information" is a fundamental concept, and in which quantum states are epistemic quantities.[68] Unlike QBism, the Brukner–Zeilinger interpretation treats some probabilities as objectively fixed. In the Brukner–Zeilinger interpretation, a quantum state represents the information that a hypothetical observer in possession of all possible data would have. Put another way, a quantum state belongs in their interpretation to an optimally-informed agent, whereas in QBism, any agent can formulate a state to encode her own expectations.[69] Despite this difference, in Cabello's classification, the proposals of Zeilinger and Brukner are also designated as "participatory realism," as QBism and the Copenhagen-type interpretations are.[6]
Bayesian, or epistemic, interpretations of quantum probabilities were proposed in the early 1990s by Baez and Youssef.[70][71]
Von Neumann's views
R. F. Streater argued that "[t]he first quantum Bayesian was von Neumann," basing that claim on von Neumann's textbook The Mathematical Foundations of Quantum Mechanics.[72] Blake Stacey disagrees, arguing that the views expressed in that book on the nature of quantum states and the interpretation of probability are not compatible with QBism, or indeed, with any position that might be called quantum Bayesianism.[14]
Relational quantum mechanics
Comparisons have also been made between QBism and the relational quantum mechanics (RQM) espoused by Carlo Rovelli and others.[73][74] In both QBism and RQM, quantum states are not intrinsic properties of physical systems.[75] Both QBism and RQM deny the existence of an absolute, universal wavefunction. Furthermore, both QBism and RQM insist that quantum mechanics is a fundamentally local theory.[23][76] In addition, Rovelli, like several QBist authors, advocates reconstructing quantum theory from physical principles in order to bring clarity to the subject of quantum foundations.[77] (The QBist approaches to doing so are different from Rovelli's, and are described below.) One important distinction between the two interpretations is their philosophy of probability: RQM does not adopt the Ramsey–de Finetti school of personalist Bayesianism.[6][17] Moreover, RQM does not insist that a measurement outcome is necessarily an agent's experience.[17]
Other uses of Bayesian probability in quantum physics
QBism should be distinguished from other applications of Bayesian inference in quantum physics, and from quantum analogues of Bayesian inference.[19][70] For example, some in the field of computer science have introduced a kind of quantum Bayesian network, which they argue could have applications in "medical diagnosis, monitoring of processes, and genetics".[78][79] Bayesian inference has also been applied in quantum theory for updating probability densities over quantum states,[80] and MaxEnt methods have been used in similar ways.[70][81] Bayesian methods for quantum state and process tomography are an active area of research.[82]
Technical developments and reconstructing quantum theory
Conceptual concerns about the interpretation of quantum mechanics and the meaning of probability have motivated technical work. A quantum version of the de Finetti theorem, introduced by Caves, Fuchs, and Schack (independently reproving a result found using different means by Størmer[83]) to provide a Bayesian understanding of the idea of an "unknown quantum state",[84][85] has found application elsewhere, in topics like quantum key distribution[86] and entanglement detection.[87]
Adherents of several interpretations of quantum mechanics, QBism included, have been motivated to reconstruct quantum theory. The goal of these research efforts has been to identify a new set of axioms or postulates from which the mathematical structure of quantum theory can be derived, in the hope that with such a reformulation, the features of nature which made quantum theory the way it is might be more easily identified.[51][88] Although the core tenets of QBism do not demand such a reconstruction, some QBists—Fuchs,[26] in particular—have argued that the task should be pursued.
One topic prominent in the reconstruction effort is the set of mathematical structures known as symmetric, informationally-complete, positive operator-valued measures (SIC-POVMs). QBist foundational research stimulated interest in these structures, which now have applications in quantum theory outside of foundational studies[89] and in pure mathematics.[90]
The most extensively explored QBist reformulation of quantum theory involves the use of SIC-POVMs to rewrite quantum states (either pure or mixed) as a set of probabilities defined over the outcomes of a "Bureau of Standards" measurement.[91][92] That is, if one expresses a density matrix as a probability distribution over the outcomes of a SIC-POVM experiment, one can reproduce all the statistical predictions implied by the density matrix from the SIC-POVM probabilities instead.[93] The Born rule then takes the role of relating one valid probability distribution to another, rather than of deriving probabilities from something apparently more fundamental. Fuchs, Schack, and others have taken to calling this restatement of the Born rule the urgleichung, from the German for "primal equation" (see Ur- prefix), because of the central role it plays in their reconstruction of quantum theory.[19][94][95]
The following discussion presumes some familiarity with the mathematics of quantum information theory, and in particular, the modeling of measurement procedures by POVMs. Consider a quantum system to which is associated a -dimensional Hilbert space. If a set of rank-1 projectors satisfying
exists, then one may form a SIC-POVM . An arbitrary quantum state may be written as a linear combination of the SIC projectors
where is the Born rule probability for obtaining SIC measurement outcome implied by the state assignment . We follow the convention that operators have hats while experiences (that is, measurement outcomes) do not. Now consider an arbitrary quantum measurement, denoted by the POVM . The urgleichung is the expression obtained from forming the Born rule probabilities, , for the outcomes of this quantum measurement,
where is the Born rule probability for obtaining outcome implied by the state assignment . The term may be understood to be a conditional probability in a cascaded measurement scenario: Imagine that an agent plans to perform two measurements, first a SIC measurement and then the measurement. After obtaining an outcome from the SIC measurement, the agent will update her state assignment to a new quantum state before performing the second measurement. If she uses the Lüders rule[96] for state update and obtains outcome from the SIC measurement, then . Thus the probability for obtaining outcome for the second measurement conditioned on obtaining outcome for the SIC measurement is .
Note that the urgleichung is structurally very similar to the law of total probability, which is the expression
They functionally differ only by a dimension-dependent affine transformation of the SIC probability vector. As QBism says that quantum theory is an empirically-motivated normative addition to probability theory, Fuchs and others find the appearance of a structure in quantum theory analogous to one in probability theory to be an indication that a reformulation featuring the urgleichung prominently may help to reveal the properties of nature which made quantum theory so successful.[19][22]
It is important to recognize that the urgleichung does not replace the law of total probability. Rather, the urgleichung and the law of total probability apply in different scenarios because and refer to different situations. is the probability that an agent assigns for obtaining outcome on her second of two planned measurements, that is, for obtaining outcome after first making the SIC measurement and obtaining one of the outcomes. , on the other hand, is the probability an agent assigns for obtaining outcome when she does not plan to first make the SIC measurement. The law of total probability is a consequence of coherence within the operational context of performing the two measurements as described. The urgleichung, in contrast, is a relation between different contexts which finds its justification in the predictive success of quantum physics.
The SIC representation of quantum states also provides a reformulation of quantum dynamics. Consider a quantum state with SIC representation . The time evolution of this state is found by applying a unitary operator to form the new state , which has the SIC representation
The second equality is written in the Heisenberg picture of quantum dynamics, with respect to which the time evolution of a quantum system is captured by the probabilities associated with a rotated SIC measurement of the original quantum state . Then the Schrödinger equation is completely captured in the urgleichung for this measurement:
In these terms, the Schrödinger equation is an instance of the Born rule applied to the passing of time; an agent uses it to relate how she will gamble on informationally complete measurements potentially performed at different times.
Those QBists who find this approach promising are pursuing a complete reconstruction of quantum theory featuring the urgleichung as the key postulate.[94] (The urgleichung has also been discussed in the context of category theory.[97]) Comparisons between this approach and others not associated with QBism (or indeed with any particular interpretation) can be found in a book chapter by Fuchs and Stacey[98] and an article by Appleby et al.[94] As of 2017, alternative QBist reconstruction efforts are in the beginning stages.[99]
See also
1. ^ a b Timpson, Christopher Gordon (2008). "Quantum Bayesianism: A study" (postscript). Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics. 39 (3): 579–609. arXiv:0804.2047. Bibcode:2008SHPMP..39..579T. doi:10.1016/j.shpsb.2008.03.006. S2CID 16775153.
2. ^ a b Mermin, N. David (2012-07-01). "Commentary: Quantum mechanics: Fixing the shifty split". Physics Today. 65 (7): 8–10. Bibcode:2012PhT....65g...8M. doi:10.1063/PT.3.1618. ISSN 0031-9228.
3. ^ Bub, Jeffrey (2016). Bananaworld: Quantum Mechanics for Primates. Oxford: Oxford University Press. p. 232. ISBN 978-0198718536.
4. ^ Ladyman, James; Ross, Don; Spurrett, David; Collier, John (2007). Every Thing Must Go: Metaphysics Naturalized. Oxford: Oxford University Press. pp. 184. ISBN 9780199573097.
5. ^ For "participatory realism," see, e.g.,
Fuchs, Christopher A. (2017). "On Participatory Realism". In Durham, Ian T.; Rickles, Dean (eds.). Information and Interaction: Eddington, Wheeler, and the Limits of Knowledge. arXiv:1601.04360. Bibcode:2016arXiv160104360F. ISBN 9783319437606. OCLC 967844832.
Fuchs, Christopher A.; Timpson, Christopher G. "Does Participatory Realism Make Sense? The Role of Observership in Quantum Theory". FQXi: Foundational Questions Institute. Retrieved 2017-04-18.
7. ^ a b Mermin, N. David (2014-03-27). "Physics: QBism puts the scientist back into science". Nature. 507 (7493): 421–423. doi:10.1038/507421a. PMID 24678539.
8. ^ Tammaro, Elliott (2014-08-09). "Why Current Interpretations of Quantum Mechanics are Deficient". arXiv:1408.2093 [quant-ph].
9. ^ a b Schlosshauer, Maximilian; Kofler, Johannes; Zeilinger, Anton (2013-08-01). "A snapshot of foundational attitudes toward quantum mechanics". Studies in History and Philosophy of Science Part B. 44 (3): 222–230. arXiv:1301.1069. Bibcode:2013SHPMP..44..222S. doi:10.1016/j.shpsb.2013.04.004. S2CID 55537196.
10. ^ a b c Mermin, N. David (2017-01-01). "Why QBism Is Not the Copenhagen Interpretation and What John Bell Might Have Thought of It". In Bertlmann, Reinhold; Zeilinger, Anton (eds.). Quantum [Un]Speakables II. The Frontiers Collection. Springer International Publishing. pp. 83–93. arXiv:1409.2454. doi:10.1007/978-3-319-38987-5_4. ISBN 9783319389851. S2CID 118458259.
11. ^ Hänsch, Theodor. "Changing Concepts of Light and Matter". The Pontifical Academy of Sciences. Retrieved 2017-04-18.
12. ^ a b Jaeger, Gregg (2009). "3.7. The radical Bayesian interpretation". Entanglement, information, and the interpretation of quantum mechanics (Online-Ausg. ed.). Berlin: Springer. pp. 170–179. ISBN 978-3-540-92127-1.
13. ^ a b c d e f von Baeyer, Hans Christian (2016). QBism: The Future of Quantum Physics. Cambridge, MA: Harvard University Press. ISBN 978-0674504646.
14. ^ a b c Stacey, Blake C. (2016-05-28). "Von Neumann Was Not a Quantum Bayesian". Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences. 374 (2068): 20150235. arXiv:1412.2409. Bibcode:2016RSPTA.37450235S. doi:10.1098/rsta.2015.0235. ISSN 1364-503X. PMID 27091166. S2CID 16829387.
15. ^ Jaynes, E. T. (1990). "Probability in Quantum Theory". In Zurek, W. H. (ed.). Complexity, Entropy, and the Physics of Information. Redwood City, CA: Addison-Wesley. p. 381.
16. ^ a b Gefter, Amanda. "A Private View of Quantum Reality". Quanta (in American English). Retrieved 2017-04-24.
17. ^ a b c d Fuchs, Christopher A.; Schlosshauer, Maximilian; Stacey, Blake C. (2014-05-10). "My Struggles with the Block Universe". arXiv:1405.2390 [quant-ph].
18. ^ Keynes, John Maynard (2012-01-01). "F. P. Ramsey". Essays in biography. Martino Fine Books. ISBN 978-1614273264. OCLC 922625832.
19. ^ a b c d e Fuchs, Christopher A.; Schack, Rüdiger (2013-01-01). "Quantum-Bayesian coherence". Reviews of Modern Physics. 85 (4): 1693–1715. arXiv:1301.3274. Bibcode:2013RvMP...85.1693F. doi:10.1103/RevModPhys.85.1693. S2CID 18256163.
20. ^ Fine, Arthur (2016-01-01). "The Einstein–Podolsky–Rosen Argument in Quantum Theory". In Zalta, Edward N. (ed.). The Stanford Encyclopedia of Philosophy (Fall 2016 ed.). Metaphysics Research Lab, Stanford University.
21. ^ The issue of the interpretation of probabilities equal to unity in quantum theory occurs even for probability distributions over a finite number of alternatives, and thus it is distinct from the issue of events that happen almost surely in measure-theoretic treatments of probability.
22. ^ a b c d Fuchs, Christopher A.; Stacey, Blake C. (2016-12-21). "QBism: Quantum Theory as a Hero's Handbook". arXiv:1612.07308 [quant-ph].
23. ^ a b c Fuchs, Christopher A.; Mermin, N. David; Schack, Ruediger (2014-07-22). "An introduction to QBism with an application to the locality of quantum mechanics". American Journal of Physics. 82 (8): 749–754. arXiv:1311.5253. Bibcode:2014AmJPh..82..749F. doi:10.1119/1.4874855. ISSN 0002-9505. S2CID 56387090.
24. ^ Fuchs, Christopher A. (2010-03-26). "QBism, the Perimeter of Quantum Bayesianism". arXiv:1003.5209 [quant-ph].
25. ^ Caves, Carlton M.; Fuchs, Christopher A.; Schack, Ruediger (2002-01-01). "Quantum probabilities as Bayesian probabilities". Physical Review A. 65 (2): 022305. arXiv:quant-ph/0106133. Bibcode:2002PhRvA..65b2305C. doi:10.1103/PhysRevA.65.022305. S2CID 119515728.
26. ^ a b C. A. Fuchs, "Quantum Mechanics as Quantum Information (and only a little more),'' in Quantum Theory: Reconsideration of Foundations, edited by A. Khrennikov (Växjö University Press, Växjö, Sweden, 2002), pp. 463–543. arXiv:quant-ph/0205039.
27. ^ "International School of Physics "Enrico Fermi"". Italian Physical Society. Retrieved 2017-04-18.
28. ^ a b c Mermin, N. David (2013-01-28). "Annotated Interview with a QBist in the Making". arXiv:1301.6551 [quant-ph].
29. ^ a b von Rauchhaupt, Ulf (9 February 2014). "Philosophische Quantenphysik : Ganz im Auge des Betrachters". Frankfurter Allgemeine Sonntagszeitung (in German). Vol. 6. p. 62. Retrieved 2017-04-18.
30. ^ "Q3: Quantum Metaphysics Panel". Vimeo. 13 February 2016. Retrieved 2017-04-18.
31. ^ a b Fuchs, Christopher A. (2017). "Notwithstanding Bohr, the Reasons for QBism". Mind and Matter. 15: 245–300. arXiv:1705.03483. Bibcode:2017arXiv170503483F.
32. ^ a b Nauenberg, Michael (2015-03-01). "Comment on QBism and locality in quantum mechanics". American Journal of Physics. 83 (3): 197–198. arXiv:1502.00123. Bibcode:2015AmJPh..83..197N. doi:10.1119/1.4907264. ISSN 0002-9505. S2CID 117823345.
33. ^ Bacciagaluppi, Guido (2014-01-01). "A Critic Looks at QBism". In Galavotti, Maria Carla; Dieks, Dennis; Gonzalez, Wenceslao J.; Hartmann, Stephan; Uebel, Thomas; Weber, Marcel (eds.). New Directions in the Philosophy of Science. The Philosophy of Science in a European Perspective. Springer International Publishing. pp. 403–416. doi:10.1007/978-3-319-04382-1_27. ISBN 9783319043814.
34. ^ Norsen, Travis (2014). "Quantum Solipsism and Non-Locality" (PDF). Int. J. Quant. Found. John Bell Workshop.
35. ^ Wallace, David (2007-12-03). "The Quantum Measurement Problem: State of Play". arXiv:0712.0149 [quant-ph].
36. ^ DeBrota, John B.; Fuchs, Christopher A. (2017-05-17). "Negativity Bounds for Weyl-Heisenberg Quasiprobability Representations". Foundations of Physics. 47 (8): 1009–1030. arXiv:1703.08272. Bibcode:2017FoPh...47.1009D. doi:10.1007/s10701-017-0098-z. S2CID 119428587.
37. ^ Fuchs, Christopher A.; Mermin, N. David; Schack, Ruediger (2015-02-10). "Reading QBism: A Reply to Nauenberg". American Journal of Physics. 83 (3): 198. arXiv:1502.02841. Bibcode:2015AmJPh..83..198F. doi:10.1119/1.4907361.
38. ^ Stairs, Allen (2011). "A loose and separate certainty: Caves, Fuchs and Schack on quantum probability one" (PDF). Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics. 42 (3): 158–166. Bibcode:2011SHPMP..42..158S. doi:10.1016/j.shpsb.2011.02.001.
39. ^ Fuchs, Christopher A.; Schack, Rüdiger (2015-01-01). "QBism and the Greeks: why a quantum state does not represent an element of physical reality". Physica Scripta. 90 (1): 015104. arXiv:1412.4211. Bibcode:2015PhyS...90a5104F. doi:10.1088/0031-8949/90/1/015104. ISSN 1402-4896. S2CID 14553716.
40. ^ Mermin, N. David (2012-11-30). "Measured responses to quantum Bayesianism". Physics Today. 65 (12): 12–15. Bibcode:2012PhT....65l..12M. doi:10.1063/PT.3.1803. ISSN 0031-9228.
41. ^ Mermin, N. David (2013-06-28). "Impressionism, Realism, and the aging of Ashcroft and Mermin". Physics Today. 66 (7): 8. Bibcode:2013PhT....66R...8M. doi:10.1063/PT.3.2024. ISSN 0031-9228.
42. ^ a b Healey, Richard (2016). "Quantum-Bayesian and Pragmatist Views of Quantum Theory". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University.
43. ^ Mohrhoff, Ulrich (2014-09-10). "QBism: A Critical Appraisal". arXiv:1409.3312 [quant-ph].
44. ^ Marchildon, Louis (2015-07-01). "Why I am not a QBist". Foundations of Physics. 45 (7): 754–761. arXiv:1403.1146. Bibcode:2015FoPh...45..754M. doi:10.1007/s10701-015-9875-8. ISSN 0015-9018. S2CID 119196825.
Leifer, Matthew. "Interview with an anti-Quantum zealot". Elliptic Composability. Retrieved 10 March 2017.
45. ^ Marchildon, Louis (2015). "Multiplicity in Everett's interpretation of quantum mechanics". Studies in History and Philosophy of Modern Physics. 52 (B): 274–284. arXiv:1504.04835. Bibcode:2015SHPMP..52..274M. doi:10.1016/j.shpsb.2015.08.010. S2CID 118398374.
46. ^ Schlosshauer, Maximilian; Claringbold, Tangereen V. B. (2015). "Entanglement, scaling, and the meaning of the wave function in protective measurement". Protective Measurement and Quantum Reality: Towards a New Understanding of Quantum Mechanics. Cambridge University Press. pp. 180–194. arXiv:1402.1217. doi:10.1017/cbo9781107706927.014. ISBN 9781107706927. S2CID 118003617.
47. ^ Barnum, Howard N. (2010-03-23). "Quantum Knowledge, Quantum Belief, Quantum Reality: Notes of a QBist Fellow Traveler". arXiv:1003.4555 [quant-ph].
48. ^ Appleby, D. M. (2007-01-01). "Concerning Dice and Divinity". AIP Conference Proceedings. 889: 30–39. arXiv:quant-ph/0611261. Bibcode:2007AIPC..889...30A. doi:10.1063/1.2713444.
49. ^ See Chalmers, Matthew (2014-05-07). "QBism: Is quantum uncertainty all in the mind?". New Scientist (in American English). Retrieved 2017-04-09. Mermin criticized some aspects of this coverage; see Mermin, N. David (2014-06-05). "QBism in the New Scientist". arXiv:1406.1573 [quant-ph].
See also Webb, Richard (2016-11-30). "Physics may be a small but crucial fraction of our reality". New Scientist (in American English). Retrieved 2017-04-22.
See also Ball, Philip (2017-11-08). "Consciously quantum". New Scientist. Retrieved 2017-12-06.
50. ^ von Baeyer, Hans Christian (2013). "Quantum Weirdness? It's All in Your Mind". Scientific American. 308 (6): 46–51. Bibcode:2013SciAm.308f..46V. doi:10.1038/scientificamerican0613-46. PMID 23729070.
51. ^ a b Ball, Philip (2013-09-12). "Physics: Quantum quest". Nature. 501 (7466): 154–156. Bibcode:2013Natur.501..154B. doi:10.1038/501154a. PMID 24025823.
52. ^ Siegfried, Tom (2014-01-30). "'QBists' tackle quantum problems by adding a subjective aspect to science". Science News. Retrieved 2017-04-20.
53. ^ Waldrop, M. Mitchell. "Painting a QBist Picture of Reality". fqxi.org (in American English). Retrieved 2017-04-20.
54. ^ Frank, Adam (2017-03-13). Powell, Corey S. (ed.). "Materialism alone cannot explain the riddle of consciousness". Aeon. Retrieved 2017-04-22.
55. ^ Folger, Tim (May 2017). "The War Over Reality". Discover Magazine. Retrieved 2017-05-10.
56. ^ Ball, Philip (2018). Beyond Weird: Why Everything You Thought You Knew About Quantum Physics is Different. London: Penguin Random House. ISBN 9781847924575. OCLC 1031304139.
57. ^ Ananthaswamy, Anil (2018). Through Two Doors at Once: The Elegant Experiment That Captures the Enigma of Our Quantum Reality. New York: Penguin Random House. ISBN 9781101986097. OCLC 1089112651.
58. ^ Rickles, Dean (2019). "Johntology: Participatory Realism and its Problems". Mind and Matter. 17 (2): 205–211.
59. ^ Bitbol, Michel (2020). "A Phenomenological Ontology for Physics: Merleau-Ponty and QBism". In Wiltsche, Harald; Berghofer, Philipp (eds.). Phenomenological Approaches to Physics. Synthese Library (Studies in Epistemology, Logic, Methodology, and Philosophy of Science). Vol. 429. Springer. pp. 227–242. doi:10.1007/978-3-030-46973-3_11. ISBN 978-3-030-46972-6. OCLC 1193285104.
60. ^ de La Tremblaye, Laura (2020). "QBism from a Phenomenological Point of View: Husserl and QBism". In Wiltsche, Harald; Berghofer, Philipp (eds.). Phenomenological Approaches to Physics. Synthese Library (Studies in Epistemology, Logic, Methodology, and Philosophy of Science). Vol. 429. Springer. pp. 243–260. doi:10.1007/978-3-030-46973-3_12. ISBN 978-3-030-46972-6. OCLC 1193285104.
61. ^ Peres, Asher (2002-03-01). "Karl Popper and the Copenhagen interpretation". Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics. 33 (1): 23–34. arXiv:quant-ph/9910078. Bibcode:2002SHPMP..33...23P. doi:10.1016/S1355-2198(01)00034-X.
Żukowski, Marek (2017-01-01). "Bell's Theorem Tells Us Not What Quantum Mechanics Is, but What Quantum Mechanics Is Not". In Bertlmann, Reinhold; Zeilinger, Anton (eds.). Quantum [Un]Speakables II. The Frontiers Collection. Springer International Publishing. pp. 175–185. arXiv:1501.05640. doi:10.1007/978-3-319-38987-5_10. ISBN 9783319389851. S2CID 119214547.
Camilleri, Kristian (2009-02-01). "Constructing the Myth of the Copenhagen Interpretation". Perspectives on Science. 17 (1): 26–57. doi:10.1162/posc.2009.17.1.26. ISSN 1530-9274. S2CID 57559199.
62. ^ Peres, Asher (1984-07-01). "What is a state vector?". American Journal of Physics. 52 (7): 644–650. Bibcode:1984AmJPh..52..644P. doi:10.1119/1.13586. ISSN 0002-9505.
Caves, Carlton M.; Fuchs, Christopher A.; Schack, Rüdiger (2007-06-01). "Subjective probability and quantum certainty". Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics. Probabilities in quantum mechanics. 38 (2): 255–274. arXiv:quant-ph/0608190. Bibcode:2007SHPMP..38..255C. doi:10.1016/j.shpsb.2006.10.007. S2CID 119549678.
63. ^ Harrigan, Nicholas; Spekkens, Robert W. (2010-02-01). "Einstein, Incompleteness, and the Epistemic View of Quantum States". Foundations of Physics. 40 (2): 125–157. arXiv:0706.2661. Bibcode:2010FoPh...40..125H. doi:10.1007/s10701-009-9347-0. ISSN 0015-9018. S2CID 32755624.
64. ^ Spekkens, Robert W. (2007-01-01). "Evidence for the epistemic view of quantum states: A toy theory". Physical Review A. 75 (3): 032110. arXiv:quant-ph/0401052. Bibcode:2007PhRvA..75c2110S. doi:10.1103/PhysRevA.75.032110. S2CID 117284016.
65. ^ a b Leifer, Matthew S.; Spekkens, Robert W. (2013). "Towards a Formulation of Quantum Theory as a Causally Neutral Theory of Bayesian Inference". Phys. Rev. A. 88 (5): 052130. arXiv:1107.5849. Bibcode:2013PhRvA..88e2130L. doi:10.1103/PhysRevA.88.052130. S2CID 43563970.
66. ^ Bub, Jeffrey; Pitowsky, Itamar (2010-01-01). "Two dogmas about quantum mechanics". In Saunders, Simon; Barrett, Jonathan; Kent, Adrian; Wallace, David (eds.). Many Worlds?: Everett, Quantum Theory & Reality. Oxford University Press. pp. 433–459. arXiv:0712.4258. Bibcode:2007arXiv0712.4258B.
67. ^ Duwell, Armond (2011). "Uncomfortable bedfellows: Objective quantum Bayesianism and the von Neumann–Lüders projection postulate". Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics. 42 (3): 167–175. Bibcode:2011SHPMP..42..167D. doi:10.1016/j.shpsb.2011.04.003.
68. ^ Brukner, Časlav; Zeilinger, Anton (2001). "Conceptual inadequacy of the Shannon information in quantum measurements". Physical Review A. 63 (2): 022113. arXiv:quant-ph/0006087. Bibcode:2001PhRvA..63b2113B. doi:10.1103/PhysRevA.63.022113. S2CID 119381924.
Brukner, Časlav; Zeilinger, Anton (2009). "Information Invariance and Quantum Probabilities". Foundations of Physics. 39 (7): 677–689. arXiv:0905.0653. Bibcode:2009FoPh...39..677B. doi:10.1007/s10701-009-9316-7. S2CID 73599204.
69. ^ Khrennikov, Andrei (2016). "Reflections on Zeilinger–Brukner information interpretation of quantum mechanics". Foundations of Physics. 46 (7): 836–844. arXiv:1512.07976. Bibcode:2016FoPh...46..836K. doi:10.1007/s10701-016-0005-z. S2CID 119267791.
70. ^ a b c Baez, John (2003-09-12). "Bayesian Probability Theory and Quantum Mechanics". Retrieved 2017-04-18.
71. ^ Youssef, Saul (1991). "A Reformulation of Quantum Mechanics" (PDF). Modern Physics Letters A. 6 (3): 225–236. doi:10.1142/S0217732391000191.
Youssef, Saul (1994). "Quantum Mechanics as Bayesian Complex Probability Theory". Modern Physics Letters A. 9 (28): 2571–2586. arXiv:hep-th/9307019. doi:10.1142/S0217732394002422. S2CID 18506337.
72. ^ Streater, R. F. (2007). Lost Causes in and beyond Physics. Springer. p. 70. ISBN 978-3-540-36581-5.
73. ^ Brukner, Časlav (2017-01-01). "On the Quantum Measurement Problem". In Bertlmann, Reinhold; Zeilinger, Anton (eds.). Quantum [Un]Speakables II. The Frontiers Collection. Springer International Publishing. pp. 95–117. arXiv:1507.05255. doi:10.1007/978-3-319-38987-5_5. ISBN 9783319389851. S2CID 116892322.
Marlow, Thomas (2006-03-07). "Relationalism vs. Bayesianism". arXiv:gr-qc/0603015.
Pusey, Matthew F. (2018-09-18). "An inconsistent friend". Nature Physics. 14 (10): 977–978. doi:10.1038/s41567-018-0293-7. S2CID 126294105.
74. ^ Pienaar, Jacques (2021). "QBism and Relational Quantum Mechanics compared". Foundations of Physics. 51 (5). doi:10.1007/s10701-021-00501-5. ISSN 0015-9018.
75. ^ Cabello, Adán; Gu, Mile; Gühne, Otfried; Larsson, Jan-Åke; Wiesner, Karoline (2016-01-01). "Thermodynamical cost of some interpretations of quantum theory". Physical Review A. 94 (5): 052127. arXiv:1509.03641. Bibcode:2016PhRvA..94e2127C. doi:10.1103/PhysRevA.94.052127. S2CID 601271.
76. ^ Smerlak, Matteo; Rovelli, Carlo (2007-02-26). "Relational EPR". Foundations of Physics. 37 (3): 427–445. arXiv:quant-ph/0604064. Bibcode:2007FoPh...37..427S. doi:10.1007/s10701-007-9105-0. ISSN 0015-9018. S2CID 11816650.
77. ^ Rovelli, Carlo (1996-08-01). "Relational quantum mechanics". International Journal of Theoretical Physics. 35 (8): 1637–1678. arXiv:quant-ph/9609002. Bibcode:1996IJTP...35.1637R. doi:10.1007/BF02302261. ISSN 0020-7748. S2CID 16325959.
78. ^ Tucci, Robert R. (1995-01-30). "Quantum bayesian nets". International Journal of Modern Physics B. 09 (3): 295–337. arXiv:quant-ph/9706039. Bibcode:1995IJMPB...9..295T. doi:10.1142/S0217979295000148. ISSN 0217-9792. S2CID 18217167.
79. ^ Moreira, Catarina; Wichert, Andreas (2016). "Quantum-Like Bayesian Networks for Modeling Decision Making". Frontiers in Psychology. 7: 11. doi:10.3389/fpsyg.2016.00011. PMC 4726808. PMID 26858669.
80. ^ Jones, K. R. W. (1991). "Principles of quantum inference". Annals of Physics. 207 (1): 140–170. Bibcode:1991AnPhy.207..140J. doi:10.1016/0003-4916(91)90182-8.
81. ^ Bužek, V.; Derka, R.; Adam, G.; Knight, P. L. (1998). "Reconstruction of Quantum States of Spin Systems: From Quantum Bayesian Inference to Quantum Tomography". Annals of Physics. 266 (2): 454–496. Bibcode:1998AnPhy.266..454B. doi:10.1006/aphy.1998.5802.
82. ^ Granade, Christopher; Combes, Joshua; Cory, D. G. (2016-01-01). "Practical Bayesian tomography". New Journal of Physics. 18 (3): 033024. arXiv:1509.03770. Bibcode:2016NJPh...18c3024G. doi:10.1088/1367-2630/18/3/033024. ISSN 1367-2630. S2CID 88521187.
83. ^ Størmer, E. (1969). "Symmetric states of infinite tensor products of C*-algebras". J. Funct. Anal. 3: 48–68. doi:10.1016/0022-1236(69)90050-0. hdl:10852/45014.
85. ^ J. Baez (2007). "This Week's Finds in Mathematical Physics (Week 251)". Retrieved 2017-04-18.
87. ^ Doherty, Andrew C.; Parrilo, Pablo A.; Spedalieri, Federico M. (2005-01-01). "Detecting multipartite entanglement" (PDF). Physical Review A. 71 (3): 032333. arXiv:quant-ph/0407143. Bibcode:2005PhRvA..71c2333D. doi:10.1103/PhysRevA.71.032333. S2CID 44241800.
88. ^ Chiribella, Giulio; Spekkens, Rob W. (2016). "Introduction". Quantum Theory: Informational Foundations and Foils. Fundamental Theories of Physics. Vol. 181. Springer. pp. 1–18. arXiv:1208.4123. doi:10.1007/978-94-017-7303-4. ISBN 978-94-017-7302-7. S2CID 118699215.
89. ^ Technical references on SIC-POVMs include the following:
Scott, A. J. (2006-01-01). "Tight informationally complete quantum measurements". Journal of Physics A: Mathematical and General. 39 (43): 13507–13530. arXiv:quant-ph/0604049. Bibcode:2006JPhA...3913507S. doi:10.1088/0305-4470/39/43/009. ISSN 0305-4470. S2CID 33144766.
Wootters, William K.; Sussman, Daniel M. (2007). "Discrete phase space and minimum-uncertainty states". arXiv:0704.1277 [quant-ph].
Appleby, D. M.; Bengtsson, Ingemar; Brierley, Stephen; Grassl, Markus; Gross, David; Larsson, Jan-Åke (2012-05-01). "The Monomial Representations of the Clifford Group". Quantum Information & Computation. 12 (5–6): 404–431. arXiv:1102.1268. Bibcode:2011arXiv1102.1268A. ISSN 1533-7146.
Hou, Zhibo; Tang, Jun-Feng; Shang, Jiangwei; Zhu, Huangjun; Li, Jian; Yuan, Yuan; Wu, Kang-Da; Xiang, Guo-Yong; Li, Chuan-Feng (2018-04-12). "Deterministic realization of collective measurements via photonic quantum walks". Nature Communications. 9 (1): 1414. arXiv:1710.10045. Bibcode:2018NatCo...9.1414H. doi:10.1038/s41467-018-03849-x. ISSN 2041-1723. PMC 5897416. PMID 29650977.
90. ^ Appleby, Marcus; Flammia, Steven; McConnell, Gary; Yard, Jon (2017-04-24). "SICs and Algebraic Number Theory". Foundations of Physics. 47 (8): 1042–1059. arXiv:1701.05200. Bibcode:2017FoPh..tmp...34A. doi:10.1007/s10701-017-0090-7. ISSN 0015-9018. S2CID 119334103.
91. ^ Fuchs, Christopher A.; Schack, Rüdiger (2010-01-08). "A Quantum-Bayesian Route to Quantum-State Space". Foundations of Physics. 41 (3): 345–356. arXiv:0912.4252. Bibcode:2011FoPh...41..345F. doi:10.1007/s10701-009-9404-8. ISSN 0015-9018. S2CID 119277535.
92. ^ Appleby, D. M.; Ericsson, Åsa; Fuchs, Christopher A. (2010-04-27). "Properties of QBist State Spaces". Foundations of Physics. 41 (3): 564–579. arXiv:0910.2750. Bibcode:2011FoPh...41..564A. doi:10.1007/s10701-010-9458-7. ISSN 0015-9018. S2CID 119296426.
93. ^ Rosado, José Ignacio (2011-01-28). "Representation of Quantum States as Points in a Probability Simplex Associated to a SIC-POVM". Foundations of Physics. 41 (7): 1200–1213. arXiv:1007.0715. Bibcode:2011FoPh...41.1200R. doi:10.1007/s10701-011-9540-9. ISSN 0015-9018. S2CID 119102347.
94. ^ a b c Appleby, Marcus; Fuchs, Christopher A.; Stacey, Blake C.; Zhu, Huangjun (2016-12-09). "Introducing the Qplex: A Novel Arena for Quantum Theory". The European Physical Journal D. 71 (7). arXiv:1612.03234. Bibcode:2017EPJD...71..197A. doi:10.1140/epjd/e2017-80024-y. S2CID 119240836.
95. ^ Słomczyński, Wojciech; Szymusiak, Anna (2020-09-30). "Morphophoric POVMs, generalised qplexes, and 2-designs". Quantum. 4: 338. arXiv:1911.12456. Bibcode:2019arXiv191112456S. doi:10.22331/q-2020-09-30-338. ISSN 2521-327X.
96. ^ Busch, Paul; Lahti, Pekka (2009-01-01). "Lüders Rule". In Greenberger, Daniel; Hentschel, Klaus; Weinert, Friedel (eds.). Compendium of Quantum Physics. Springer Berlin Heidelberg. pp. 356–358. doi:10.1007/978-3-540-70626-7_110. ISBN 9783540706229.
97. ^ van de Wetering, John (2018). "Quantum theory is a quasi-stochastic process theory". Electronic Proceedings in Theoretical Computer Science. 266 (2018): 179–196. arXiv:1704.08525. doi:10.4204/EPTCS.266.12. S2CID 53635011.
98. ^ Fuchs, Christopher A.; Stacey, Blake C. (2016-01-01). "Some Negative Remarks on Operational Approaches to Quantum Theory". In Chiribella, Giulio; Spekkens, Robert W. (eds.). Quantum Theory: Informational Foundations and Foils. Fundamental Theories of Physics. Springer Netherlands. pp. 283–305. arXiv:1401.7254. doi:10.1007/978-94-017-7303-4_9. ISBN 9789401773027. S2CID 116428784.
99. ^ Chiribella, Giulio; Cabello, Adán; Kleinmann, Matthias. "The Observer Observed: a Bayesian Route to the Reconstruction of Quantum Theory". FQXi: Foundational Questions Institute. Retrieved 2017-04-18.
External links
• Exotic Probability Theories and Quantum Mechanics: References
• Notes on a Paulian Idea: Foundational, Historical, Anecdotal and Forward-Looking Thoughts on the Quantum – Cerro Grande Fire Series, Volume 1
• My Struggles with the Block Universe – Cerro Grande Fire Series, Volume 2
• Why the multiverse is all about you – The Philosopher's Zone interview with Fuchs
• A Private View of Quantum Reality – Quanta Magazine interview with Fuchs
• Rüdiger Schack on quantum Bayesianism – Machine Intelligence Research Institute interview with Schack
• Participatory Realism – 2017 conference at the Stellenbosch Institute for Advanced Study
• Being Bayesian in a Quantum World – 2005 conference at the University of Konstanz
• Cabello, Adán (September 2017). "El puzle de la teoría cuántica: ¿Es posible zanjar científicamente el debate sobre la naturaleza del mundo cuántico?". Investigación y Ciencia.
• Fuchs, Christopher (presenter); Stacey, Blake (editor); Thisdell, Bill (editor) (2018-04-25). Some Tenets of QBism. YouTube. Retrieved 2018-05-17.
• DeBrota, John B.; Stacey, Blake C. (2018-10-31). "FAQBism". arXiv:1810.13401 [quant-ph]. |
98a4c07cc9a8fa5f | Checked content
Subject Index / Science / Physics
Background to the schools Wikipedia
SOS Children, which runs nearly 200 sos schools in the developing world, organised this selection. Before you decide about sponsoring a child, why not learn about different sponsorship charities first?
The physics articles in this 2008/9 Wikipedia schools selection have been divided into electricity and electronics, general physics, space transport, space and the planets. We have included Pluto in the planets since people might look for it there, not because we think Pluto is necessarily a planet.
ATLAS experiment Aberration of light Absolute zero
Acceleration Albert Einstein Angular momentum
Archimedes Atomic nucleus Atomic number
Big Bang Black hole Blaise Pascal
Boiling point Casimir effect Celsius
Color Condensed matter physics Dark matter
Density Electric charge Electricity
Electromagnetic radiation Electromagnetism Electron
Energy Entropy Euclidean vector
Force Fossil fuel Gas
Gravitation Half-life Heat
History of physics Ice Introduction to quantum mechanics
Introduction to special relativity Isaac Newton Isospin
Isotope Kelvin Kilogram
Kinetic energy Laser Light
Liquid List of particles Magnetism
Mass Matter Momentum
Motion (physics) Nature Neutron
Nobel Prize in Physics Nuclear fission Nuclear physics
Optical fiber Optical microscope Optics
Particle physics Phase (matter) Photon
Physical paradox Physics Plasma (physics)
Portal:Physics Proton Quantum field theory
Quark Radio frequency Redshift
Renormalization SI base unit Schrödinger equation
Second law of thermodynamics Solid Sound
Spacecraft propulsion Special relativity Speed of light
Spherical aberration Stable isotope Star
String theory Sun Supernova
Surface tension Telescope Temperature
Theory of relativity Thermodynamic temperature Third law of thermodynamics
Tide Time Time zone
Ultraviolet Wave Wave–particle duality
White dwarf Work (physics) Work (thermodynamics) |
6018b21d4e73dc91 | Understanding the Universe: From Probability to Quantum Theory
By Steven Gimbel, Ph.D., Gettysburg College
In the 20th century, chaos theory developed out of mathematical structures that scientists thought provided a picture of an elegant universe. But these mathematical structures actually revealed a much more complex and chaotic universe.
3D illustration of particle quantum entanglement.
Imaginative recreation of quantum forces at work. (Image: Jurik Peter/Shutterstock)
Dice and the Theory of Probability
Dice play a significant role in our understanding of probability and its relation to the universe. In 1654, a French nobleman, the Chevalier de Méré, noticed something while gambling. He was playing a game in which a pair of dice would be rolled 24 times and players bet on whether double sixes would be thrown or not. The Chevalier realized that he seemed to win more often when he bet against, but only slightly more often, than when he bet on.
He wanted to know if he was correct and contacted the famous French philosopher and mathematician Blaise Pascal. Pascal, in turn, got in touch with his friend and colleague, the great mathematician, Pierre de Fermat, and asked him. Fermat answered by creating the mathematical theory of probability, which helped prove that the Chevalier was, in fact, correct.
Learn more about a numerical way to make decisions.
Laplace and Probability in Science
Portrait of Pierre-Simon de Laplace (1745-1827), by James Posselwhite
Pierre-Simon Laplace was a French mathematician and philosopher, who wrote two books on probability. (Image: James Posselwhite/Public domain)
A century and a half later, Pierre-Simon Laplace, one of the greatest geniuses of the 19th century, became interested in extending Fermat’s notion of probability beyond games of chance to show how it functions in science. So, amidst all of his other great advances in physics, he wrote a pair of books on the subject.
Laplace’s first book was An Analytic Theory of Probability. Two years later, Laplace wrote another book called A Philosophical Essay on Probabilities. In it, Laplace argued that the use of probabilities in science is the result of our own lack of knowledge, not the result of a random world. In the second book, Laplace imagines an ‘intellect’, for whom ‘nothing would be uncertain, and the future, just like the past, would be present before its eyes’. This intellect had special powers.
Laplace’s Demon and the True Aim of Science
This intellect has been called ‘Laplace’s demon’. This demon was imagined to be capable of remembering an infinite amount of facts and would be able to compute with infinite quickness. Now give this super-brained demon two things: First, the true laws of nature, and second, complete information about all of the masses and energy in the universe at any one moment.
The demon could then predict with absolute certainty the state of the universe at any time in the future, or in the past. The universe, Laplace claimed, would be completely transparent to this mega-intellect. Laplace’s demon is the ultimate statement of the Enlightenment project embodied in Science.
The true aim of Science, according to this thought taken up by Laplace—and later, by Einstein and many others—is to develop a unified account that’s capable of predicting and explaining every event, every occurrence, everywhere. But, this makes four basic assumptions about science and the universe it’s trying to describe.
The Four Basic Assumptions about the Universe
The first assumption is that the universe is deterministic. This means that the state of the universe at any given time is completely determined by the state of the universe immediately before. If the universe is in state A, then it will always transition to state B. The second related assumption is that the rules have steady-state solutions. That means that the development of states over time is well-behaved and follows a simple pattern.
The third assumption is the stability of those steady-state solutions: that a small difference in initial the state makes only a small difference to the next state.
The fourth is predictability. The idea is that if we know the rules and the data, we can predict what is to come.
This would mean that the future is not only determined by the past, but determined in ways that are simple, elegant, and clean. Scientists use equations to describe the behavior of physical systems because mathematical language, the language of patterns, is presumed to apply to the behavior of the world.
However, as quantum theory developed in modern times, inherent randomness in the universe became apparent.
Learn more about randomness and its quantification through probability.
Playing Dice with the Universe
Erwin Schrödinger's picture taken in 1933 for the Nobel Commitee.
Erwin Schrödinger won the Nobel Prize for Physics for developing the Schrödinger equation which describes the wave function. (Image: Nobel Foundation/Public domain)
This unpredictability is apparent in many quantum solutions. For instance, Schrödinger’s equation for a physical system is a wave function; a mathematical combination of every possible state the system can occupy.
But the interesting thing is that we can never see all the states together. The moment an observer looks at it, only one of the many possible states is observed. This means that we are powerless to predict which state will be seen or observed, no matter how much we know about its past state.
It was this inability to determine the future from the past based on a complete scientific theory that upset Einstein. This led him to make his famous statement: ‘God does not play dice with the universe’. Einstein could not accept a random universe; he wanted it to be deterministic and predictable.
Common Questions about Probability and Quantum Theory
Q. How was the theory of probability created?
In 1654, the Chevalier de Méré noticed that he seemed to win more often in one type of game when he bet against the odds. He contacted Blaise Pascal, the mathematician. Pascal asked the great mathematician Pierre de Fermat. Fermat, in answering the Chevalier, created the mathematical theory of probability.
Q. What is Laplace’s Demon?
Laplace’s Demon is the name given to an intellect imagined by Laplace. This is an intellect capable of remembering an infinite amount of facts and would be able to compute with infinite quickness. Given enough knowledge, this intellect could correctly predict the state of the universe at any time in the future, or in the past.
Q. What is interesting about Schrödinger’s equation?
Schrödinger’s equation for a physical system is a wave function, which is a combination of all possible states. The interesting thing is that the moment an observer views the system, only one of all the possible states are observed. In addition, it can’t be predicted what state will be observed.
Q. What did Einstein mean by saying: ‘God does not play dice with the universe’?
By stating that ‘God does not play dice with the universe Einstein meant to say that he believed that the universe was deterministic and predictable.
Keep Reading
The Nature of Randomness
Our Random World—Probability Defined
How Einstein Solved the General Theory of Relativity |
a69f43d8e17fdb3c | The Einstein-Podolsky-Rosen Argument in Quantum Theory
First published Mon May 10, 2004; substantive revision Tue Oct 31, 2017
In the May 15, 1935 issue of Physical Review Albert Einstein co-authored a paper with his two postdoctoral research associates at the Institute for Advanced Study, Boris Podolsky and Nathan Rosen. The article was entitled “Can Quantum Mechanical Description of Physical Reality Be Considered Complete?” (Einstein et al. 1935). Generally referred to as “EPR”, this paper quickly became a centerpiece in debates over the interpretation of quantum theory, debates that continue today. Ranked by impact, EPR is among the top ten of all papers ever published in Physical Review journals. Due to its role in the development of quantum information theory, it is also near the top in their list of currently “hot“ papers. The paper features a striking case where two quantum systems interact in such a way as to link both their spatial coordinates in a certain direction and also their linear momenta (in the same direction), even when the systems are widely separated in space. As a result of this “entanglement”, determining either position or momentum for one system would fix (respectively) the position or the momentum of the other. EPR prove a general lemma connecting such strict correlations between spatially separated systems to the possession of definite values. On that basis they argue that one cannot maintain both an intuitive condition of local action and the completeness of the quantum description by means of the wave function. This entry describes the lemma and argument of that 1935 paper, considers several different versions and reactions, and explores the ongoing significance of the issues raised.
1.1 Setting and prehistory
By 1935 conceptual understanding of the quantum theory was dominated by Niels Bohr’s ideas concerning complementarity. Those ideas centered on observation and measurement in the quantum domain. According to Bohr’s views at that time, observing a quantum object involves an uncontrollable physical interaction with a measuring device that affects both systems. The picture here is of a tiny object banging into a big apparatus. The effect this produces on the measuring instrument is what issues in the measurement “result” which, because it is uncontrollable, can only be predicted statistically. The effect experienced by the quantum object limits what other quantities can be co-measured with precision. According to complementarity when we observe the position of an object, we affect its momentum uncontrollably. Thus we cannot determine both position and momentum precisely. A similar situation arises for the simultaneous determination of energy and time. Thus complementarity involves a doctrine of uncontrollable physical interaction that, according to Bohr, underwrites the Heisenberg uncertainty relations and is also the source of the statistical character of the quantum theory. (See the entries on the Copenhagen Interpretation and the Uncertainty Principle.)
Initially Einstein was enthusiastic about the quantum theory. By 1935, however, while recognizing the theory’s significant achievements, his enthusiasm had given way to disappointment. His reservations were twofold. Firstly, he felt the theory had abdicated the historical task of natural science to provide knowledge of significant aspects of nature that are independent of observers or their observations. Instead the fundamental understanding of the quantum wave function (alternatively, the “state function”, “state vector”, or “psi-function”) was that it only treated the outcomes of measurements (via probabilities given by the Born Rule). The theory was simply silent about what, if anything, was likely to be true in the absence of observation. That there could be laws, even probabilistic laws, for finding things if one looks, but no laws of any sort for how things are independently of whether one looks, marked quantum theory as irrealist. Secondly, the quantum theory was essentially statistical. The probabilities built into the state function were fundamental and, unlike the situation in classical statistical mechanics, they were not understood as arising from ignorance of fine details. In this sense the theory was indeterministic. Thus Einstein began to probe how strongly the quantum theory was tied to irrealism and indeterminism.
He wondered whether it was possible, at least in principle, to ascribe certain properties to a quantum system in the absence of measurement. Can we suppose, for instance, that the decay of an atom occurs at a definite moment in time even though such a definite decay time is not implied by the quantum state function? That is, Einstein began to ask whether the formalism provides a description of quantum systems that is complete. Can all physically relevant truths about systems be derived from quantum states? One can raise a similar question about a logical formalism: are all logical truths (or semantically valid formulas) derivable from the axioms. Completeness, in this sense, was a central focus for the Göttingen school of mathematical logic associated with David Hilbert. (See entry on Hilbert’s Program.) Werner Heisenberg, who had attended Hilbert’s lectures, picked up those concerns with questions about the completeness of his own, matrix approach to quantum mechanics. In response, Bohr (and others sympathetic to complementarity) made bold claims not just for the descriptive adequacy of the quantum theory but also for its “finality”, claims that enshrined the features of irrealism and indeterminism that worried Einstein. (See Beller 1999, Chapters 4 and 9, on the rhetoric of finality and Ryckman 2017, Chapter 4, for the connection to Hilbert.) Thus complementarity became Einstein’s target for investigation. In particular, Einstein had reservations about the uncontrollable physical effects invoked by Bohr in the context of measurement interactions, and about their role in fixing the interpretation of the wave function. EPR’s focus on completeness was intended to support those reservations in a particularly dramatic way.
Max Jammer (1974, pp. 166–181) locates the development of the EPR paper in Einstein’s reflections on a thought experiment he proposed during discussions at the 1930 Solvay conference. (For more on EPR and Solvay 1930 see Howard, 1990 and Ryckman, 2017, pp. 118–135.) The experiment imagines a box that contains a clock set to time precisely the release (in the box) of a photon with determinate energy. If this were feasible, it would appear to challenge the unrestricted validity of the Heisenberg uncertainty relation that sets a lower bound on the simultaneous uncertainty of energy and time. (See the entry on the Uncertainty Principle and also Bohr 1949, who describes the discussions at the 1930 conference.) The uncertainty relations, understood not just as a prohibition on what is co-measurable, but on what is simultaneously real, were a central component in the irrealist interpretation of the wave function. Jammer (1974, p. 173) describes how Einstein’s thinking about this experiment, and Bohr’s objections to it, evolved into a different photon-in-a-box experiment, one that allows an observer to determine either the momentum or the position of the photon indirectly, while remaining outside, sitting on the box. Jammer associates this with the distant determination of either momentum or position that, we shall see, is at the heart of the EPR paper. Carsten Held (1998) cites a related correspondence with Paul Ehrenfest from 1932 in which Einstein described an arrangement for the indirect measurement of a particle of mass m using correlations with a photon established through Compton scattering. Einstein’s reflections here foreshadow the argument of EPR, along with noting some of its difficulties.
Whatever their precursors, the ideas that found their way into EPR were discussed in a series of meetings between Einstein and his two assistants, Podolsky and Rosen. Podolsky was commissioned to compose the paper and he submitted it to Physical Review in March of 1935, where it was sent for publication the day after it arrived. Apparently Einstein never checked Podolsky’s draft before submission. He was not pleased with the result. Upon seeing the published version, Einstein complained that it obscured his central concerns.
For reasons of language this [paper] was written by Podolsky after several discussions. Still, it did not come out as well as I had originally wanted; rather, the essential thing was, so to speak, smothered by formalism [Gelehrsamkeit]. (Letter from Einstein to Erwin Schrödinger, June 19, 1935. In Fine 1996, p. 35.)
Unfortunately, without attending to Einstein’s reservations, EPR is often cited to evoke the authority of Einstein. Here we will distinguish the argument Podolsky laid out in the text from lines of argument that Einstein himself published in articles from 1935 on. We will also consider the argument presented in Bohr’s reply to EPR, which is possibly the best known version, although it differs from the others in important ways.
1.2 The argument in the text
The EPR text is concerned, in the first instance, with the logical connections between two assertions. One asserts that quantum mechanics is incomplete. The other asserts that incompatible quantities (those whose operators do not commute, like the x-coordinate of position and linear momentum in direction x) cannot have simultaneous “reality” (i.e., simultaneously real values). The authors assert the disjunction of these as a first premise (later to be justified): one or another of these must hold. It follows that if quantum mechanics were complete (so that the first assertion failed) then the second one would hold; i.e., incompatible quantities cannot have real values simultaneously. They take as a second premise (also to be justified) that if quantum mechanics were complete, then incompatible quantities (in particular coordinates of position and momentum) could indeed have simultaneous, real values. They conclude that quantum mechanics is incomplete. The conclusion certainly follows since otherwise (if the theory were complete) one would have a contradiction over simultaneous values. Nevertheless the argument is highly abstract and formulaic and even at this point in its development one can readily appreciate Einstein’s disappointment.
EPR now proceed to establish the two premises, beginning with a discussion of the idea of a complete theory. Here they offer only a necessary condition; namely, that for a complete theory “every element of the physical reality must have a counterpart in the physical theory.” The term “element“ may remind one of Mach, for whom this was a central, technical term connected to sensations. (See the entry on Ernst Mach.) The use in EPR of elements of reality is also technical but different. Although they do not define an “element of physical reality” explicitly (and, one might note, the language of elements is not part of Einstein’s usage elsewhere), that expression is used when referring to the values of physical quantities (positions, momenta, and so on) that are determined by an underlying “real physical state”. The picture is that quantum systems have real states that assign values to certain quantities. Sometimes EPR describe this by saying the quantities in question have “definite values”, sometimes “there exists an element of physical reality corresponding to the quantity”. Suppose we adapt the simpler terminology and call a quantity on a system definite if that quantity has a definite value; i.e., if the real state of the system assigns a value (an “element of reality”) to the quantity. The relation that associates real states with assignments of values to quantities is functional so that without a change in the real state there is no change among values assigned to quantities. In order to get at the issue of completeness, a primary question for EPR is to determine when a quantity has a definite value. For that purpose they offer a minimal sufficient condition (p. 777):
This sufficient condition for an “element of reality” is often referred to as the EPR Criterion of Reality. By way of illustration EPR point to those quantities for which the quantum state of the system is an eigenstate. It follows from the Criterion that at least these quantities have a definite value; namely, the associated eigenvalue, since in an eigenstate the corresponding eigenvalue has probability one, which we can determine (predict with certainty) without disturbing the system. In fact, moving from eigenstate to eigenvalue to fix a definite value is the only use of the Criterion in EPR.
With these terms in place it is easy to show that if, say, the values of position and momentum for a quantum system were definite (were elements of reality) then the description provided by the wave function of the system would be incomplete, since no wave function contains counterparts for both elements. Technically, no state function—even an improper one, like a delta function—is a simultaneous eigenstate for both position and momentum; indeed, joint probabilities for position and momentum are not well-defined in any quantum state. Thus they establish the first premise: either quantum theory is incomplete or there can be no simultaneously real (“definite”) values for incompatible quantities. They now need to show that if quantum mechanics were complete, then incompatible quantities could have simultaneous real values, which is the second premise. This, however, is not easily established. Indeed what EPR proceed to do is odd. Instead of assuming completeness and on that basis deriving that incompatible quantities can have real values simultaneously, they simply set out to derive the latter assertion without any completeness assumption at all. This “derivation” turns out to be the heart of the paper and its most controversial part. It attempts to show that in certain circumstances a quantum system can have simultaneous values for incompatible quantities (once again, for position and momentum), where these are definite values; that is, they are assigned by the real state of the system, hence are “elements of reality”.
They proceed by sketching an iconic thought experiment whose variations continue to be important and widely discussed. The experiment concerns two quantum systems that are spatially distant from one another, perhaps quite far apart, but such that the total wave function for the pair links both the positions of the systems as well as their linear momenta. In the EPR example the total linear momentum is zero along the x-axis. Thus if the linear momentum of one of the systems (we can call it Albert’s) along the x-axis were found to be p, the x-momentum of the other system (call it Niels’) would be found to be −p. At the same time their positions along x are also strictly correlated so that determining the position of one system on the x-axis allows us to infer the position of the other system along x. The paper constructs an explicit wave function for the combined (Albert+Niels) system that embodies these links even when the systems are widely separated in space. Although commentators later raised questions about the legitimacy of this wave function, it does appear to guarantee the required correlations for spatially separated systems, at least for a moment (Jammer 1974, pp. 225–38; see also Halvorson 2000). In any case, one can model the same conceptual situation in other cases that are clearly well defined quantum mechanically (see Section 3.1).
At this point of the argument (p. 779) EPR make two critical assumptions, although they do not call special attention to them. (For the significance of these assumptions in Einstein’s thinking see Howard 1985 and also section 5 of the entry on Einstein.) The first assumption (separability) is that at the time when the systems are separated, maybe quite far apart, each has its own reality. In effect, they assume that each system maintains a separate identity characterized by a real physical state, even though each system is also strictly correlated with the other in respect both to momentum and position. They need this assumption to make sense of another. The second assumption is that of locality. Given that the systems are far apart, locality supposes that “no real change can take place” in one system as a direct consequence of a measurement made on the other system. They gloss this by saying “at the time of measurement the two systems no longer interact.” Note that locality does not require that nothing at all about one system can be disturbed directly by a distant measurement on the other system. Locality only rules out that a distant measurement may directly disturb or change what is counted as “real“ with respect to a system, a reality that separability guarantees. On the basis of these two assumptions they conclude that each system can have definite values (“elements of reality”) for both position and momentum simultaneously. There is no straightforward argument for this in the text. Instead they use these two assumptions to show how one could be led to assign position and momentum eigenstates to one system by making measurements on the other system, from which the simultaneous attribution of elements of reality is supposed to follow. Since this is the central and most controversial part of the paper, it pays to go slowly here in trying to reconstruct an argument on their behalf.
Here is one attempt. (Dickson 2004 analyzes some of the modal principles involved and suggests one line of argument, which he criticizes. Hooker 1972 is a comprehensive discussion that identifies several generically different ways to make the case.) Locality affirms that the real state of a system is not affected by distant measurements. Since the real state determines which quantities are definite (i.e., have assigned values), the set of definite quantities is also not affected by distant measurements. So if by measuring a distant partner we can determine that a certain quantity is definite, then that quantity must have been definite all along. As we have seen, the Criterion of Reality implies that a quantity is definite if the state of the system is an eigenstate for that quantity. In the case of the strict correlations of EPR, measuring one system triggers a reduction of the joint state that results in an eigenstate for the distant partner. Hence any quantity with that eigenstate is definite. For example, since measuring the momentum of Albert’s system results in a momentum eigenstate for Niels’, the momentum of Niels’ system is definite. Likewise for the position of Niels’ system. Given separability, the combination of locality and the Criterion establish a quite general lemma; namely, when quantities on separated systems have strictly correlated values, those quantities are definite. Thus the strict correlations between Niels’ system and Albert’s in the EPR situation guarantee that both position and momentum are definite; i. e., that each system has definite position and momentum simultaneously.
EPR point out that position and momentum cannot be measured simultaneously. So even if each can be shown to be definite in distinct contexts of measurement, can both be definite at the same time? The lemma answers “yes”. What drives the argument is locality, which functions logically to decontextualize the reality of Niels’ system from goings on at Albert’s. Accordingly, measurements made on Albert’s system are probative for features corresponding to the real state of Niels’ system but not determinative of them. Thus even without measuring Albert’s system, features corresponding to the real state of Niels’ system remain in place. Among those features are a definite position and a definite momentum for Niels’ system along some particular coordinate direction.
The unreasonableness to which EPR allude in making “the reality [on the second system] depend upon the process of measurement carried out on the first system, which does not in any way disturb the second system” is just the unreasonableness that would be involved in renouncing locality understood as above. For it is locality that enables one to overcome the incompatibility of position and momentum measurements of Albert’s system by requiring their joint consequences for Niels’ system to be incorporated in a single, stable reality there. If we recall Einstein’s acknowledgment to Ehrenfest that getting simultaneous position and momentum was “not logically necessary”, we can see how EPR respond by making it become necessary once locality is assumed.
Here, then, are the key features of EPR.
• EPR is about the interpretation of state vectors (“wave functions”) and employs the standard state vector reduction formalism (von Neumann’s “projection postulate”).
• The Criterion of Reality affirms that the eigenvalue corresponding to the eigenstate of a system is a value determined by the real physical state of that system. (This is the Criterion’s only use.)
• (Separability) Spatially separated systems have real physical states.
• (Locality) If systems are spatially separate, the measurement (or absence of measurement) of one system does not directly affect the reality that pertains to the others.
• (EPR Lemma) If quantities on separated systems have strictly correlated values, those quantities are definite (i.e., have definite values). This follows from separability, locality and the Criterion. No actual measurements are required.
• (Completeness) If the description of systems by state vectors were complete, then definite values of quantities (values determined by the real state of a system) could be inferred from a state vector for the system itself or from a state vector for a composite of which the system is a part.
• In summary, separated systems as described by EPR have definite position and momentum values simultaneously. Since this cannot be inferred from any state vector, the quantum mechanical description of systems by means state vectors is incomplete.
The EPR experiment with interacting systems accomplishes a form of indirect measurement. The direct measurement of Albert’s system yields information about Niels’ system; it tells us what we would find if we were to measure there directly. But it does this at-a-distance, without any physical interaction taking place between the two systems. Thus the thought experiment at the heart of EPR undercuts the picture of measurement as necessarily involving a tiny object banging into a large measuring instrument. If we look back at Einstein’s reservations about complementarity, we can appreciate that by focusing on an indirect, non-disturbing kind of measurement the EPR argument targets Bohr’s program for explaining central conceptual features of the quantum theory. For that program relied on uncontrollable interaction with a measuring device as a necessary feature of any measurement in the quantum domain. Nevertheless the cumbersome machinery employed in the EPR paper makes it difficult to see what is central. It distracts from rather than focuses on the issues. That was Einstein’s complaint about Podolsky’s text in his June 19, 1935 letter to Schrödinger. Schrödinger responded on July 13 reporting reactions to EPR that vindicate Einstein’s concerns. With reference to EPR he wrote:
1.3 Einstein’s versions of the argument
In fact Einstein himself had tried this very route in May of 1927 where he proposed a way of “localizing the particle” by associating spatial trajectories and velocities with particle solutions to the Schrödinger equation. (See Belousek 1996 and Holland 2005; also Ryckman 2017.) Einstein abandoned the project and withdrew the draft from publication, however, after finding that certain intuitive independence conditions were in conflict with the product wave function used by quantum mechanics to treat the composition of independent systems. The problem here anticipates the more general issues raised by EPR over separability and composite systems. This proposal was Einstein’s one and only flirtation with the introduction of hidden variables into the quantum theory. In the following years he never embraced any proposal of that sort, although he hoped for progress in physics to yield a more complete theory, and one where the observer did not play a fundamental role. “We believe however that such a theory [“a complete description of the physical reality”] is possible” (p. 780). Commentators have often mistaken that remark as indicating Einstein’s predilection for hidden variables. To the contrary, after 1927 Einstein regarded the hidden variables project — the project of developing a more complete theory by starting with the existing quantum theory and adding things, like trajectories or real states — an improbable route to that goal. (See, for example, Einstein 1953a.) To improve on the quantum theory, he thought, would require starting afresh with quite different fundamental concepts. At Solvay he acknowledges Louis de Broglie’s pilot wave investigations as a possible direction to pursue for a more complete account of individual processes. But then he quickly turns to an alternative way of thinking, one that he continued to recommend as a better framework for progress, which is not to regard the quantum theory as describing individuals and their processes at all and, instead, to regard the theory as describing only ensembles of individuals. Einstein goes on to suggest difficulties for any scheme, like de Broglie’s and like quantum theory itself, that requires representations in multi-dimensional configuration space. These are difficulties that might move one further toward regarding quantum theory as not aspiring to a description of individual systems but as more amenable to an ensemble (or collective) point of view, and hence not a good starting point for building a better, more complete theory. His subsequent elaborations of EPR-like arguments are perhaps best regarded as no-go arguments, showing that the existing quantum theory does not lend itself to a sensible realist interpretation via hidden variables. If real states, taken as hidden variables, are added into the existing theory, which is then tailored to explain individual events, the result is either an incomplete theory or else a theory that does not respect locality. Hence, new concepts are needed. With respect to EPR, perhaps the most important feature of Einstein’s reflections at Solvay 1927 is his insight that a clash between completeness and locality already arises in considering a single variable (there, position) and does not require an incompatible pair, as in EPR.
In the letter to Schrödinger of June 19, Einstein points to a simple argument for the dilemma which, like the argument from the 1927 Solvay Conference, involves only the measurement of a single variable. Consider an interaction between the Albert and Niels systems that establishes a strict correlation between their positions. (We need not worry about momentum, or any other quantity.) Consider the evolved wave function for the total (Albert+Niels) system when the two systems are far apart. Now assume a principle of locality-separability (Einstein calls it a Trennungsprinzip—separation principle): Whether a determinate physical situation holds for Niels’ system (e.g., that a quantity has a particular value) does not depend on what measurements (if any) are made locally on Albert’s system. If we measure the position of Albert’s system, the strict correlation of positions implies that Niels’ system has a certain position. By locality-separability it follows that Niels’ system must already have had that position just before the measurement on Albert’s system. At that time, however, Niels’ system alone does not have a state function. There is only a state function for the combined system and that total state function does not single out an existing position for Niels’ system (i.e., it is not a product one of whose factors is an eigenstate for the position of Niels’ system). Thus the description of Niels’ system afforded by the quantum state function is incomplete. A complete description would say (definitely yes) if a quantity of Niels’ system had a certain value. (Notice that this argument does not even depend on the reduction of the total state function for the combined system.) In this formulation of the argument it is clear that locality-separability conflicts with the eigenvalue-eigenstate link, which holds that a quantity of a system has a value if and only if the state of the system is an eigenstate (or a proper mixture of eigenstates) of that quantity with that value as eigenvalue. The “only if” part of the link would need to be weakened in order to interpret quantum state functions as complete descriptions. (See the entry on Modal Interpretations and see Gilton 2016 for a history of the eigenvalue-eigenstate link.)
This argument rests on the ordinary and intuitive notion of completeness as not omitting relevant truths. Thus, in the argument, the description given by the state function of a system is judged incomplete when it fails to attribute a position to the system in circumstances where the system indeed has a position. Although this simple argument concentrates on what Einstein saw as the essentials, stripping away most technical details and distractions, he frequently used another argument involving more than one quantity. (It is actually buried in the EPR paper, p. 779, and a version also occurs in the June 19, 1935 letter to Schrödinger. Harrigan and Spekkens, 2010 suggest reasons for preferring a many-variables argument.) This second argument focuses clearly on the interpretation of quantum state functions in terms of “real states” of a system, and not on any issues about simultaneous values (real or not) for complementary quantities. It goes like this.
Suppose, as in EPR, that the interaction between the two systems links position and also linear momentum, and that the systems are far apart. As before, we can measure either the position or the momentum of Albert’s system and, in either case, we can infer (respectively) a position or a momentum for Niels’ system. It follows from the reduction of the total state function that, depending on whether we measure the position or the momentum of Albert’s system, Niels’ system will be left (respectively) either in a position eigenstate or in a momentum eigenstate. Suppose too that separability holds, so that Niels’ system has some real physical state of affairs. If locality holds as well, then the measurement of Albert’s system does not disturb the assumed “reality” for Niels’ system. However, that reality appears to be represented by quite different state functions, depending on which measurement of Albert’s system one chooses to carry out. If we understand a “complete description” to rule out that one and the same physical state can be described by state functions with distinct physical implications, then we can conclude that the quantum mechanical description is incomplete. Here again we confront a dilemma between separability-locality and completeness. Many years later Einstein put it this way (Schilpp 1949, p. 682);
(2) the real states of spatially separate objects are independent of each other.
It appears that the central point of EPR was to argue that any interpretation of quantum state functions that attributes real physical states to systems faces these alternatives. It also appears that Einstein’s different arguments make use of different notions of completeness. In the first argument completeness is an ordinary notion that amounts to not leaving out any relevant details. In the second, completeness is a technical notion which has been dubbed “bijective completeness“ (Fine 1996 ): no more than one quantum state should correspond to a real state. These notions are connected. If completeness fails in the bijective sense, and more than one quantum state corresponds to some real state, we can argue that the ordinary notion of completeness also fails. For distinct quantum states will differ in the values they assign to certain quantities. (For example, the observable corresponding to the projector on a state takes value 1 in one case but not in the other.) Hence each will omit something that the other affirms, so completeness in the ordinary sense will fail. Put differently, ordinary completeness implies bijective completeness. (The converse is not true. Even if the correspondence of quantum states to real states were one-to-one, the description afforded by a quantum state might still leave out some physically relevant fact about its corresponding real state.) Thus a dilemma between locality and “completeness“ in Einstein’s versions of the argument still implicates ordinary completeness. For if locality holds, then his two-variable argument shows that bijective completeness fails, and then completeness in the ordinary sense fails as well.
This line of thought evolves and dominates over problems with composite systems and locality in his last published reflections on incompleteness. Instead he focuses on problems with the stability of macro-descriptions in the transition to a classical level from the quantum.
The point is that after a year either the gunpowder will have exploded, or not. (This is the “real state” which in the EPR situation requires one to assume separability.) The state function, however, will have evolved into a complex superposition over these two alternatives. Provided we maintain the eigenvalue-eigenstate link, the quantum description by means of that state function will yield neither conclusion, and hence the quantum description is incomplete. For a contemporary response to this line of argument, one might look to the program of decoherence. (See Decoherence.) That program points to interactions with the environment which may quickly reduce the likelihood of any interference between the “exploded” and the “not-exploded” branches of the evolved psi-function. Then, breaking the eigenvalue-eigenstate link, decoherence adopts a perspective according to which the (almost) non-interfering branches of the psi-function allow that the gunpowder is indeed either exploded or not. Even so, decoherence fails to identify which alternative is actually realized, leaving the quantum description still incomplete. Such decoherence-based interpretations of the psi-function are certainly “artful”, and their adequacy is still under debate (see Schlosshauer 2007, especially Chapter 8).
The reader may recognize the similarity between Einstein’s exploding gunpowder example and Schrödinger’s cat (Schrödinger 1935a, p. 812). In the case of the cat an unstable atom is hooked up to a lethal device that, after an hour, is as likely to poison (and kill) the cat as not, depending on whether the atom decays. After an hour the cat is either alive or dead, but the quantum state of the whole atom-poison-cat system at this time is a superposition involving the two possibilities and, just as in the case of the gunpowder, is not a complete description of the situation (life or death) of the cat. The similarity between the gunpowder and the cat is hardly accidental since Schrödinger first produced the cat example in his reply of September 19, 1935 to Einstein’s August 8 gunpowder letter. There Schrödinger says that he has himself constructed “an example very similar to your exploding powder keg”, and proceeds to outline the cat (Fine 1996, pp. 82–83). Although the “cat paradox” is usually cited in connection with the problem of quantum measurement (see the relevant section of the entry on Philosophical Issues in Quantum Theory) and treated as a paradox separate from EPR, its origin is here as an argument for incompleteness that avoids the twin assumptions of separability and locality. Schrödinger’s development of “entanglement”, the term he introduced for the correlations that result when quantum systems interact, also began in this correspondence over EPR — along with a treatment of what he called quantum “steering” (Schrödinger 1935a, 1935b; see Quantum Entanglement and Information).
2. A popular form of the argument: Bohr’s response
The literature surrounding EPR contains yet another version of the argument, a popular version that—unlike any of Einstein’s—features the Criterion of Reality. Assume again an interaction between our two systems linking their positions and their linear momenta and suppose that the systems are far apart. If we measure the position of Albert’s system, we can infer that Niels’ system has a corresponding position. We can also predict it with certainty, given the result of the position measurement of Albert’s system. Hence, in this version, the Criterion of Reality is taken to imply that the position of Niels’ system constitutes an element of reality. Similarly, if we measure the momentum of Albert’s system, we can conclude that the momentum of Niels’ system is an element of reality. The argument now concludes that since we can choose freely to measure either position or momentum, it “follows” that both must be elements of reality simultaneously.
Bohr’s response to EPR begins, as do many of his treatments of the conceptual issues raised by the quantum theory, with a discussion of limitations on the simultaneous determination of position and momentum. As usual, these are drawn from an analysis of the possibilities of measurement if one uses an apparatus consisting of a diaphragm connected to a rigid frame. Bohr emphasizes that the question is to what extent can we trace the interaction between the particle being measured and the measuring instrument. (See Beller 1999, Chapter 7 for a detailed analysis and discussion of the “two voices” contained in Bohr’s account. See too Bacciagaluppi 2015.) Following the summary of EPR, Bohr (1935a, p. 700) then focuses on the Criterion of Reality which, he says, “contains an ambiguity as regards the meaning of the expression ‘without in any way disturbing a system’.” Bohr agrees that in the indirect measurement of Niels’ system achieved when one makes a measurement of Albert’s system “there is no question of a mechanical disturbance” of Niels’ system. Still, Bohr claims that a measurement on Albert’s system does involve “an influence on the very conditions which define the possible types of predictions regarding the future behavior of [Niels’] system.” The meaning of this claim is not at all clear. Indeed, in revisiting EPR fifteen years later, Bohr would comment,
Rereading these passages, I am deeply aware of the inefficiency of expression which must have made it very difficult to appreciate the trend of the argumentation (Bohr 1949, p. 234).
Unfortunately, Bohr takes no notice there of Einstein’s later versions of the argument and merely repeats his earlier response to EPR. In that response, however inefficiently, Bohr appears to be directing attention to the fact that when we measure, for example, the position of Albert’s system conditions are in place for predicting the position of Niels’ system but not its momentum. The opposite would be true in measuring the momentum of Albert’s system. Thus his “possible types of predictions” concerning Niels’ system appear to correspond to which variable we measure on Albert’s system. Bohr proposes then to block the EPR Criterion by counting, say, the position measurement of Albert’s system as an “influence” on the distant system of Niels. If we assume it is an influence that disturbs Niels’ system, then the Criterion could not be used, as in Bohr’s version of the argument, in producing an element of reality for Niels’ system that challenges completeness.
There are two important things to notice about this response. The first is this. In conceding that Einstein’s indirect method for determining, say, the position of Niels’ system does not mechanically disturb that system, Bohr departs from his original program of complementarity, which was to base the uncertainty relations and the statistical character of quantum theory on uncontrollable physical interactions, interactions that were supposed to arise inevitably between a measuring instrument and the system being measured. Instead Bohr now distinguishes between a genuine physical interaction (his “mechanical disturbance”) and some other sort of “influence” on the conditions for specifying (or “defining”) sorts of predictions for the future behavior of a system. In emphasizing that there is no question of a robust interaction in the EPR situation, Bohr retreats from his earlier, physically grounded conception of complementarity.
The second important thing to notice is how Bohr’s response needs to be implemented in order to block the argument of EPR and Einstein’s later arguments that pose a dilemma between principles of locality and completeness. In these arguments the locality principle makes explicit reference to the reality of the unmeasured system: the reality pertaining to Niels’ system does not depend on what measurements (if any) are made locally on Albert’s system. Hence Bohr’s suggestion that those measurements influence conditions for specifying types of predictions would not affect the argument unless one includes those conditions as part of the reality of Niels’ system. This is exactly what Bohr goes on to say, “these conditions constitute an inherent element of the description of any phenomena to which the term ‘physical reality’ can be properly attached” (Bohr 1935a, p. 700). So Bohr’s picture is that these “influences”, operating directly across any spatial distances, result in different physically real states of Niels’ system depending on the type of measurement made on Albert’s. (Recall EPR warning against just this move.)
The quantum formalism for interacting systems describes how a measurement on Albert’s system reduces the composite state and distributes quantum states and associated probabilities to the component systems. Here Bohr redescribes that formal reduction using EPR’s language of influences and reality. He turns ordinary local measurements into “influences” that automatically change physical reality elsewhere, and at any distance whatsoever. This grounds the quantum formalism in a rather magical ontological framework, a move quite out of character for the usually pragmatic Bohr. In his correspondence over EPR, Schrödinger compared ideas like that to ritual magic.
This assumption arises from the standpoint of the savage, who believes that he can harm his enemy by piercing the enemy’s image with a needle. (Letter to Edward Teller, June 14, 1935, quoted in Bacciagaluppi 2015)
It is as though EPR’s talk of “reality” and its elements provoked Bohr to adopt the position of Moliere’s doctor who, pressed to explain why opium is a sedative, invents an inherent dormative virtue, “which causes the senses to become drowsy.” Usually Bohr sharply deflates any attempt like this to get behind the formalism, insisting that “the appropriate physical interpretation of the symbolic quantum-mechanical formalism amounts only to predictions, of determinate or statistical character” (Bohr 1949, p. 238).
Could this portrait of nonlocal influences automatically shaping a distant reality be a by-product of Bohr’s “inefficiency of expression”? Despite Bohr’s seeming tolerance for a breakdown of locality in his response here to EPR, in other places Bohr rejects nonlocality in the strongest terms. For example in discussing an electron double slit experiment, which is Bohr’s favorite model for illustrating the novel conceptual features of quantum theory, and writing only weeks before the publication of EPR, Bohr argues as follows.
It is uncanny how closely Bohr’s language mirrors that of EPR. But here Bohr defends locality and regards the very contemplation of nonlocality as “irrational” and “completely incomprehensible”. Since “the circumstance of whether this [other] hole was open or closed” does affect the possible types of predictions regarding the electron’s future behavior, if we expand the concept of the electron’s “reality”, as he appears to suggest for EPR, by including such information, we do “disturb” the electron around one hole by opening or closing the other hole. That is, if we give to “disturb” and to “reality” the very same sense that Bohr appears to give them when responding to EPR, then we are led to an “incomprehensible” nonlocality, and into the territory of the irrational (like Schrödinger’s savage).
There is another way of trying to understand Bohr’s position. According to one common reading (see Copenhagen Interpretation), after EPR Bohr embraced a relational (or contextual) account of property attribution. On this account to speak of the position, say, of a system presupposes that one already has put in place an appropriate interaction involving an apparatus for measuring position (or at least an appropriate frame of reference for the measurement; Dickson 2004). Thus “the position” of the system refers to a relation between the system and the measuring device (or measurement frame). (See Relational Quantum Mechanics, where a similar idea is developed independently of measurements.) In the EPR context this would seem to imply that before one is set up to measure the position of Albert’s system, talk of the position of Niels’ system is out of place; whereas after one measures the position of Albert’s system, talk of the position of Niels’ system is appropriate and, indeed, we can then say truly that Niels’ system “has” a position. Similar considerations govern momentum measurements. It follows, then, that local manipulations carried out on Albert’s system, in a place we may assume to be far removed from Niels’ system, can directly affect what is meaningful to say about, as well as factually true of, Niels’ system. Similarly, in the double slit arrangement, it would follow that what can be said meaningfully and said truly about the position of the electron around the top hole would depend on the context of whether the bottom hole is open or shut. One might suggest that such relational actions-at-a-distance are harmless ones, perhaps merely “semantic”; like becoming the “best” at a task when your only competitor—who might be miles away—fails. Note, however, that in the case of ordinary relational predicates it is not inappropriate (or “meaningless”) to talk about the situation in the absence of complete information about the relata. So you might be the best at a task even if your competitor has not yet tried it, and you are definitely not an aunt (or uncle) until one of your siblings gives birth. But should we say that an electron is nowhere at all until we are set up to measure its position, or would it be inappropriate (meaningless?) even to ask?
In the light of all this it is difficult to know whether a coherent response can be attributed to Bohr reliably that would derail EPR. (In different ways, Dickson 2004 and Halvorson and Clifton 2004 make an attempt on Bohr’s behalf. These are examined in Whitaker 2004 and Fine 2007. See also the essays in Faye and Folse 2017.) Bohr may well have been aware of the difficulty in framing the appropriate concepts clearly when, a few years after EPR, he wrote,
3. Development of EPR
3.1 Spin and The Bohm version
For about fifteen years following its publication, the EPR paradox was discussed at the level of a thought experiment whenever the conceptual difficulties of quantum theory became an issue. In 1951 David Bohm, a protégé of Robert Oppenheimer and then an untenured Assistant Professor at Princeton University, published a textbook on the quantum theory in which he took a close look at EPR in order to develop a response in the spirit of Bohr. Bohm showed how one could mirror the conceptual situation in the EPR thought experiment by looking at the dissociation of a diatomic molecule whose total spin angular momentum is (and remains) zero; for instance, the dissociation of an excited hydrogen molecule into a pair of hydrogen atoms by means of a process that does not change an initially zero total angular momentum (Bohm 1951, Sections 22.15–22.18). In the Bohm experiment the atomic fragments separate after interaction, flying off in different directions freely to separate experimental wings. Subsequently, in each wing, measurements are made of spin components (which here take the place of position and momentum), whose measured values would be anti-correlated after dissociation. In the so-called singlet state of the atomic pair, the state after dissociation, if one atom’s spin is found to be positive with respect to the orientation of an axis perpendicular to its flight path, the other atom would be found to have a negative spin with respect to a perpendicular axis with the same orientation. Like the operators for position and momentum, spin operators for different non-orthogonal orientations do not commute. Moreover, in the experiment outlined by Bohm, the atomic fragments can move to wings far apart from one another and so become appropriate objects for assumptions that restrict the effects of purely local actions. Thus Bohm’s experiment mirrors the entangled correlations in EPR for spatially separated systems, allowing for similar arguments and conclusions involving locality, separability, and completeness. Indeed, a late note of Einstein’s, that may have been prompted by Bohm’s treatment, contains a very sketchy spin version of the EPR argument – once again pitting completeness against locality (“A coupling of distant things is excluded.” Sauer 2007, p. 882). Following Bohm (1951) a paper by Bohm and Aharonov (1957) went on to outline the machinery for a plausible experiment in which entangled spin correlations could be tested. It has become customary to refer to experimental arrangements involving determinations of spin components for spatially separated systems, and to a variety of similar set-ups (especially ones for measuring photon polarization), as “EPRB” experiments—“B” for Bohm. Because of technical difficulties in creating and monitoring the atomic fragments, however, there seem to have been no immediate attempts to perform a Bohm version of EPR.
3.2 Bell and beyond
That was to remain the situation for almost another fifteen years, until John Bell utilized the EPRB set-up to construct a stunning argument, at least as challenging as EPR, but to a different conclusion (Bell 1964). Bell considers correlations between measurement outcomes for systems in separate wings where the measurement axes of the systems differ by angles set locally. In his original paper, essentially using the lemma from EPR governing strict correlations, Bell shows that correlations measured in different runs of an EPRB experiment satisfy a system of constraints, known as the Bell inequalities. Later demonstrations by Bell and others, using related assumptions, extend this class of inequalities. In certain of these EPRB experiments, however, quantum theory predicts correlations that violate particular Bell inequalities by an experimentally significant amount. Thus Bell shows (see the entry on Bell’s Theorem) that the quantum statistics are inconsistent with the given assumptions. Prominent among these is an assumption of locality, similar to the locality assumptions tacitly assumed in EPR and (explicitly) in the one-variable and many-variable arguments of Einstein. One important difference is that for Einstein locality restricts factors that might influence the (assumed) real physical states of spatially separated systems (separability). For Bell, locality is focused instead on factors that might influence outcomes of measurements in experiments where both systems are measured. (See Fine 1996, Chapter 4.) These differences are not usually attended to and Bell’s theorem is often characterized simply as showing that quantum theory is nonlocal. Even so, since assumptions other then locality are needed in any derivation of the Bell inequalities (roughly, assumptions guaranteeing a classical representation of the quantum probabilities; see Fine 1982a, and Malley 2004), one should be cautious about singling out locality (in Bell’s sense, or Einstein’s) as necessarily in conflict with the quantum theory, or refuted by experiment.
Bell’s results have been explored and deepened by various theoretical investigations and they have stimulated a number of increasingly sophisticated and delicate EPRB-type experiments designed to test whether the Bell inequalities hold where quantum theory predicts they should fail. With a few anomalous exceptions, the experiments appear to confirm the quantum violations of the inequalities. (Brunner et al 2014 is a comprehensive technical review.) The confirmation is quantitatively impressive, although not fully conclusive. There are a number of significant requirements on the experiments whose failures (generally downplayed as “loopholes”) allow for models of the experimental data that embody locality (in Bell’s sense), so-called local realist models. One family of “loopholes” (sampling) arises from possible losses (inefficiency) between emission and detection and from the delicate coincidence timing required to compute correlations. All the early experiments to test the Bell inequalities were subject to this loophole, so all could be modeled locally and realistically. (The prism and synchronization models in Fine 1982b are early models of this sort. Larsson 2014 is a general review.) Another “loophole” (locality) concerns whether Niels’ system, in one wing, could learn about what measurements are set to be performed in Albert’s wing in time to adjust its behavior. Experiments insuring locality need to separate the wings and this can allow losses or timing glitches that open them to models exploiting sampling error. Perversely, experiments to address sampling may require the wings to be fairly close together, close enough generally, it turns out, to allow information sharing and hence local realist models. There are now a few experiments that claim to close both loopholes together. They too have problems. (See Bednorz 2017 for a critical discussion.)
There is also a third major complication or “loophole”. It arises from the need to ensure that causal factors affecting measurement outcomes are not correlated with the choices of measurement settings. Known as “measurement independence” or sometimes “free choice”, it turns out that even statistically small violations of this independence requirement allow for local realism (Putz and Gisin 2016). Since connections between outcomes and settings might occur anywhere in the causal past of the experiment, there is really no way to insure measurement independence completely. Suitably random choices of settings might avoid this loophole within the time frame of the experiment, or even extend that time some years into the past. An impressive, recent experiment pushes the time frame back about six hundred years by using the color of Milky Way starlight (blue or red photons) to choose the measurement settings. (Handsteiner et al 2017). Of course traveling between the Milky Way and the detectors in Vienna a lot of starlight is lost (over seventy per cent), which leaves the experiment wide open to the sampling loophole. Moreover, there is an obvious common cause for settings and outcomes (and all); namely, the big bang. With that in mind one might be inclined to dismiss free choice as not serious even for a “loophole”. It may seem like an ad hoc hypothesis that postulates a cosmic conspiracy on the part of Nature just to the save the Bell inequalities. Note, however, that ordinary inefficiency can also be modeled locally as a violation of free choice, because an individual measurement that produces no usable result can just as well be regarded as not currently available. Since inefficiency is not generally counted as a violation of local causality or a restriction on free will, nor as a conspiracy (well, not a cosmic one), measurement dependence should not be dismissed so quickly. Instead, one might see measurement dependent correlations as normal limitations in a system subject to dynamical constraints or boundary conditions, and thus use them as clues, along with other guideposts, in searching for a covering local theory. (See Weinstein 2009.)
Experimental tests of the Bell inequalities continue to be refined. Their analysis is delicate, employing sophisticated statistical models and simulations. (See Elkouss and Wehner 2016 and Graft 2016.) The significance of the tests remains a lively area for critical discussion. Meanwhile the techniques developed in the experiments, and related ideas for utilizing the entanglement associated with EPRB-type interactions, have become important in their own right. These techniques and ideas, stemming from EPRB and the Bell theorem, have applications now being advanced in the field of quantum information theory — which includes quantum cryptography, teleportation and computing (see Quantum Entanglement and Information).
To go back to the EPR dilemma between locality and completeness, it would appear from the Bell theorem that Einstein’s preference for locality at the expense of completeness may have fixed on the wrong horn. Even though the Bell theorem does not rule out locality conditions conclusively, it should certainly make one wary of assuming them. On the other hand, since Einstein’s exploding gunpowder argument (or Schrödinger’s cat), along with his later arguments over macro-systems, support incompleteness without assuming locality, one should be wary of adopting the other horn of the dilemma, affirming that the quantum state descriptions are complete and “therefore” that the theory is nonlocal. It may well turn out that both horns need to be rejected: that the state functions do not provide a complete description and that the theory is also nonlocal (although possibly still separable; see Winsberg and Fine 2003). There is at least one well-known approach to the quantum theory that makes a choice of this sort, the de Broglie-Bohm approach (Bohmian Mechanics). Of course it may also be possible to break the EPR argument for the dilemma plausibly by questioning some of its other assumptions (e.g., separability, the reduction postulate, the eigenvalue-eigenstate link, or measurement independence). That might free up the remaining option, to regard the theory as both local and complete. Perhaps some version of the Everett Interpretation would come to occupy this branch of the interpretive tree, or perhaps Relational Quantum Mechanics.
• Bacciagaluppi, G., 2015, “Did Bohr understand EPR?” in F. Aaserud and H. Kragh (eds.), One Hundred Years of the Bohr Atom (Scientia Danica, Series M, Mathematica et physica, Volume 1), Copenhagen: Royal Danish Academy of Sciences and Letters, pp. 377–396.
• Bednorz, A., 2017, “Analysis of assumptions of recent tests of local realism”, Physical Review A, 95: 042118.
• Bell, J. S., 1964, “On the Einstein-Podolsky-Rosen paradox”, Physics, 1: 195–200, reprinted in Bell 1987.
• Belousek, D. W., 1996, “Einstein’s 1927 unpublished hidden-variable theory: its background, context and significance”, Studies in History and Philosophy of Modern Physics, 27: 437–461.
• Bohm, D., and Y. Aharonov, 1957, “Discussion of experimental proof for the paradox of Einstein, Rosen and Podolski”, Physical Review, 108: 1070–1076.
• Bohr, N., 1935a, “Can quantum-mechanical description of physical reality be considered complete?”, Physical Review, 48: 696–702.
• –––, 1935b, “Space and time in nuclear physics”, Ms. 14, March 21, Manuscript Collection, Archive for the History of Quantum Physics, American Philosophical Society, Philadelphia.
• Brunner, N. et al., 2014, “Bell nonlocality”, Reviews of Modern Physics, 86: 419–478.
• Dickson, M., 2004, “Quantum reference frames in the context of EPR”, Philosophy of Science, 71: 655–668.
• Elkouss, D and S. Wehner, 2016, “ (Nearly) optimal P values for all Bell inequalities ”, NPJ Quantum Information, 2: 16026.
• Faye, J. and H. Folse, 2017, Niels Bohr and the Philosophy of Physics, London: Bloomsbury Academic.
• –––, 1982a, “Hidden variables, joint probability and the Bell inequalities”, Physical Review Letters, 48: 291–295.
• –––, 1982b, “Some local models for correlation experiments”, Synthese 50: 279–94.
• –––, 2007, “Bohr’s response to EPR: Criticism and defense”, Iyyun, The Jerusalem Philosophical Quarterly, 56: 31–56.
• Gilton, M. J. R., 2016, “Whence the eigenstate-eigenvalue link?”, Studies in History and Philosophy of Modern Physics, 55: 92–100.
• Graft, D. A., 2016, “ Clauser-Horne/Eberhard inequality violation by a local model”, Advanced Science, Engineering and Medicine, 8: 496–502.
• Halvorson, H., 2000, “The Einstein-Podolsky-Rosen state maximally violates Bell’s inequality”, Letters in Mathematical Physics, 53: 321–329.
• Halvorson, H. and R. Clifton, 2004, “Reconsidering Bohr’s reply to EPR.” In J. Butterfield and H. Halvorson, eds., Quantum Entanglements: Selected Papers of Rob Clifton, Oxford: Oxford University Press, pp. 369–393.
• Handsteiner, J. et al , 2017, “ Cosmic Bell test: Measurement settings from Milky Way stars”, Physical Review Letters, 118: 060401.
• Harrigan, N. and R. W., Spekkens, 2010, “Einstein, incompleteness, and the epistemic view of quantum states”, Foundations of Physics, 40: 125–157.
• Holland, P., 2005, “What’s wrong with Einstein’s 1927 hidden-variable interpretation of quantum mechanics?”, Foundations of Physics, 35: 177–196.
• Howard, D., 1985, “Einstein on locality and separability.” Studies in History and Philosophy of Science 16: 171–201.
• Howard, D., 1990, “‘Nicht Sein Kann Was Nicht Sein Darf’, or the Prehistory of EPR, 1909–1935”, in A. I. Miller (ed.), Sixty-Two Years of Uncertainty, New York: Plenum Press, pp. 61–111.
• Larsson, J.-A., 2014, “Loopholes in Bell inequality tests of local realism”, Journal of Physics A, 47: 424003.
• Malley, J., 2004, “All Quantum observables in a hidden-variable model must commute simultaneously”, Physical Review A, 69 (022118): 1–3.
• Putz, G. and N. Gisin, 2016, “Measurement dependent locality”, New Journal of Physics, 18: 05506.
• Ryckman, T., 2017, Einstein, New York and London: Routledge.
• –––, 1935b, “Discussion of probability relations between separated systems”, Proceedings of the Cambridge Philosophical Society, 31: 555–562.
• Trimmer, J. D., 1980, “The present situation in quantum mechanics: A translation of Schrödinger’s ‘cat paradox’ paper”, Proceedings of the American Philosophical Society, 124: 323–338
• Weinstein, S. 2009, “Nonlocality without nonlocality”, Foundations of Physics, 39: 921–936.
• Whitaker, M. A. B., 2004, “The EPR Paper and Bohr’s response: A re-assessment”, Foundations of Physics, 34: 1305–1340.
• Winsberg, E., and A. Fine, 2003, “Quantum life: Interaction, entanglement and separation”, Journal of Philosophy, C: 80–97.
Other Internet Resources
Copyright © 2017 by
Arthur Fine <>
Please note that some links may no longer be functional.
[an error occurred while processing this directive] |
0ad25ce5336ad091 | next up previous contents
Next: A DMC Algorithm Up: Diffusion Quantum Monte Carlo Previous: Diffusion Quantum Monte Carlo
The Method
The basis of DMC is to write the Schrödinger equation in imaginary time, taking
tex2html_wrap_inline6333 can be expanded as:
The tex2html_wrap_inline6335 's and tex2html_wrap_inline5849 's are the energy eigenvectors and eigenvalues respectively of the time-independent Schrödinger equation.
We can write down the formal solution of the imaginary-time Schrödinger equation, Eq.(gif),
noting that tex2html_wrap_inline6339 is the imaginary-time evolution operator. Furthermore, expressing tex2html_wrap_inline6333 in the form of Eq.(gif) implies that
Letting tex2html_wrap_inline6343 be tex2html_wrap_inline6345 and hence tex2html_wrap_inline6347 , the evolution time for the system implies that, as long as tex2html_wrap_inline6345 is not orthogonal to the ground-state, tex2html_wrap_inline6351 , then
where tex2html_wrap_inline6353 is the ground-state energy. This can be seen by remembering that all other states have higher energies, tex2html_wrap_inline5849 , than the groundstate energy tex2html_wrap_inline6353 , and will therefore decay away faster. In the tex2html_wrap_inline6359 representation, Eq.(gif) becomes:
We now introduce an arbitrary energy offset term tex2html_wrap_inline6195 , such that the imaginary-time Schrödinger equation, Eq.(gif), is recast as:
Then if tex2html_wrap_inline6195 is adjusted to be the true ground-state energy, tex2html_wrap_inline6353 , the asymptotic solution is a steady-state solution.
We now use the fact that the Schrödinger equation in imaginary time looks like the diffusion equation. Explicitly writing out the Hamiltonian in Eq.(gif), gives:
This is just a 3N-dimensional diffusion equation, with tex2html_wrap_inline6369 playing the role of the density of diffusing particles. The tex2html_wrap_inline6371 term is a rate term, and describes the branching (or creation/annihilation) processes. The entire equation can be simulated by a combination of a diffusion and branching processes, in which the number of diffusing particles increases or decreases at a given point proportional to the density of diffusers and the potential energy at that point in configuration space.
It turns out that solving Eq.(gif) this way is a very inefficient way to simulate the Schrödinger equation on a computer. This is because the branching rate, which is proportional to tex2html_wrap_inline6373 , can diverge to tex2html_wrap_inline6375 for systems of particles interacting via the Coulomb interaction. This leads to large fluctuations in the number of diffusing particles which leads to a large variance in the estimate of the energy. These fluctuations can be dramatically reduced by the introduction of importance sampling [28] in a similar way to the implementation in the VMC algorithm (see section gif).
We will follow the scheme of Ref.[29] for the introduction of importance sampling. The first step is to introduce a guiding function, tex2html_wrap_inline6377 . We now define a new distribution tex2html_wrap_inline6379 which, if tex2html_wrap_inline6369 satisfies the Schrödinger equation, is also a solution of the Schrödinger equation. Substituting tex2html_wrap_inline6383 into Eq.(gif) yields
Where tex2html_wrap_inline6385 can now be interpreted as a ``quantum force''. We follow [30] in defining this term as
and tex2html_wrap_inline6387 , the branching term as
which is defined in terms of the local energy of the guiding function
We now have a drift-diffusion equation for tex2html_wrap_inline6383 . The branching term is proportional to the ``excess local energy'', tex2html_wrap_inline6391 , which with a good choice of tex2html_wrap_inline6393 need not become singular when tex2html_wrap_inline6373 does. To control branching we need to choose tex2html_wrap_inline6393 such that tex2html_wrap_inline6399 is everywhere as smooth as possible, i.e. we want as little variance as possible in tex2html_wrap_inline6401 . Methods for optimising trial wavefunctions with exactly this property are described in detail in chapter gif. In general, the trial wavefunction, Eq.(gif), used as the input to a VMC calculation, makes a suitable choice of guiding wavefunction.
As well as reducing the fluctuations in the number of diffusing particles, tex2html_wrap_inline6393 also has another important role for fermionic systems. It determines the position of the nodes of the final wavefunction, due to the necessity of using the fixed-node approximation where the nodal structure of the exact groundstate wavefunction is assumed to be the same as the nodal structure of tex2html_wrap_inline6393 to ensure that f is always of the same sign (see section gif). The accuracy of the position of the nodes in tex2html_wrap_inline6393 therefore determines how good the estimate of the ground-state energy, tex2html_wrap_inline5997 , is. This can be seen by considering the fact that at long (imaginary) times the distribution tex2html_wrap_inline6383 approaches tex2html_wrap_inline6415 , up to the constraint (imposed by the fixed-node approximation) that tex2html_wrap_inline6417 must vanish at the nodes of tex2html_wrap_inline6377 . This implies that the long-time limit is the true fermionic ground-state if and only if the nodes of tex2html_wrap_inline6377 correspond to the exact nodes of the ground-state wavefunction. The fixed-node energy is an upper bound to the exact fermionic energy[31]. We simulate the equation within only a small number of nodal regions. Each walker moves within one nodal region and rejects all moves that attempt to cross the nodal surface into another nodal region. This contradicts the requirement that we stipulated earlier that the walk must be ergodic. The tiling theorem [32, 33], however, states that the nodal regions of the true ground-state eigenfunction of a system of identical fermions are all related by permutation symmetry. Furthermore the nodal regions of the determinant of LDA eigenfunctions are also related by permutation symmetry because the wavefunction is the ground-state of a Hamiltonian, although it is not the many-body Hamiltonian of the system we are interested in. This means that we can simulate the equation within one nodal region and still be guaranteed to obtain the ``best'' variational result.
Eq.(gif) can be written in integral form, in doing this we follow the procedure of Ref. [30]:
where tex2html_wrap_inline6423 is the Green's function for the case tex2html_wrap_inline6425 . The energy shift tex2html_wrap_inline6427 plays the role of an arbitrary time-dependent renormalisation, chosen in such a way that the probability distribution f remains finite and non-vanishing in the limit tex2html_wrap_inline6431 .
The three terms on the left-hand side of Eq.(gif) describe respectively, diffusion, drift and growth/decay. An approximate Green's function can be formed, with an error of tex2html_wrap_inline6433 for small tex2html_wrap_inline6435 , by the product of Green's functions for diffusion, drift and growth/decay[34]:
In order to deal with the nodes in the fermion ground-state we must use the fixed-node approximation, as stated earlier (this and the various schemes to try to improve upon it are discussed in section gif). What this actually entails is that if a move is such that a walker would cross the node then it is immediately rejected. In other words, the node acts as an infinite potential barrier. It should be noted that the use of importance sampling does not introduce any extra approximations beyond the fixed node approximation already mentioned. With the exception of the fixed node approximation it is possible to calculate exact energies (and expectation values of any other operator that commutes with the Hamiltonian, tex2html_wrap_inline5881 ) from the distribution f, using
next up previous contents
Andrew Williamson
Tue Nov 19 17:11:34 GMT 1996 |
8e9d846230655138 | Psychology Wiki
Copenhagen Interpretation
34,203pages on
this wiki
Add New Page
Talk0 Share
Quantum physics
Quantum psychology
Schrödinger cat
Quantum mechanics
Introduction to...
Mathematical formulation of...
Fundamental concepts
Decoherence · Interference
Uncertainty · Exclusion
Transformation theory
Ehrenfest theorem · Measurement
Double-slit experiment
Davisson-Germer experiment
Stern–Gerlach experiment
EPR paradox · Schrodinger's Cat
Schrödinger equation
Pauli equation
Klein-Gordon equation
Dirac equation
Advanced theories
Quantum field theory
Quantum electrodynamics
Quantum chromodynamics
Quantum gravity
Feynman diagram
Copenhagen · Quantum logic
Hidden variables · Transactional
Many-worlds · Many-minds · Ensemble
Consistent histories · Relational
Consciousness causes collapse
Orchestrated objective reduction
Bohm ·
The Copenhagen interpretation is an interpretation of quantum mechanics formulated by Niels Bohr and Werner Heisenberg while collaborating in Copenhagen around 1927. Bohr and Heisenberg extended the probabilistic interpretation of the wave function, proposed by Max Born. Their interpretation attempts to answer some perplexing questions which arise as a result of the quantum mechanics, such as wave-particle duality and the measurement problem.
The meaning of the wave functionEdit
-- Aage Petersen paraphrasing Niels Bohr, Quantum Reality by Nick Herbert
There is no definitive statement of the Copenhagen Interpretation [1]since it consists of the views developed by a number of scientists and philosophers at the turn of the 20th Century. The following have been associated with the Copenhagen interpretation
1. A system is completely described by a wave function \psi, which represents an observer's knowledge of the system. (Heisenberg)
2. The description of nature is essentially probabilistic. The probability of an event is related to the square of the amplitude of the wave function. (Max Born)
3. Heisenberg's uncertainty principle ensures that it is not possible to know the values of all of the properties of the system at the same time; those properties that are not known with precision must be described by probabilities.
4. (Complementary Principle) Matter exhibits a wave-particle duality. An experiment can show the particle like properties of matter, or wave-like properties, but not both at the same time.(Niels Bohr)
6. The Correspondence Principle of Bohr and Heisenberg. The quantum mechanical description of large systems should closely approximate to the classical description.
The Copenhagen Interpretation denies that the wave function is real, it is a mathematical tool for calculating probabilities of specific experiments. The concept of collapse of a "real" wave function was introduced by John Von Neumann and was not part of the original formulation of the Copenhagen Interpretation[How to reference and link to summary or text]. There are some who say that there are variants of the Copenhagen Interpretation that allow for a "real" wave function [2];, but it is questionable whether that view is really consistent with Positivism and some of Bohr's statements.
Niels Bohr emphasized that Science is concerned with the predictions of experiments, additional questions are not scientific but rather meta-physical. Bohr was heavily influenced by Positivism.
Acceptance among physicistsEdit
According to a poll at a Quantum Mechanics workshop in 1997, the Copenhagen interpretation is the most widely-accepted specific interpretation of quantum mechanics, followed by the Many-worlds interpretation.[1] Although current trends show substantial competition from alternative interpretations, throughout much of the twentieth century the Copenhagen interpretation had strong acceptance among physicists.
1. Schrödinger's Cat - A cat is put in a box with a radioactive source and a radiation detector. There is a 50-50 chance that a particle will be emitted and detected by the detector. If a particle is detected, a poisonous gas will be released and the cat killed. The wave function is in a 50-50 mixture of alive cat and dead cat. How can the cat be both alive and dead?
The Copenhagen Interpretation: The wave function reflects our knowledge of the system. The wave function (|dead\rangle + |alive\rangle)/\sqrt 2 simply means that there is a 50-50 chance that the cat is alive or dead.
2. Wigner's Friend - Wigner puts his friend in with the cat. The external observer believes the system is in the state (|dead\rangle + |alive\rangle)/\sqrt 2. His friend however is convinced that cat is alive. I.e. for him, the cat is in the state |alive>. How can Wigner and his friend see different wave functions?
The Copenhagen Interpretation: Wigner's friend highlights the subject nature of probability. Each observer (Wigner and his friend) have different information and therefore different wave functions. The distinction between the "objective" nature of reality and the subjective nature of probability has lead to a great deal of controversy. C.f. Bayesian versus Frequentist interpretations of probability.
3. Double Slit Diffraction - Light passes through double slits and onto a screen resulting in a diffraction pattern. Is light a particle or a wave?
The same experiment can in theory be performed with electrons, protons, atoms, molecules, viruses, bacteria, cats, humans, elephants and planets. In practice it has been performed for light, electrons, buckminsterfullerene, and some atoms. Matter in general exhibits both particle and wave behaviors.
4. EPR paradox. Entangled "particles" are emitted by a common source. Conservation laws ensure that the measured spin of one particle is the opposite of the measured spin of the other. The spin of one particle is measured. The spin of the other particle is now instantaneously known. (If the waveform is real, then one observer has caused the waveform to collapse instanteously).
The Copenhagen Interpretation: Assuming wave functions are not real, wave function collapse is interpreted subjectively. The moment one observer measures the spin of one particle, he knows the spin of the other. However another observer cannot benefit until the results of that measurement have been relayed to him, at less than or equal to the speed of light.
Copenhagenists claim that interpretations of quantum mechanics where the wave funtion is regarded as real have problems with EPR-type effects, since they imply that the laws of physics allow for influences to propagate at speeds greater than the speed of light. However, proponents of Many worlds [3] and the Transactional interpretation [4] [5] dispute that their theories are fatally non-local.
Criticisms Edit
Experimental tests of Bell's inequality using entangled particles have supported the predictions of quantum mechanics.
The Copenhagen Interpretation gives special status to measurement processes without cleanly defining them or explaining their peculiar effects. In his article entitled "Criticism and Counterproposals to the Copenhagen Interpretation of Quantum Theory," countering the view of Alexandrov that (in Heisenberg's paraphrase) "the wave function in configuration space characterizes the objective state of the electron." Heisenberg says,
-- Heisenberg, Physics and Philosophy, p. 137
Many physicists and philosophers have objected to the Copenhagen interpretation, both on the grounds that it is non-deterministic and that it includes an undefined measurement process that converts probability functions into non-probabilistic measurements. Einstein's comments "I, at any rate, am convinced that He (God) does not throw dice."[6] and "Do you really think the moon isn't there if you aren't looking at it?" exemplify this. Bohr, in response, said "Einstein, don't tell God what to do". Erwin Schrödinger devised the Schrödinger's cat experiment.
Alternatives Edit
The Ensemble Interpretation is similar; it offers an interpretation of the wave function, but not for single particles. The consistent histories interpretation advertises itself as "copenhagen done right". Consciousness causes collapse is often confused with the Copenhagen interpretation.
If the wave function is regarded as ontologically real, and collapse is entirely rejected, a many worlds theory results. If wave function collapse is regarded as ontologically real as well, an objective collapse theory is obtained. Dropping the principle that the wave function is a complete description results in a hidden variable theory.
Many physicists have subscribed to the null interpretation of quantum mechanics summarized by Paul Dirac's famous dictum "Shut up and calculate!" (often attributed to Richard Feynman).[7]
A list of alternatives can be found at Interpretation of quantum mechanics.
2. 'While participating in a colloquium at Cambridge, von Weizsäcker (1971) denied that the CI asserted: "What cannot be observed does not exist". He suggested instead that the CI follows the principle: "What is observed certainly exists; about what is not observed we are still free to make suitable assumptions. We use that freedom to avoid paradoxes."'John Cramer on the Copenhagen Interpretation
3. Michael price on nonlocality in Many Worlds
4. Relativity and Causality in the Transactional Interpretation
5. Collapse and Nonlocality in the Transactional Interpretation
6. "God does not throw dice" quote
7. "Shut up and calculate" quote.
See alsoEdit
Further readingEdit
Video DemonstrationEdit
External linksEdit
Ad blocker interference detected!
|
68c39e32c6eb0fee | måndag 19 december 2016
New Quantum Mechanics 21: Micro as Macro
The new quantum mechanics as realQM explored in this sequence of posts offers a model for the microscopic physics of atoms which is of the same form as the classical continuum mechanical models of macroscopic physics such as Maxwell's equations for electro-magnetics, Navier's equations for solid mechanics and Navier-Stokes equations for fluid mechanics in terms of deterministic field variables depending on a common 3d space coordinate and time.
realQM thus describes an atom with $N$ electrons realQM as a nonlinear system of partial differential equations in $N$ electronic wave functions depending on a common 3d space coordinate and time.
On the other hand, the standard model of quantum mechanics, referred to as stdQM, is Schrödinger's equation as a linear partial differential equation for a probabilistic wave function in $3N$ spatial coordinates and time for an atom with $N$ electrons.
With realQM the mathematical models for macroscopic and microscopic physics thus have the same form and the understanding of physics can then take the same form. Microphysics can then be understood to the same extent as macrophysics.
On the other hand, the understanding of microphysics according to stdQM is viewed to be fundamentally different from that of macroscopic physics, which effectively means that stdQM is not understood at all, as acknowledged by all prominent physicists.
As an example of the confusion on difference, consider what is commonly viewed to be a basic property of stdQM, namely that there is limit to the accuracy that both position and velocity can be determined on atomic scales, as expressed in Heisenberg's Uncertainty Principle (HUP).
This feature of stdQM is compared with the situation in macroscopic physics, where the claim is that both position and velocity can be determined to arbitrary precision, thus making the case that microphysics and microphysics are fundamentally different.
But the position of a macroscopic body cannot be precisely determined by one point coordinate, since a macroscopic body is extended in space and thus occupies many points in space. No one single point determines the position of and extended body. There is thus also a Macroscopic Uncertainty Principle (MUP).
The argument is then that if the macroscopic body is a pointlike particle, then both its position and velocity can have precise values and thus there is no MUP. But a pointlike body is not a macroscopic body and so the argument lacks logic.
The idea supported by stdQM that the microscopic world is so fundamentally different from the macroscopic world that it can never be understood, thus may well lack logic. If so that could open to understanding of microscopic physics for human beings with experience from macroscopic physics.
If you think that there is little need of making sense of stdQM, recall Feynman's testimony:
• We have always had a great deal of difficulty understanding the world view that quantum mechanics represents. At least I do, because I’m an old enough man that I haven’t got to the point that this stuff is obvious to me. Okay, I still get nervous with it ... You know how it always is: every new idea, it takes a generation or two until it becomes obvious that there’s no real problem. I cannot define the real problem, therefore I suspect that there is no real problem, but I’m not sure there’s no real problem. (Int. J. Theoret. Phys. 21, 471 (1982).)
It is total confusion, if it is totally unclear if there is a problem or no problem and it is totally clear that nobody understands stdQM....
Recall that stdQM is based on a linear multi-dimensional Schrödinger equation, which is simply picked from the sky using black magic ad hoc formalism, which could be anything, and is then taken as a revelation about real physics when interpreted by reversing the black magics.
This is like scribbling down a sign/equation at random without intentional meaning, and then giving the sign/equation an interpretation as if it had an original meaning, which may well be meaningless, instead of expressing a meaning in a sign/equation to discover consequences and deeper meaning.
fredag 16 december 2016
New Quantum Mechanics 20: Shell Structure
Further computational exploration of realQM supports the following electronic shell structure of an atom:
Electrons are partitioned into an increasing sequence of main spherical shells $S_1$, $S_2$,..,$S_M$ with each main shell $S_m$ subdivided into two half-spherical shells each of which for $m>2$ is divided into two angular directions into $m\times m$ electron domains thus with a total of $2m^2$ electrons in each full shell $S_m$. The case $m=2$ is special with the main shell divided radially into two subshells which are each divided into half-spherical subshells each of which is finally divided azimuthally, into $2\times 2$ electron domains for $S_2$ subshell, thus with a total of $2m^2$ electrons in each main shell $S_m$ when fully filled, for $m=1,...,M$, see figs below.
This gives the familiar sequence 2, 8, 18, 32,.. as the number of electrons in each main shell.
4 subshell of S_2
8 shell as variant of full S_2 shell
9=3x3 halfshell of S_3
The electron structure can thus be described as follows with parenthesis around main shells and radial subshell partition within parenthesis:
• (2)+(4+4)
• (2)+(4+4)+(2)
• ...
• (2)+(4+4)+(4+4)
• (2)+(4+4)+(8)+(2)
• ....
• (2)+(4+4)+(18)+(2)
• ...
• (2)+(4+4)+(18)+(8)
Below we show computed ground state energies assuming full spherical symmetry with a radial resolution of 1000 mesh points, where the electrons in each subshell are homogenised azimuthally, with the electron subshell structure indicated and table values in parenthesis. Notice that the 8 main shell structure is repeated so that in particular Argon with 18 electrons has the form 2+(4+4)+(4+4):
Lithium (2)+1: -7.55 (-7.48) 1st ionisation: (0.2)
Beryllium (2)+(2): -15.14 (-14.57) 1st ionisation: 0.5 (0.35)
Boron (2)+(2+1): -25.3 (-24.53) 1st ionisation: 0.2 (0.3)
Carbon (2)+(2+2): -38.2 (-37.7) 1st ionisation 0.5 (0.4)
Nitrogen (2)+(3+2): -55.3 (-54.4) 1st ionisation 0.5 (0.5)
Oxygen (2)+(3+3): -75.5 (-74.8) 1st ionisation 0.5 (0.5)
Fluorine (2)+(3+4): -99.9 (-99.5) 1st ionisation 0.5 (0.65)
Neon (2)+(4+4): -132.4 (-128.5 ) 1st ionisation 0.6 (0.8)
Sodium (2)+(4+4)+(1): -165 (-162)
Magnesium (2)+(4+4)+(2): -202 (-200)
Aluminium (2)+(4+4)+(2+1): -244 (-243)
Silicon (2)+(4+4)+(2+2): -291 (-290)
Phosphorus (2)+(4+4)+(3+2): -340 (-340)
Sulphur (2)+(4+4)+(4+2): -397 (-399)
Chlorine (2)+(4+4)+(3+4): -457 (-461)
Argon: (2)+(4+4)+(4+4): -523 (-526)
Calcium: (2)+(4+4)+(8)+(2): -670 (-680)
Titanium: (2)+(4+4)+(10)+(2): -848 (-853)
Chromium: (2)+(4+4)+(12)+(2): -1039 (-1050)
Iron: (2)+(4+4)+(14)+(2): -1260 (-1272)
Nickel: (2)+(4+4)+(16)+(2): -1516 (-1520)
Zinc: (2)+(4+4)+(18)+(2): -1773 (-1795)
Germanium: (2)+(4+4)+(18)+(2+2): -2089 (-2097)
Selenium: (2)+(4+4)+(18)+(4+2):- 2416 (-2428)
Krypton: (2)+(4+4)+(18)+(4+4): -2766 (-2788)
Xenon: (2)+(4+4)+(18)+(18)+(4+4): -7355 (-7438)
Radon: (2)+(4+4)+(18)+(32)+(18)+(4+4): -22800 (-23560)
We see good agreement even with the crude approximation of azimuthal homogenisation used in the computations.
To see the effect of the subshell structure we compare Neon: (2)+(4+4) with Neon: (2)+(8) without the (4+4) subshell structure, which has a ground state energy of -153, which is much smaller than the observed -128.5. We conclude that somehow the (4+4) subdivision of the second is preferred before a subdivision without subshells. The difference between (8) and (4+4) is the homogeneous Neumann condition acting between subshells, tending to increase the width of the shell and thus increase the energy.
The deeper reason for this preference remains to describe, but the intuition suggests that it relates to the shape or size of the domain occupied by an electron. With subshells electron domains are obtained by subdivision in both radial and azimuthal direction, while without subshells there is only azimuthal/angular subdivision of each shell.
We observe that ionisation energies, which are of similar size in different shells, become increasingly small as compared to ground state energies, and thus are delicate to compute as the difference between the ground state energies of atom and ion.
Here are sample outputs for Boron and Magnesium as functions of distance $r$ from the kernel along the horizontal axis :
We observe that the red curve depicting shell charge $\psi^2(r)r^2dr$ per shell radius increment $dr$, is roughly constant in radius $r$, as a possible emergent design principle. More precisely, $\psi (r)\sim \sqrt{Z}/r$ mathches with $d_m\sim m^2/Z$ and $r_m\sim m^3/Z$ with $d_m$ the width of shell $S_m$ and thus the width of the subshells of $S_m$ scaling with $m/Z$, and thus the width of electrons in $S_m$ scaling with $m/Z$.
We thus have $\sum_mm^2\sim M^3\sim Z$ and with $d_m\sim m^2/Z$ the atomic radius $\sum_md_m\sim M^3/Z\sim 1$ is basically the same for all atoms, in accordance with observation.
Further, the kernel potential energy and thus the total energy in $S_m$ scales with $Z^2/m$ and the total energy by summation over shells scales with $\log(M)Z^2\sim \log(Z)Z^2$, in close correspondence with $Z^{\frac{1}{3}}Z^2$ by density functional theory.
Recall that the electron configuration of stdQM is based on the eigen-functions for Schrödinger's equation for the Hydrogen atom with one electron, while as we have seen that of realQM rather relates to spatial partitioning. Of course, eigen-functions express some form of partitioning, and so there is a connection, but the basic problem may concern partitioning of many electrons rather than eigen-functions for one electron.
torsdag 8 december 2016
Quantum Mechanics as Theory Still Without Meaning
Yet another poll (with earlier polls in references) shows that physicists still today after 100 years of deep thinking and fierce debate show little agreement about the stature of quantum mechanics as the prime scientific advancement of modern physics.
The different polls indicate that less than 50% of all physicists today adhere to the Copenhagen Interpretation, as the main text book interpretation of quantum mechanics. This means that quantum mechanics today after 100 years of fruitless search for a common interpretation, remains a mystery without meaning. Theory without interpretation has no meaning and science without meaning cannot be real science.
If only 50% of physicists would agree on the meaning of the basic text book theories of classical physics embodied in Newton/Lagranges equations of motion, Navier's equation for solid mechanics, Navier-Stokes equations for fluid dynamics and Maxwell's equations for electromagnetic, that would signify a total collapse of classical physics as science and subject of academic study.
But this not so: classical physics is the role model of science because there is virtually no disagreement on the formulation and meaning of these basic equations.
But the polls show that there is no agreement on the role and meaning of Schrödinger's equation as the basis of quantum mechanics, and physicists do not seem to believe this will ever change. This is far from satisfactory from scientific point of view.
This is my motivation to search for a meaningful quantum mechanics in the form of realQM presented in recent posts. Of course you may say that for many reasons my chances of finding some meaning are very small, but science without meaning cannot be real science.
PS Lubos Motl, as a strong proponent of a textbook all-settled Copenhagen interpretation defined by himself, reacts to the polls with
• The foundations of quantum mechanics were fully built in the 1920s, mostly in 1925 or at most 1926, and by 1930, all the universal rules of the theory took their present form...as the Copenhagen interpretation. If you subtract all these rules, all this "interpretation", you will be left with no physical theory whatsoever. At most, you will be left with some mathematics – but pure mathematics can say nothing about the world around us or our perceptions.
• In virtually all questions, the more correct answers attracted visibly greater fractions of physicists than the wrong answers.
Lubos claims that more correct views, with the true correct views carried by only Lubos himself, gathers a greater fraction than less correct views, and so everything is ok from Lubos point of view. But is greater fraction sufficient from scientific point of view, as if scientific truth is to be decided by democratic voting? Shouldn't Lobos ask for 99.9% adherence to his one and only correct view? If physics is to keep its position as the king science?
Or is modern physics instead to be viewed as the root of modernity through a collapse of classical ideals of rationality, objectivity and causality? |
3a68dfc0f8718374 | Sunday, October 12, 2014
Mind control
Here's a pre-edited version of my piece for the Observer today, with a little bit more stuff still in it and some links. This was a great topic to research, and a bit disconcerting at times too.
Be careful what you wish for. That’s what Joel, played by Jim Carrey, discovers in Charlie Kaufmann’s 2004 film Eternal Sunshine of the Spotless Mind, when he asks a memory-erasure company Lacuna Inc. to excise the recollections of a painful breakup from his mind. While the procedure is happening, Joel realizes that he doesn’t want every happy memory of the relationship to vanish, and seeks desperately to hold on to a few fragments.
The movie offers a metaphor for how we are defined by our memories, how poignant is both their recall and their loss, and how unreliable they can be. So what if Lacuna’s process is implausible? Just enjoy the allegory.
Except that selective memory erasure isn’t implausible at all. It’s already happening.
Researchers and clinicians are now using drugs to suppress the emotional impact of traumatic memories. They have been able to implant false memories in flies and mice, so that innocuous environments or smells seem to be “remembered” as threatening. They are showing that memory is not like an old celluloid film, fixed but fading; it is constantly being changed and updated, and can be edited and falsified with alarming ease.
“I see a world where we can reactivate any kind of memory we like, or erase unwanted memories”, says neuroscientist Steve Ramirez of the Massachusetts Institute of Technology. “I even see a world where editing memories is something of a reality. We’re living in a time where it’s possible to pluck questions from the tree of science fiction and ground them in experimental reality.” So be careful what you wish for.
But while it’s easy to weave capabilities like this into dystopian narratives, most of which the movies have already supplied – the authoritarian memory-manipulation of Total Recall, the mind-reading police state of Minority Report, the dream espionage of Inception – research on the manipulation of memory could offer tremendous benefits. Already, people suffering from post-traumatic stress disorder (PTSD), such as soldiers or victims of violent crime, have found relief from the pain of their dark memories through drugs that suppress the emotional associations. And the more we understand about how memories are stored and recalled, the closer we get to treatments for neurodegenerative conditions such as Alzheimer’s and other forms of dementia.
So there are good motivations for exploring the plasticity of memory – how it can be altered or erased. And while there are valid concerns about potential abuses, they aren’t so very different from those that any biomedical advance accrues. What seems more fundamentally unsettling, but also astonishing, about this work is what it tells us about us: how we construct our identity from our experience, and how our recollections of that experience can deceive us. The research, says Ramirez, has taught him “how unstable our identity can be.”
Best forgotten
Your whole being depends on memory in ways you probably take for granted. You see a tree, and recognize it as a tree, and know it is called “tree” and that it is a plant that grows. You know your language, your name, your loved ones. Few things are more devastating, to the individual and those close to them, than the loss of these everyday facts. As the memories fade, the person seems to fade with them. Christopher Nolan’s film Memento echoes the case of Henry Molaison, who, after a brain operation for epilepsy in the 1950s, lost the ability to record short-term memories. Each day his carers had to introduce themselves to him anew.
Molaison’s surgery removed a part of his brain called the hippocampus, giving a clue that this region is involved in short-term memory. Yet he remembered events and facts learnt long ago, and could be taught new ones, indicating that long-term memory is stored somewhere else. Using computer analogies for the brain is risky, but it’s reasonable here to compare our short-term memory with a computer’s ephemeral working memory or RAM, and the long-term memory with the hard drive that holds information more durably. While short-term memory is associated with the hippocampus, long-term memory is more distributed throughout the cortex. Some information is stored long-term, such as facts and events we experience repeatedly or that have an emotional association; other items vanish within hours. If you look up the phone number of a plumber, you’ll probably have forgotten it by tomorrow, but you may remember the phone number of your family home from childhood.
What exactly do we remember? Recall isn’t total – you might retain the key aspects of a significant event but not what day of the week it was, or what you were wearing, or exactly what was said. Your memories are a mixed bag: facts, feelings, sights, smells. Ramirez points out that, while Eternal Sunshine implies that all these features of a memory are bundled up and stored in specific neurons in a single location in the brain, in fact it’s now clear that different aspects are stored in different locations. The “facts”, sometimes called episodic memory, are filed in one place, the feelings in another (generally in a brain region called the amygdala). All the same, those components of the memory do each have specific addresses in the vast network of our billions of neurons. What’s more, these fragments remain linked and can be recalled together, so that the event we reconstruct in our heads is seamless, if incomplete. “Memory feels very cohesive, but in reality it’s a reconstructive process”, says Ramirez.
Given all this filtering and parceling out, it’s not surprising that memory is imperfect. “The fidelity of memory is very poor”, says psychologist Alain Brunet of McGill University in Montreal. “We think we remember exactly what happens, but research demonstrates that this is a fallacy.” It’s our need for a coherent narrative that misleads us: the brain elaborates and fills in gaps, and we can’t easily distinguish the “truth” from the invention. You don’t need fancy technologies to mess with memory – just telling someone they experienced something they didn’t, or showing them digitally manipulated photos, can be enough to seed a false conviction. That, much more than intentional falsehood, is why eye-witness accounts may be so unreliable and contradictory.
It gets worse. One of the most extraordinary findings of modern neuroscience, reported in 2000 by neurobiologist Joseph LeDoux and his colleagues at New York University, is that each time you remember something, you have to rebuild the memory again. LeDoux’s team reported that when rats were conditioned to associate a particular sound with mild electric shocks, so that they showed a “freezing” fear response when they heard the sound subsequently, this association could be broken by infusing the animals’ amygdala with a drug called anisomycin. The sound then no longer provoked fear – but only if the drug was administered within an hour or so of the memory being evoked. Anisomycin disrupts biochemical processes that create proteins, and the researchers figured that this protein manufacture was essential for restoring a memory after it has arisen. This is called reconsolidation: it starts a few minutes after recall, and takes a few hours to complete.
So those security questions asking you for the name of your first pet are even more bothersome than you thought, because each time you have to call up the answer (sorry if I just made you do it again), your brain then has to write the memory back into long-term storage. A computer analogy is again helpful. When we work on a file, the computer makes a copy of the stored version and we work on that – if the power is cut, we still have the original. But as Brunet explains, “When we remember something, we bring up the original file.” If we don’t write it back into the memory, it’s gone.
This rewriting process can, like repeated photocopying, degrade the memory a little. But LeDoux’s work showed that it also offers a window for manipulating the memory. When we call it up, we have the opportunity to change it. LeDoux found that a drug called propranolol can weaken the emotional impact of a memory without affecting the episodic content. This means that the effect of painful recollections causing PTSD can be softened. Propranolol is already known to be safe in humans: it is a beta blocker used to treat hypertension, and (tellingly) also to combat anxiety, because it blocks the action of the stress hormone epinephrine in the amygdala. A team at Harvard Medical School has recently discovered that xenon, the inert gas used as an anaesthetic, can also weaken the reconsolidation of fear memories in rats. An advantage of xenon over propranolol is that it gets in and out of the brain very quickly, taking about three minutes each way. If it works well for humans, says Edward Meloni of the Harvard team, “we envisage that patients could self-administer xenon immediately after experiencing a spontaneous intrusive traumatic memory, such as awakening from a nightmare.” The timing of the drug relative to reactivation of the trauma memory may, he says, be critical for blocking the reconsolidation process.
These techniques are now finding clinical use. Brunet uses propranolol to treat people with PTSD, including soldiers returned from active combat, rape victims and people who have suffered car crashes. “It’s amazingly simple,” he says. They give the patients a pill containing propranolol, and then about an hour later “we evoke the memory by having patients write it down and then read it out.” That’s often not easy for them, he says – but they manage it. The patients are then asked to continue reading the script regularly over the next several weeks. Gradually they find that its emotional impact fades, even though the facts are recalled clearly.
“After three or four weeks”, says Brunet, “our patients say things like ‘I feel like I’m smiling inside, because I feel like I’m reading someone else’s script – I’m no longer personally gripped by it.’” They might feel empathy with the descriptions of the terrible things that happened to this person – but that person no longer feels like them. No “talking cure” could do that so quickly and effectively, while conventional drug therapies only suppress the symptoms. “Psychiatry hasn’t cured a single patient in sixty years”, Brunet says.
These cases are extreme, but aren’t even difficult memories (perhaps especially those) part of what makes us who we are? Should we really want to get rid of them? Brunet is confident about giving these treatments to patients who are struggling with memories so awful that life becomes a torment. “We haven’t had a single person say ‘I miss those memories’”, he says. After all, there’s nothing unnatural about forgetting. “We are in part the sum of our memories, and it’s important to keep them”, Brunet says. “But forgetting is part of the human makeup too. We’re built to forget.”
Yet it’s not exactly forgetting. While propranolol and xenon can modify a memory by dampening its emotional impact, the memory remains: PTSD patients still recall “what happened”, and even the emotions are only reduced, not eliminated. We don’t yet really understand what it means to truly forget something. Is it ever really gone or just impossible to recall? And what happens when we learn to overcome fearful memories – say, letting go of a childhood fear of dogs as we figure that they’re mostly quite friendly? “Forgetting is fairly ill-defined”, says neuroscientist Scott Waddell at the University of Oxford. “Is there some interfering process that out-competes the original memory, or does the original memory disappear altogether?” Some research on flies suggests that forgetting isn’t just a matter of decay but an active process in which the old memory is taken apart. Animal experiments have also revealed the spontaneous re-emergence of memories after they were apparently eliminated by re-training, suggesting that memories don’t vanish but are just pushed aside. “It’s really not clear what is going on”, Waddell admits.
Looking into a fly’s head
That’s not so surprising, though, because it’s not fully understood how memory works in the first place. Waddell is trying to figure that out – by training fruit flies and literally looking into their brains. What makes flies so useful is that it’s easy to breed genetically modified strains, so that the role of specific genes in brain activity can be studied by manipulating or silencing them. And the fruit fly is big and complex enough to show sophisticated behavior, such as learning to associate a particular odour with a reward like sugar, while being simple enough to comprehend – it has around 100,000 neurons, compared to our many billions.
What’s more, a fruit fly’s brain is transparent enough to look right through it under the microscope, so that one can watch neural processing while the fly is alive. By attaching fluorescent molecules to particular neurons, Waddell can identify the neural circuitry linked to a particular memory. In his lab in Oxford he showed me an image of a real fly’s brain: a haze of bluish-coloured neurons, with bright green spots and filaments that are, in effect, a snapshot of a memory. The memory might be along the lines of “Ah, that smell – the last time I followed it, it led to something tasty.”
How do you find the relevant neurons among thousands of others? The key is that when neurons get active to form a memory, they advertise their state of busyness. They produce specific proteins, which can be tagged with other light-emitting proteins by genetic engineering of the respective genes. One approach is to inject benign viruses that stitch the light-emission genes right next to the gene for the protein you want to tag; another is to engineer particular cells to produce a foreign protein to which the fluorescent tags will bind. When these neurons get to work forming a memory, they light up. Ramirez compares it to the way lights in the windows of an office block at night betray the location of workers inside.
This ability to identify and target individual memories has enabled researchers like Waddell and Ramirez to manipulate them experimentally in, well, mind-boggling ways. Rather than just watching memories form by fluorescent tagging, they can use tags that act as light-activated switches to turn gene activity on or off with laser light directed down an optical fibre into the brain. This technique, called optogenetics, is driving a revolution in neuroscience, Ramirez says, because it gives researchers highly selective control over neural activity – enabling them in effect to stimulate or suppress particular thoughts and memories.
Waddell’s lab is not a good place to bring a banana for lunch. The fly store is packed with shelves of glass bottles, each full of flies feasting on a lump of sugar at the bottom. Every bottle is carefully labeled to identify the genetic strain of the insects it contains: which genes have been modified. But surely they get out from time to time, I wonder – and as if on cue, a fly buzzes past. Is that a problem? “They don’t survive for long on the outside,” Waddell reassures me.
Having spent the summer cursing the plague of flies gathering around the compost bin in the kitchen, I’m given fresh respect for these creatures when I inspect one under the microscope and see the bejeweled splendor of its red eyes. It’s only sleeping: you can anaesthetize fruit flies with a puff of carbon dioxide. That’s important for mapping neurons to memories in the microscope, because there’s not much going on in the mind of a dead fly.
These brain maps are now pretty comprehensive. We know, for example, which subset of neurons (about 2,000 in all) is involved in learning to recognize odours, and which neurons can give those smells good or bad associations. And thanks to optogenetics, researchers have been able to switch on some of these “aversive” neurons while flies smell a particular odour, so that they avoid it even though they have actually experienced nothing bad (such as shock treatment) in its presence – in other words, you might say, to stimulate a fictitious false memory. For a fly, it’s not obvious that we can call this “fear”, Waddell says, but “it’s certainly something they don’t like”. In the same way, by using molecular switches that are flipped with heat rather than light, Waddell and his colleagues were able to give flies good vibes about a particular smell. Flies display these preferences by choosing to go in particular directions when they are placed in little plastic mazes, some of them masterfully engineered with little gear-operated gates courtesy of the lab’s 3D printer.
Ramirez, working in a team at MIT led by Susumu Tonegawa, has practiced similar deceptions on mice. In an experiment in 2012 they created a fear memory in a mouse by putting it in a chamber where it experienced mild electric shocks to the feet. While this memory was being laid down, the researchers used optogenetic methods to make the corresponding neurons, located in the hippocampus, switchable with light. Then they put the mouse in a different chamber, where it seemed perfectly at ease. But when they reactivated the fear memory with light, the mouse froze: suddenly it had bad feelings about this place.
That’s not exactly implanting a false memory, however, but just reactivating a true one. To genuinely falsify a recollection, the researchers devised a more elaborate experiment. First, they placed a mouse in a chamber and labeled the neurons that recorded the memory of that place with optogenetic switches. Then the mouse was put in a different chamber and given mild shocks – but while these were delivered, the memory of the first chamber was triggered using light. When the mouse was then put back in the first chamber it froze. Its memory insisted, now without any artificial prompting, that the first chamber was a nasty place, even though nothing untoward had ever happened there. It is not too much to say that a false reality had been directly written into the mouse’s brain.
You must remember this
The problem with memory is often not so much that we totally forget something or recall it wrongly, but that we simply can’t find it even though we know it’s in there somewhere. What triggers memory recall? Why does a fly only seem to recall a food-related odour when it is hungry? Why do we feel fear only if we’re in actual danger, and not all the time? Indeed, it is the breakdown of these normal cues that produces PTSD, where the fear response gets triggered in inappropriate situations.
A good memory is largely about mastering this triggering process. Participants in memory competitions that involve memorizing long sequences of arbitrary numbers are advised to “hook” the information onto easily recalled images. A patient named Solomon Shereshevsky, studied in the early twentieth century by the neuropsychologist Alexander Luria, exploited his condition of synaesthesia – the crosstalk between different sensory experiences such as sound and colour – to tag information with colours, images, sounds or tastes so that he seemed able to remember everything he heard or read. Cases like this show that there is nothing implausible about Jorge Luis Borges’ fictional character Funes the Memorious, who forgets not the slightest detail of his life. We don’t forget because we run out of brain space, even if it sometimes feels like that.
Rather than constructing a complex system of mnemonics, perhaps it is possible simply to boost the strength of the memory as it is imprinted. “We know that emotionally arousing situations are more likely to be remembered than mundane ones”, LeDoux has explained. “A big part of the reason is that in significant situations chemicals called neuromodulators are released, and they enhance the memory storage process.” So memory sticks when the brain is aroused: emotional associations will do it, but so might exercise, or certain drugs. And because of reconsolidation, it seems possible to enhance memory after it has already been laid down. LeDoux has found that a chemical called isoproterenol has the opposite effect from propranolol on reconsolidation of memory in rats, making fear memories even stronger as they are rewritten into long-term storage in the amygdala. If it works for humans too, he speculates that the drug might help people who have “sluggish” memories.
Couldn’t we all do with a bit of that, though? Ramirez regards chemical memory enhancement as perfectly feasible in principle, and in fact there is already some evidence that caffeine can enhance long-term memory. But then what is considered fair play? No one quibbles about students going into an exam buoyed up by an espresso, but where do we draw the line?
Mind control
It’s hard to come up with extrapolations of these discoveries that are too far-fetched to be ruled out. You can tick off the movies one by one. The memory erasure of Eternal Sunshine is happening right now to some degree. And although so far we know only how to implant a false memory if it has actually been experienced in another context, as our understanding of the molecular and cellular encoding of memory improves Ramirez thinks it might be feasible to construct memories “from the ground up”, as in Total Recall or the implanted childhood recollections of the replicant Rachael in Blade Runner. As Rachael so poignantly found out, that’s the way to fake a whole identity.
If we know which neurons are associated with a particular memory, we can look into a brain and know what a person is thinking about, just by seeing which neurons are active: we can mind-read, as in Minority Report. “With sufficiently good technology you could do that”, Ramirez affirms. “It’s just a problem of technical limitations.” By the same token, we might reconstruct or intervene in dreams, as in Inception (Ramirez and colleagues called their false-memory experiment Project Inception). Decoding the thought processes of dreams is “a very trendy area, and one people are quite excited about”, says Waddell.
How about chips implanted in the brain to control neural activity, Matrix-style? Theodore Berger of the University of Southern California has implanted microchips in rats’ brains that can duplicate the role of the hippocampus in forming long-term memories, recording the neural signals involved and then playing them back. His most recent research shows that the same technique of mimicking neural signals seems to work in rhesus monkeys. The US Defense Advanced Research Projects Agency (DARPA) has two such memory-prosthesis projects afoot. One, called SUBNETS, aims to develop wireless implant devices that could treat PTSD and other combat-related disorders. The other, called RAM (Restoring Active Memories), seeks to restore memories lost through brain injury that are needed for specialized motor skills, such as how to drive a car or operate machinery. The details are under wraps, however, and it’s not clear how feasible it will be to record and replay specific memories. LeDoux professes that he can’t imagine how it could work, given that long-term memories aren’t stored in a single location. To stimulate all the right sites, says Waddell, “you’d have to make sure that your implantation was extremely specific – and I can’t see that happening.”
Ramirez says that it’s precisely because the future possibilities are so remarkable, and perhaps so unsettling, that “we’re starting this conversation today so that down the line we have the appropriate infrastructure.” Are we wise enough to know what we want to forget, to remember, or to think we remember? Do we risk blanking out formative, instructive and precious experiences, or finding ourselves one day being told, as Deckard tells Rachael in Blade Runner, “those aren’t your memories – they’re someone else’s”?
“The problems are not with the current research, but with the question of what we might be able to do in 10-15 years,” says Brunet. It’s one thing to bring in legislation to restrict abuses, just as we do for other biomedical technologies. But the hardest arguments might be about not what we prohibit but what we allow. Should individuals be allowed to edit their own memories or have false ones implanted? Ramirez is upbeat, but insists that the ethical choices are not for scientists alone to thrash out. “We all have some really big decisions ahead of us,” he says.
Thursday, October 09, 2014
Do we tell the right stories about evolution?
A tale of many electrons
In what I hope might be a timely occasion with Nobel-fever in the air, here is my leader for the latest issue of Nature Materials. This past decision was a nice one for physics, condensed matter and materials – although curiously it was a chemistry prize.
Density functional theory, invented half a century ago, now supplies one of the most convenient and popular shortcuts for dealing with systems of many electrons. It was born in a fertile period when theoretical physics stretched from abstruse quantum field theory to practical electrical engineering.
It’s often pointed out that quantum theory is not just a source of counter-intuitive mystery but also an extraordinarily effective intellectual foundation for engineering. It supplies the theoretical basis for the transistor and superconductor, for understanding molecular interactions relevant from mineralogy to biology, and for describing the basic properties of all matter, from superhard alloys to high-energy plasmas. But popular accounts of quantum physics rarely pay more than lip service to this utilitarian virtue – there is little discussion of what it took to turn the ideas of Bohr, Heisenberg and Schrödinger into a theory that works at an everyday level.
One of the milestones in that endeavour occurred 50 years ago, when Pierre Hohenberg and Walter Kohn published a paper [1] that laid the foundations of density functional theory (DFT). This provided a tool for transforming the fiendishly complicated Schrödinger equation of a many-body system such as the atomic lattice of a solid into a mathematically tractable problem that enables the prediction of properties such as structure and electrical conductivity. The milieu in which this advance was formulated was rich and fertile, and from the distance of five decades it is hard not to idealize it as a golden age in which scientists could still see through the walls that now threaten to isolate disciplines. Kohn, exiled from his native Austria as a young Jewish boy during the Nazi era and educated in Canada, was located at the heart of this nexus. Schooled in quantum physics by Julian Schwinger at Harvard amidst peers including Philip Anderson, Rolf Landauer and Joaquin Luttinger, he was also familiar with the challenges of tangible materials systems such as semiconductors and alloys. In the mid-1950s Kohn worked as a consultant at Bell Labs, where the work of John Bardeen, Walter Brattain and William Shockley on transistors a few years earlier had generated a focus on the solid-state theory of semiconductors. And his ground-breaking paper with Hohenberg came from research on alloys at the Ecole Normale Supérieure in Paris, hosted by Philippe Nozières.
Now that DFT is so familiar a technique, used not only to understand electronic structures of molecules and materials but also as a semi-classical approach for studying the atomic structures of fluids, it is easy to forget what a bold hypothesis its inception required. In principle one may write the electron density n(r) of an N-electron system as the integral over space of the N-electron wavefunction, and then to use this to calculate the total energy of the system as a functional of n(r) and the potential energy v(r) of each electron interacting with all the fixed nuclei. (A functional here is a “function of a function” – the energy is a function of the function v(r), say.) Then one could do the calculation by invoking some approximation for the N-electron wavefunction. But Kohn inverted the idea: what if you didn’t start from the complicated N-body wavefunction, but just from the spatially varying electron density n(r)? That’s to say, maybe the external potential v(r), and thus the total energy (for the ground state of the system), depend only on the equilibrium n(r)? Then, that density function is all you needed to know. As Andrew Zangwill puts it in a recent commentary on Kohn’s career [2], “This was a deep question. Walter realized he wasn’t doing alloy theory any more.”
Kohn figured out a proof of this remarkable conjecture, but it seemed so simple that he couldn’t believe it hadn’t been noticed before. So he asked Hohenberg, a post-doc in Nozières’ lab, to help. Together the pair formulated a rigorous proof of the conjecture for the case of an inhomogeneous electron gas; since their 1964 paper, several other proofs have been found. That paper was formal and understated to the point of desiccation, and one needed to pay it close attention to see how remarkable the result was. The initial response was muted, and Hohenberg moved subsequently into other areas, such as hydrodynamics, phase transitions and pattern formation.
Kohn, however, went on to develop the idea into a practical method for calculating the electronic ground states of molecules and solids, working in particular with Hong Kong-born postdoc Lu-Jeu Sham. Their crucial paper3 was much more explicit about the potential of this approach as an approximation for calculating real materials properties of solids, such as cohesive energies and elastic constants, from quantum principles. It is now one of the most highly cited papers in all of physics, but was an example of a “sleeper”: still the community took some time to wake up to what was on offer. Not until the work of John Pople in the early 1990s did chemists begin to appreciate that DFT could offer a simple and convenient way to calculate electronic structures. It was that work which led to the 1998 Nobel prize in chemistry for Pople and Kohn – incongruous for someone so immersed in physics.
Zangwill argues that DFT defies the common belief that important theories reflect the Zeitgeist: it was an idea that was not in the air at all in the 1960s, and, says Zangwill, “might be unknown today if Kohn had not created it in the mid-1960s.” Clearly that’s impossible to prove. But there’s no mistaking the debt that materials and molecular sciences owe to Kohn’s insight, and so if Zangwill is right, all the more reason to ask if we still create the right sort of environments for such fertile ideas to germinate.
1. Hohenberg, P. & Kohn, W. Phys. Rev. 136, B864-871 (1964).
2. Zangwill, A., (2014).
3. Kohn, W. & Sham, L. J. Phys. Rev. 140, A1133-1138 (1965).
Wednesday, October 08, 2014
The moment of uncertainty
How did Heisenberg interpret it in physical terms?
Was Heisenberg disturbed by the implications of what he was doing?
Was anyone besides Heisenberg and Bohr troubled?
How did the uncertainty principle get communicated to a broader public?
How did the public react?
And today?
Has the uncertainty principle been used for serious philosophical purposes?
Uncertain about uncertainty
Tuesday, October 07, 2014
Waiting for the green (and blue) light
|
b2e9234637cd4d3e | Psychology Wiki
Eigenvalue, eigenvector and eigenspace
Revision as of 08:38, November 11, 2013 by Rotlink (Talk | contribs)
34,200pages on
this wiki
Mona Lisa with eigenvector
Fig. 1. In this shear mapping of the Mona Lisa, the picture was deformed in such a way that its central vertical axis (red vector) was not modified, but the diagonal vector (blue) has changed direction. Hence the red vector is an eigenvector of the transformation and the blue vector is not. Since the red vector was neither stretched nor compressed, its eigenvalue is 1. All vectors with the same vertical direction - i.e., parallel to this vector - are also eigenvectors, with the same eigenvalue. Together with the zero-vector, they form the eigenspace for this eigenvalue.
In mathematics, a vector may be thought of as an arrow. It has a length, called its magnitude, and it points in some particular direction. A linear transformation may be considered to operate on a vector to change it, usually changing both its magnitude and its direction. An eigenvector of a given linear transformation is a vector which is multiplied by a constant called the eigenvalue during that transformation. The direction of the eigenvector is either unchanged by that transformation (for positive eigenvalues) or reversed (for negative eigenvalues).
For example, an eigenvalue of +2 means that the eigenvector is doubled in length and points in the same direction. An eigenvalue of +1 means that the eigenvector is unchanged, while an eigenvalue of −1 means that the eigenvector is reversed in direction. An eigenspace of a given transformation is the span of the eigenvectors of that transformation with the same eigenvalue, together with the zero vector (which has no direction). An eigenspace is an example of a subspace of a vector space.
In linear algebra, every linear transformation between finite-dimensional vector spaces can be given by a matrix, which is a rectangular array of numbers arranged in rows and columns. Standard methods for finding eigenvalues, eigenvectors, and eigenspaces of a given matrix are discussed below.
These concepts play a major role in several branches of both pure and applied mathematics — appearing prominently in linear algebra, functional analysis, and to a lesser extent in nonlinear mathematics.
Many kinds of mathematical objects can be treated as vectors: functions, harmonic modes, quantum states, and frequencies, for example. In these cases, the concept of direction loses its ordinary meaning, and is given an abstract definition. Even so, if this abstract direction is unchanged by a given linear transformation, the prefix "eigen" is used, as in eigenfunction, eigenmode, eigenstate, and eigenfrequency.
Euler had also studied the rotational motion of a rigid body and discovered the importance of the principal axes. As Lagrange realized, the principal axes are the eigenvectors of the inertia matrix.[1] In the early 19th century, Cauchy saw how their work could be used to classify the quadric surfaces, and generalized it to arbitrary dimensions.[2] Cauchy also coined the term racine caractéristique (characteristic root) for what is now called eigenvalue; his term survives in characteristic equation.[3]
Fourier used the work of Laplace and Lagrange to solve the heat equation by separation of variables in his famous 1822 book Théorie analytique de la chaleur.[4] Sturm developed Fourier's ideas further and he brought them to the attention of Cauchy, who combined them with his own ideas and arrived at the fact that symmetric matrices have real eigenvalues.[2] This was extended by Hermite in 1855 to what are now called Hermitian matrices.[3] Around the same time, Brioschi proved that the eigenvalues of orthogonal matrices lie on the unit circle,[2] and Clebsch found the corresponding result for skew-symmetric matrices.[3] Finally, Weierstrass clarified an important aspect in the stability theory started by Laplace by realizing that defective matrices can cause instability.[2]
In the meantime, Liouville studied eigenvalue problems similar to those of Sturm; the discipline that grew out of their work is now called Sturm-Liouville theory.[5] Schwarz studied the first eigenvalue of Laplace's equation on general domains towards the end of the 19th century, while Poincaré studied Poisson's equation a few years later.[6]
At the start of the 20th century, Hilbert studied the eigenvalues of integral operators by viewing the operators as infinite matrices.[7] He was the first to use the German word eigen to denote eigenvalues and eigenvectors in 1904, though he may have been following a related usage by Helmholtz. "Eigen" can be translated as "own", "peculiar to", "characteristic" or "individual"—emphasizing how important eigenvalues are to defining the unique nature of a specific transformation. For some time, the standard term in English was "proper value", but the more distinctive term "eigenvalue" is standard today.[8]
The first numerical algorithm for computing eigenvalues and eigenvectors appeared in 1929, when Von Mises published the power method. One of the most popular methods today, the QR algorithm, was proposed independently by Francis and Kublanovskaya in 1961.[9]
Definitions: the eigenvalue equationEdit
See also: Eigenplane
Linear transformations of a vector space, such as rotation, reflection, stretching, compression, shear or any combination of these, may be visualized by the effect they produce on vectors. In other words, they are vector functions. More formally, in a vector space L a vector function A is defined if for each vector x of L there corresponds a unique vector y = A(x) of L. For the sake of brevity, the parentheses around the vector on which the transformation is acting are often omitted. A vector function A is linear if it has the following two properties:
additivity \ A(\mathbf{x}+\mathbf{y})=A(\mathbf{x})+A(\mathbf{y})
homogeneity \ A(\alpha \mathbf{x})=\alpha A(\mathbf{x})
where x and y are any two vectors of the vector space L and α is any real number. Such a function is variously called a linear transformation, linear operator, or linear endomorphism on the space L.
Given a linear transformation A, a non-zero vector x is defined to be an eigenvector of the transformation if it satisfies the eigenvalue equation A \mathbf{x} = \lambda \mathbf{x} for some scalar λ. In this situation, the scalar λ is called an eigenvalue of A corresponding to the eigenvector x.
The key equation in this definition is the eigenvalue equation, Ax = λx. Most vectors x will not satisfy such an equation. A typical vector x changes direction when acted on by A, so that Ax is not a multiple of x. This means that only certain special vectors x are eigenvectors, and only certain special numbers λ are eigenvalues. Of course, if A is a multiple of the identity matrix, then no vector changes direction, and all non-zero vectors are eigenvectors. But in the usual case, eigenvectors are few and far between. They are the "normal modes" of the system, and they act independently.[10]
The requirement that the eigenvector be non-zero is imposed because the equation A0 = λ0 holds for every A and every λ. Since the equation is always trivially true, it is not an interesting case. In contrast, an eigenvalue can be zero in a nontrivial way. An eigenvalue can be, and usually is, also a complex number. In the definition given above, eigenvectors and eigenvalues do not occur independently. Instead, each eigenvector is associated with a specific eigenvalue. For this reason, an eigenvector x and a corresponding eigenvalue λ are often referred to as an eigenpair. One eigenvalue can be associated with several or even with infinite number of eigenvectors. But conversely, if an eigenvector is given, the associated eigenvalue for this eigenvector is unique. Indeed, from the equality Ax = λx = λ'x and from x0 it follows that λ = λ'.[11]
File:Eigenvalue equation.svg
Geometrically (Fig. 2), the eigenvalue equation means that under the transformation A eigenvectors experience only changes in magnitude and sign — the direction of Ax is the same as that of x. This type of linear transformation is defined as homothety (dilatation[12], similarity transformation). The eigenvalue λ is simply the amount of "stretch" or "shrink" to which a vector is subjected when transformed by A. If λ = 1, the vector remains unchanged (unaffected by the transformation). A transformation I under which a vector x remains unchanged, Ix = x, is defined as identity transformation. If λ = –1, the vector flips to the opposite direction (rotates to 180°); this is defined as reflection.
If x is an eigenvector of the linear transformation A with eigenvalue λ, then any vector y = αx is also an eigenvector of A with the same eigenvalue. From the homogeneity of the transformation A it follows that Ay = α(Ax) = α(λx) = λ(αx) = λy. Similarly, using the additivity property of the linear transformation, it can be shown that any linear combination of eigenvectors with eigenvalue λ has the same eigenvalue λ.[13] Therefore, any non-zero vector in the line through x and the zero vector is an eigenvector with the same eigenvalue as x. Together with the zero vector, those eigenvectors form a subspace of the vector space called an eigenspace. The eigenvectors corresponding to different eigenvalues are linearly independent[14] meaning, in particular, that in an n-dimensional space the linear transformation A cannot have more than n eigenvectors with different eigenvalues.[15] The vectors of the eigenspace generate a linear subspace of A which is invariant (unchanged) under this transformation.[16]
If a basis is defined in vector space Ln, all vectors can be expressed in terms of components. Polar vectors can be represented as one-column matrices with n rows where n is the space dimensionality. Linear transformations can be represented with square matrices; to each linear transformation A of Ln corresponds a square matrix of rank n. Conversely, to each square matrix of rank n corresponds a linear transformation of Ln at a given basis. Because of the additivity and homogeneity of the linear trasformation and the eigenvalue equation (which is also a linear transformation — homothety), those vector functions can be expressed in matrix form. Thus, in a the two-dimensional vector space L2 fitted with standard basis, the eigenvector equation for a linear transformation A can be written in the following matrix representation:
\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \lambda \begin{bmatrix} x \\ y \end{bmatrix},
where the juxtaposition of matrices means matrix multiplication. This is equivalent to a set of n linear equations, where n is the number of basis vectors in the basis set. In these equations both the eigenvalue λ and the components of x are unknown variables.
The eigenvectors of A as defined above are also called right eigenvectors because they are column vectors that stand on the right side of the matrix A in the eigenvalue equation. If there exists a transposed matrix AT that satifies the eigenvalue equation, that is, if ATx = λx, then λxT = (λx)T = (ATx)T = xTA, or xTA = λxT. The last equation is similar to the eigenvalue equation but instead of the column vector x it contains its transposed vector, the row vector xT, which stands on the left side of the matrix A. The eigenvectors that satisfy the eigenvalue equation xTA = λxT are called left eigenvectors. They are row vectors.[17] In many common applications, only right eigenvectors need to be considered. Hence the unqualified term "eigenvector" can be understood to refer to a right eigenvector. Eigenvalue equations, written in terms of right or left eigenvectors (Ax = λx and xTA = λxT) have the same eigenvalue λ.[18]
An eigenvector is defined to be a principal or dominant eigenvector if it corresponds to the eigenvalue of largest magnitude (for real numbers, largest absolute value). Repeated application of a linear transformation to an arbitrary vector results in a vector proportional (collinear) to the principal eigenvector.[18]
The applicability the eigenvalue equation to general matrix theory extends the use of eigenvectors and eigenvalues to all matrices, and thus greatly extends the scope of use of these mathematical constructs not only to transformations in linear vector spaces but to all fields of science that use matrices: linear equations systems, optimization, vector and tensor calculus, all fields of physics that use matrix quantities, particularly quantum physics, relativity, and electrodynamics, as well as many engineering applications.
Characteristic equationEdit
Main article: Characteristic equation
Main article: Characteristic polynomial
The determination of the eigenvalues and eigenvectors is important in virtually all areas of physics and many engineering problems, such as stress calculations, stability analysis, oscillations of vibrating systems, etc. It is equivalent to matrix diagonalization, and is the first step of orthogonalization, finding of invariants, optimization (minimization or maximization), analysis of linear systems, and many other common applications.
The usual method of finding all eigenvectors and eigenvalues of a system is first to get rid of the unknown components of the eigenvectors, then find the eigenvalues, plug those back one by one in the eigenvalue equation in matrix form and solve that as a system of linear equations to find the components of the eigenvectors. From the identity transformation Ix = x, where I is the identity matrix, x in the eigenvalue equation can be replaced by Ix to give:
A \mathbf{x} = \lambda I \mathbf{x}
The identity matrix is needed to keep matrices, vectors, and scalars straight; the equation (A − λ) x = 0 is shorter, but mixed up since it does not differentiate between matrix, scalar, and vector.[19] The expression in the right hand side is transferred to left hand side with a negative sign, leaving 0 on the right hand side:
A \mathbf{x} - \lambda I \mathbf{x} = 0
The eigenvector x is pulled out behind parentheses:
(A - \lambda I) \mathbf{x} = 0
This can be viewed as a linear system of equations in which the coefficient matrix is the expression in the parentheses, the matrix of the unknowns is x, and the right hand side matrix is zero. According to Cramer's rule, this system of equations has non-trivial solutions (not all zeros, or not any number) if and only if its determinant vanishes, so the solutions of the equation are given by:
\det(A - \lambda I) = 0 \,
This equation is defined as the characteristic equation (less often, secular equation) of A, and the left-hand side is defined as the characteristic polynomial. The eigenvector x or its components are not present in the characteristic equation, so at this stage they are dispensed with, and the only unknowns that remain to be calculated are the eigenvalues (the components of matrix A are given, i. e, known beforehand). For a vector space L2, the transformation A is a 2 × 2 square matrix, and the characteristic equation can be written in the following form:
\begin{vmatrix} a_{11} - \lambda & a_{12}\\a_{21} & a_{22} - \lambda\end{vmatrix} = 0
Expansion of the determinant in the left hand side results in a characteristic polynomial which is a monic (its leading coefficient is 1) polynomial of the second degree, and the characteristic equation is the quadratic equation
\lambda^2 - \lambda (a_{11} + a_{22}) + (a_{11} a_{22} - a_{12} a_{21}) = 0, \,
which has the following solutions (roots):
\lambda_{1,2} = \frac{1}{2} \left [(a_{11} + a_{22}) \pm \sqrt{4a_{12} a_{21} + (a_{11} - a_{22})^2} \right ].
For real matrices, the coefficients of the characteristic polynomial are all real. The number and type of roots depends on the value of the discriminant, Δ. For cases Δ = 0, Δ > 0, or Δ < 0, respectively, the roots are one real, two real, or two complex. If the roots are complex, they are also complex conjugates of each other. When the number of roots is less than the degree of the characteristic polynomial (the latter is also the rank of the matrix, and the number of dimensions of the vector space) the equation has a multiple root. In the case of a quadratic equation with one root, this root is a double root, or a root with multiplicity 2. A root with a multiplicity of 1 is a simple root. A quadratic equation with two real or complex roots has only simple roots. In general, the algebraic multiplicity of an eigenvalue is defined as the multiplicity of the corresponding root of the characteristic polynomial. The spectrum of a transformation on a finite dimensional vector space is defined as the set of all its eigenvalues. In the infinite-dimensional case, the concept of spectrum is more subtle and depends on the topology of the vector space.
The general formula for the characteristic polynomial of an n-square matrix is
p(\lambda) = \sum_{k=0}^n (-1)^k S_k \lambda^{n-k},
where S0 = 1, S1 = tr(A), the trace of the transformation matrix A, and Sk with k > 1 are the sums of the principal minors of order k.[20] The fact that eigenvalues are roots of an n-order equation shows that a linear transformation of an n-dimensional linear space has at most n different eigenvalues.[21] According to the fundamental theorem of algebra, in a complex linear space, the characteristic polynomial has at least one zero. Consequently, every linear transformation of a complex linear space has at least one eigenvalue. [22][23] For real linear spaces, if the dimension is an odd number, the linear transformation has at least one eigenvalue; if the dimension is an even number, the number of eigenvalues depends on the determinant of the transformation matrix: if the determinant is negative, there exists at least one positive and one negative eigenvalue, if the determinant is positive nothing can be said about existence of eigenvalues.[24] The complexity of the problem for finding roots/eigenvalues of the characteristic polynomial increases rapidly with increasing the degree of the polynomial (the dimension of the vector space), n. Thus, for n = 3, eigenvalues are roots of the cubic equation, for n = 4 — roots of the quartic equation. For n > 4 there are no exact solutions and one has to resort to root-finding algorithms, such as Newton's method (Horner's method) to find numerical approximations of eigenvalues. For large symmetric sparse matrices, Lanczos algorithm is used to compute eigenvalues and eigenvectors.
In order to find the eigenvectors, the eigenvalues thus found as roots of the characteristic equations are plugged back, one at a time, in the eigenvalue equation written in a matrix form (illustrated for the simplest case of a two-dimensional vector space L2):
\left (\begin{bmatrix} a_{11} & a_{12}\\a_{21} & a_{22}\end{bmatrix} - \lambda \begin{bmatrix} 1 & 0\\0 & 1\end{bmatrix} \right ) \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} a_{11} - \lambda & a_{12}\\a_{21} & a_{22} - \lambda \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix},
where λ is one of the eigenvalues found as a root of the characteristic equation. This matrix equation is equivalent to a system of two linear equations:
\left ( a_{11} - \lambda \right ) x + a_{12} y = 0 \\
a_{21} x + \left ( a_{22} - \lambda \right ) y = 0
The equations are solved for x and y by the usual algebraic or matrix methods. Often, it is possible to divide both sides of the equations to one or more of the coefficients which makes some of the coefficients in front of the unknowns equal to 1. This is called normalization of the vectors, and corresponds to choosing one of the eigenvectors (the normalized eigenvector) as a representative of all vectors in the eigenspace corresponding to the respective eigenvalue. The x and y thus found are the components of the eigenvector in the coordinate system used (most often Cartesian, or polar).
Using the Cayley-Hamilton theorem which states that every square matrix satisfies its own characteristic equation, it can be shown that (most generally, in the complex space) there exists at least one non-zero vector that satisfies the eigenvalue equation for that matrix.[25] As it was said in the Definitions section, to each eigenvalue correspond an infinite number of colinear (linearly dependent) eigenvectors that form the eigenspace for this eigenvalue. On the other hand, the dimension of the eigenspace is equal to the number of the linearly independent eigenvectors that it contains. The geometric multiplicity of an eigenvalue is defined as the dimension of the associated eigenspace. A multiple eigenvalue may give rise to a single eigenvector so that its algebraic multiplicity may be different than the geometric multiplicity.[26] However, as already stated, different eigenvalues are paired with linearly independent eigenvectors.[14] From the aforementioned, it follows that the geometric multiplicity cannot be greater than the algebraic multiplicity.[27]
For instance, an eigenvector of a rotation in three dimensions is a vector located within the axis about which the rotation is performed. The corresponding eigenvalue is 1 and the corresponding eigenspace contains all the vectors along the axis. As this is a one-dimensional space, its geometric multiplicity is one. This is the only eigenvalue of the spectrum (of this rotation) that is a real number.
The examples that follow are for the simplest case of two-dimensional vector space L2 but they can easily be applied in the same manner to spaces of higher dimensions.
Homothety, identity, point reflection, and null transformationEdit
File:Homothety in two dim.svg
As a one-dimensional vector space L1, consider a rubber string tied to unmoving support in one end, such as that on a child's sling. Pulling the string away from the point of attachment stretches it and elongates it by some scaling factor λ which is a real number. Each vector on the string is stretched equally, with the same scaling factor λ, and although elongated it preserves its original direction. This type of transformation is called homothety (similarity transformation). For a two-dimensional vector space L2, consider a rubber sheet stretched equally in all directions such as a small area of the surface of an inflating balloon (Fig. 3). All vectors originating at a fixed point on the balloon surface are stretched equally with the same scaling factor λ. The homothety transformation in two-dimensions is described by a 2 × 2 square matrix, acting on an arbitrary vector in the plane of the stretching/shrinking surface. After doing the matrix multiplication, one obtains:
A \mathbf{x} = \begin{bmatrix}\lambda & 0\\0 & \lambda\end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix}\lambda . x + 0 . y \\0 . x + \lambda . y\end{bmatrix} = \lambda \begin{bmatrix} x \\ y \end{bmatrix} = \lambda \mathbf{x},
which, expressed in words, means that the transformation is equivalent to multiplying the length of the vector by λ while preserving its original direction. The equation thus obtained is exactly the eigenvalue equation. Since the vector taken was arbitrary, in homothety any vector in the vector space undergoes the eigenvalue equation, i. e. any vector lying on the balloon surface can be an eigenvector. Whether the transformation is stretching (elongation, extension, inflation), or shrinking (compression, deflation) depends on the scaling factor: if λ > 1, it is stretching, if λ < 1, it is shrinking.
Several other transformations can be considered special types of homothety with some fixed, constant value of λ: in identity which leaves vectors unchanged, λ = 1; in reflection about a point which preserves length and direction of vectors but changes their orientation to the opposite one, λ = −1; and in null transformation which transforms each vector to the zero vector, λ = 0. The null transformation does not give rise to an eigenvector since the zero vector cannot be an eigenvector but it has eigenspace since eigenspace contains also the zero vector by definition.
Unequal scalingEdit
For a slightly more complicated example, consider a sheet that is stretched uneqally in two perpendicular directions along the coordinate axes, or, similarly, stretched in one direction, and shrunk in the other direction. In this case, there are two different scaling factors: k1 for the scaling in direction x, and k2 for the scaling in direction y. The transformation matrix is \begin{bmatrix}k_1 & 0\\0 & k_2\end{bmatrix}, and the characteristic equation is λ2 − λ (k1 + k2) + k1k2 = 0. The eigenvalues, obtained as roots of this equation are λ1 = k1, and λ2 = k2 which means, as expected, that the two eigenvalues are the scaling factors in the two directions. Plugging k1 back in the eigenvalue equation gives one of the eigenvectors:
\begin{bmatrix}k_1 - k_1 & 0\\0 & k_2 - k_1\end{bmatrix} \begin{bmatrix} x \\ y\end{bmatrix} = \begin{cases}
\left ( k_1 - k_1 \right ) x + 0 . y \\
0 . x + \left ( k_2 - k_1 \right ) y
\end{cases} = \left ( k_2 - k_1 \right ) y = 0.
Dividing the last equation by k2k1, one obtains y = 0 which represents the x axis. A vector with lenght 1 taken along this axis represents the normalized eigenvector corresponding to the eigenvalue λ1. The eigenvector corresponding to λ2 which is a unit vector along the y axis is found in a similar way. In this case, both eigenvalues are simple (with algebraic and geometric multiplicities equal to 1). Depending on the values of λ1 and λ2, there are several notable special cases. In particular, if λ1 > 1, and λ2 = 1, the transformation is a stretch in the direction of axis x. If λ2 = 0, and λ1 = 1, the transformation is a projection of the surface L2 on the axis x because all vectors in the direction of y become zero vectors.
Let the rubber sheet is stretched along the x axis (k1 > 1) and simultaneously shrunk along the y axis (k2 < 1). Then λ1 = k1 will be the principal eigenvalue. Repeatedly applying this transformation of stretching/shrinking many times to the rubber sheet will turn the latter more and more similar to a rubber string. Any vector on the surface of the rubber sheet will be oriented closer and closer to the direction of the x axis (the direction of stretching), that is, it will become collinear with the principal eigenvector.
Mona LisaEdit
Mona Lisa with eigenvector
For the example shown on the right, the matrix that would produce a shear transformation similar to this would be
A=\begin{bmatrix}1 & 0\\ -\frac{1}{2} & 1\end{bmatrix}.
The set of eigenvectors \mathbf{x} for A is defined as those vectors which, when multiplied by A, result in a simple scaling \lambda of \mathbf{x}. Thus,
A\mathbf{x} = \lambda\mathbf{x}.
If we restrict ourselves to real eigenvalues, the only effect of the matrix on the eigenvectors will be to change their length, and possibly reverse their direction. So multiplying the right hand side by the Identity matrix I, we have
A\mathbf{x} = (\lambda I)\mathbf{x},
and therefore
(A-\lambda I)\mathbf{x}=0.
In order for this equation to have non-trivial solutions, we require the determinant \det(A - \lambda I), which is called the characteristic polynomial of the matrix A, to be zero. In our example we can calculate the determinant as
\det\!\left(\begin{bmatrix}1 & 0\\ -\frac{1}{2} & 1\end{bmatrix} - \lambda\begin{bmatrix}1 & 0\\ 0 & 1\end{bmatrix} \right)=(1-\lambda)^2,
and now we have obtained the characteristic polynomial (1-\lambda)^2 of the matrix A. There is in this case only one distinct solution of the equation (1-\lambda)^2 = 0, \lambda=1. This is the eigenvalue of the matrix A. As in the study of roots of polynomials, it is convenient to say that this eigenvalue has multiplicity 2.
Having found an eigenvalue \lambda=1, we can solve for the space of eigenvectors by finding the nullspace of A-(1)I. In other words by solving for vectors \mathbf{x} which are solutions of
\begin{bmatrix}1-\lambda & 0\\ -\frac{1}{2} & 1-\lambda \end{bmatrix}\begin{bmatrix}x_1\\ x_2\end{bmatrix}=0
Substituting our obtained eigenvalue \lambda=1,
\begin{bmatrix}0 & 0\\ -\frac{1}{2} & 0 \end{bmatrix}\begin{bmatrix}x_1\\ x_2\end{bmatrix}=0
Solving this new matrix equation, we find that vectors in the nullspace have the form
\mathbf{x} = \begin{bmatrix}0\\ c\end{bmatrix}
where c is an arbitrary constant. All vectors of this form, i.e. pointing straight up or down, are eigenvectors of the matrix A. The effect of applying the matrix A to these vectors is equivalent to multiplying them by their corresponding eigenvalue, in this case 1.
In general, 2-by-2 matrices will have two distinct eigenvalues, and thus two distinct eigenvectors. Whereas most vectors will have both their lengths and directions changed by the matrix, eigenvectors will only have their lengths changed, and will not change their direction, except perhaps to flip through the origin in the case when the eigenvalue is a negative number. Also, it is usually the case that the eigenvalue will be something other than 1, and so eigenvectors will be stretched, squashed and/or flipped through the origin by the matrix.
Other examplesEdit
Standing wave
Fig. 2. A standing wave in a rope fixed at its boundaries is an example of an eigenvector, or more precisely, an eigenfunction of the transformation giving the acceleration. As time passes, the standing wave is scaled by a sinusoidal oscillation whose frequency is determined by the eigenvalue, but its overall shape is not modified.
Assume the rope is a continuous medium. If one considers the equation for the acceleration at every point of the rope, its eigenvectors, or eigenfunctions, are the standing waves. The standing waves correspond to particular oscillations of the rope such that the acceleration of the rope is simply its shape scaled by a factor—this factor, the eigenvalue, turns out to be -\omega^2 where \omega is the angular frequency of the oscillation. Each component of the vector associated with the rope is multiplied by a time-dependent factor \sin(\omega t). If damping is considered, the amplitude of this oscillation decreases until the rope stops oscillating, corresponding to a complex ω. One can then associate a lifetime with the imaginary part of ω, and relate the concept of an eigenvector to the concept of resonance. Without damping, the fact that the acceleration operator (assuming a uniform density) is Hermitian leads to several important properties, such as that the standing wave patterns are orthogonal functions.
However, it is sometimes unnatural or even impossible to write down the eigenvalue equation in a matrix form. This occurs for instance when the vector space is infinite dimensional, for example, in the case of the rope above. Depending on the nature of the transformation T and the space to which it applies, it can be advantageous to represent the eigenvalue equation as a set of differential equations. If T is a differential operator, the eigenvectors are commonly called eigenfunctions of the differential operator representing T. For example, differentiation itself is a linear transformation since
(f(t) and g(t) are differentiable functions, and a and b are constants).
Consider differentiation with respect to t. Its eigenfunctions h(t) obey the eigenvalue equation:
\displaystyle\frac{dh}{dt} = \lambda h,
where λ is the eigenvalue associated with the function. Such a function of time is constant if \lambda = 0, grows proportionally to itself if \lambda is positive, and decays proportionally to itself if \lambda is negative. For example, an idealized population of rabbits breeds faster the more rabbits there are, and thus satisfies the equation with a positive lambda.
The solution to the eigenvalue equation is g(t)= \exp (\lambda t), the exponential function; thus that function is an eigenfunction of the differential operator d/dt with the eigenvalue λ. If λ is negative, we call the evolution of g an exponential decay; if it is positive, an exponential growth. The value of λ can be any complex number. The spectrum of d/dt is therefore the whole complex plane. In this example the vector space in which the operator d/dt acts is the space of the differentiable functions of one variable. This space has an infinite dimension (because it is not possible to express every differentiable function as a linear combination of a finite number of basis functions). However, the eigenspace associated with any given eigenvalue λ is one dimensional. It is the set of all functions g(t)= A \exp (\lambda t), where A is an arbitrary constant, the initial population at t=0.
Spectral theoremEdit
For more details on this topic, see spectral theorem.
In its simplest version, the spectral theorem states that, under certain conditions, a linear transformation of a vector \mathbf{v} can be expressed as a linear combination of the eigenvectors, in which the coefficient of each eigenvector is equal to the corresponding eigenvalue times the scalar product (or dot product) of the eigenvector with the vector \mathbf{v}. Mathematically, it can be written as:
\mathcal{T}(\mathbf{v})= \lambda_1 (\mathbf{v}_1 \cdot \mathbf{v}) \mathbf{v}_1 + \lambda_2 (\mathbf{v}_2 \cdot \mathbf{v}) \mathbf{v}_2 + \cdots
where \mathbf{v}_1, \mathbf{v}_2, \dots and \lambda_1, \lambda_2, \dots stand for the eigenvectors and eigenvalues of \mathcal{T}. The theorem is valid for all self-adjoint linear transformations (linear transformations given by real symmetric matrices and Hermitian matrices), and for the more general class of (complex) normal matrices.
If one defines the nth power of a transformation as the result of applying it n times in succession, one can also define polynomials of transformations. A more general version of the theorem is that any polynomial P of \mathcal{T} is given by
P(\mathcal{T})(\mathbf{v}) = P(\lambda_1) (\mathbf{v}_1 \cdot \mathbf{v}) \mathbf{v}_1 + P(\lambda_2) (\mathbf{v}_2 \cdot \mathbf{v}) \mathbf{v}_2 + \cdots
The theorem can be extended to other functions of transformations, such as analytic functions, the most general case being Borel functions.
Main article: Eigendecomposition (matrix)
The spectral theorem for matrices can be stated as follows. Let \mathbf{A} be a square (n\times n) matrix. Let \mathbf{q}_1 ... \mathbf{q}_k be an eigenvector basis, i.e. an indexed set of k linearly independent eigenvectors, where k is the dimension of the space spanned by the eigenvectors of \mathbf{A}. If k=n, then \mathbf{A} can be written
where \mathbf{Q} is the square (n\times n) matrix whose ith column is the basis eigenvector \mathbf{q}_i of \mathbf{A} and \mathbf{\Lambda} is the diagonal matrix whose diagonal elements are the corresponding eigenvalues, i.e. \Lambda_{ii}=\lambda_i.
Infinite-dimensional spacesEdit
If the vector space is an infinite dimensional Banach space, the notion of eigenvalues can be generalized to the concept of spectrum. The spectrum is the set of scalars λ for which \left(T-\lambda\right)^{-1} is not defined; that is, such that T-\lambda has no bounded inverse.
Clearly if λ is an eigenvalue of T, λ is in the spectrum of T. In general, the converse is not true. There are operators on Hilbert or Banach spaces which have no eigenvectors at all. This can be seen in the following example. The bilateral shift on the Hilbert space \ell^2(\mathbf{Z}) (the space of all sequences of scalars \dots a_{-1}, a_0, a_1,a_2,\dots such that \cdots + |a_{-1}|^2 + |a_0|^2 + |a_1|^2 + |a_2|^2 + \cdots converges) has no eigenvalue but has spectral values.
In infinite-dimensional spaces, the spectrum of a bounded operator is always nonempty. This is also true for an unbounded self adjoint operator. Via its spectral measures, the spectrum of any self adjoint operator, bounded or otherwise, can be decomposed into absolutely continuous, pure point, and singular parts. (See Decomposition of spectrum.)
Exponential functions are eigenfunctions of the derivative operator (the derivative of exponential functions are proportional to themself). Exponential growth and decay therefore provide examples of continuous spectra, as does the vibrating string example illustrated above. The hydrogen atom is an example where both types of spectra appear. The eigenfunctions of the hydrogen atom Hamiltonian are called eigenstates and are grouped into two categories. The bound states of the hydrogen atom correspond to the discrete part of the spectrum (they have a discrete set of eigenvalues which can be computed by Rydberg formula) while the ionization processes are described by the continuous part (the energy of the collision/ionization is not quantified).
Schrödinger equationEdit
An example of an eigenvalue equation where the transformation \mathcal{T} is represented in terms of a differential operator is the time-independent Schrödinger equation in quantum mechanics:
H\psi_E = E\psi_E \,
Molecular orbitalsEdit
In quantum mechanics, and in particular in atomic and molecular physics, within the Hartree-Fock theory, the atomic and molecular orbitals can be defined by the eigenvectors of the Fock operator. The corresponding eigenvalues are interpreted as ionization potentials via Koopmans' theorem. In this case, the term eigenvector is used in a somewhat more general meaning, since the Fock operator is explicitly dependent on the orbitals and their eigenvalues. If one wants to underline this aspect one speaks of implicit eigenvalue equation. Such equations are usually solved by an iteration procedure, called in this case self-consistent field method. In quantum chemistry, one often represents the Hartree-Fock equation in a non-orthogonal basis set. This particular representation is a generalized eigenvalue problem called Roothaan equations.
Geology and Glaciology: (Orientation Tensor)Edit
In geology, especially in the study of glacial till, eigenvectors and eigenvalues are used as a method by which a mass of information of a clast fabric's constituents' orientation and dip can be summarized in a 3-D space by six numbers. In the field, a geologist may collect such data for hundreds or thousands of clasts in a soil sample, which can only be compared graphically such as in a Tri-Plot (Sneed and Folk) diagram [28], [29], or as a Stereonet on a Wulff Net [30]. The output for the orientation tensor is in the three orthogonal (perpendicular) axes of space. Eigenvectors output from programs such as Stereo32 [31] are in the order E1 > E2 > E3, with E1 being the primary orientation of clast orientation/dip, E2 being the secondary and E3 being the tertiary, in terms of strength. The clast orientation is defined as the Eigenvector, on a compass rose of 360°. Dip is measured as the Eigenvalue, the modulus of the tensor: this is valued from 0° (no dip) to 90° (vertical). Various values of E1, E2 and E3 mean different things, as can be seen in the book 'A Practical Guide to the Study of Glacial Sediments' by Benn & Evans, 2004 [32].
Factor analysisEdit
In factor analysis, the eigenvectors of a covariance matrix or correlation matrix correspond to factors, and eigenvalues to the variance explained by these factors. Factor analysis is a statistical technique used in the social sciences and in marketing, product management, operations research, and other applied sciences that deal with large quantities of data. The objective is to explain most of the covariability among a number of observable random variables in terms of a smaller number of unobservable latent variables called factors. The observable random variables are modeled as linear combinations of the factors, plus unique variance terms. Eigenvalues are used in analysis used by Q-methodology software; factors with eigenvalues greater than 1.00 are considered significant, explaining an important amount of the variability in the data, while eigenvalues less than 1.00 are considered too weak, not explaining a significant portion of the data variability.
Fig. 5. Eigenfaces as examples of eigenvectors
In image processing, processed images of faces can be seen as vectors whose components are the brightnesses of each pixel. The dimension of this vector space is the number of pixels. The eigenvectors of the covariance matrix associated to a large set of normalized pictures of faces are called eigenfaces. They are very useful for expressing any face image as a linear combination of some of them. In the facial recognition branch of biometrics, eigenfaces provide a means of applying data compression to faces for identification purposes. Research related to eigen vision systems determining hand gestures has also been made. More on determining sign language letters using eigen systems can be found here:
Similar to this concept, eigenvoices concept is also developed which represents the general direction of variability in human pronunciations of a particular utterance, such as a word in a language. Based on a linear combination of such eigenvoices, a new voice pronunciation of the word can be constructed. These concepts have been found useful in automatic speech recognition systems, for speaker adaptation.
Tensor of inertiaEdit
In mechanics, the eigenvectors of the inertia tensor define the principal axes of a rigid body. The tensor of inertia is a key quantity required in order to determine the rotation of a rigid body around its center of mass.
Stress tensorEdit
Eigenvalues of a graphEdit
In spectral graph theory, an eigenvalue of a graph is defined as an eigenvalue of the graph's adjacency matrix A, or (increasingly) of the graph's Laplacian matrix, which is either TA or IT 1/2AT −1/2, where T is a diagonal matrix holding the degree of each vertex, and in T −1/2, 0 is substituted for 0−1/2. The kth principal eigenvector of a graph is defined as either the eigenvector corresponding to the kth largest eigenvalue of A, or the eigenvector corresponding to the kth smallest eigenvalue of the Laplacian. The first principal eigenvector of the graph is also referred to merely as the principal eigenvector.
See also Edit
The Book of Mathematical Proofs may have more about this subject.
1. See Hawkins (1975), §2.
2. 2.0 2.1 2.2 2.3 See Hawkins (1975), §3.
3. 3.0 3.1 3.2 See Kline 1972, pp. 807-808
4. See Kline 1972, p. 673
5. See Kline 1972, pp. 715-716
6. See Kline 1972, pp. 706-707
7. See Kline 1972, p. 1063
8. See Aldrich (2006).
9. See Golub & van Loan 1996, §7.3; Meyer 2000, §7.3
10. See Strang 2006, p. 249
11. See Sharipov 1996, p. 66
12. See Bowen & Wang 1980, p. 148
13. For a proof of this lemma, see Shilov 1969, p. 131, and Lemma for the eigenspace
14. 14.0 14.1 For a proof of this lemma, see Shilov 1969, p. 130, Hefferon 2001, p. 364, and Lemma for linear independence of eigenvectors
15. See Shilov 1969, p. 131
16. For proof, see Sharipov 1996, Theorem 4.4 on p. 68
17. See Shores 2007, p. 252
18. 18.0 18.1 For a proof of this theorem, see Weisstein, Eric W. Eigenvector From MathWorld − A Wolfram Web Resource
19. See Strang 2006, footnote to p. 245
20. For details and proof, see Meyer 2000, p. 494-495
21. See Greub 1975, p. 118
22. See Greub 1975, p. 119
23. For proof, see Gelfand 1971, p. 115
24. For proof, see Greub 1975, p. 119
25. For details and proof, see Kuttler 2007, p. 151
26. See Shilov 1969, p. 134
27. See Shilov 1969, p. 135 and Problem 11 to Chapter 5
28. Graham, D., and Midgley, N., 2000. Earth Surface Processes and Landforms (25) pp 1473-1477
29. Sneed ED, Folk RL. 1958. Pebbles in the lower Colorado River, Texas, a study of particle morphogenesis. Journal of Geology 66(2): 114–150
30. GIS-stereoplot: an interactive stereonet plotting module for ArcView 3.0 geographic information system
31. Stereo32
32. Benn, D., Evans, D., 2004. A Practical Guide to the study of Glacial Sediments. London: Arnold. pp 103-107
• Korn, Granino A.; Korn, Theresa M. (2000), Mathematical Handbook for Scientists and Engineers: Definitions, Theorems, and Formulas for Reference and Review, 1152 p., Dover Publications, 2 Revised edition, ISBN 0-486-41147-8 .
• John Aldrich, Eigenvalue, eigenfunction, eigenvector, and related terms. In Jeff Miller (Editor), Earliest Known Uses of Some of the Words of Mathematics, last updated 7 August 2006, accessed 22 August 2006.
• Strang, Gilbert (1993), Introduction to linear algebra, Wellesley-Cambridge Press, Wellesley, MA, ISBN 0-961-40885-5 .
• Strang, Gilbert (2006), Linear algebra and its applications, Thomson, Brooks/Cole, Belmont, CA, ISBN 0-030-10567-6 .
• Claude Cohen-Tannoudji, Quantum Mechanics, Wiley (1977). ISBN 0-471-16432-1. (Chapter II. The mathematical tools of quantum mechanics.)
• John B. Fraleigh and Raymond A. Beauregard, Linear Algebra (3rd edition), Addison-Wesley Publishing Company (1995). ISBN 0-201-83999-7 (international edition).
• T. Hawkins, Cauchy and the spectral theory of matrices, Historia Mathematica, vol. 2, pp. 1–29, 1975.
• Kline, Morris (1972), Mathematical thought from ancient to modern times, Oxford University Press, ISBN 0-195-01496-0 .
• Brown, Maureen, "Illuminating Patterns of Perception: An Overview of Q Methodology" October 2004
• Gene H. Golub and Henk A. van der Vorst, "Eigenvalue computation in the 20th century," Journal of Computational and Applied Mathematics 123, 35-65 (2000).
• Max A. Akivis and Vladislav V. Goldberg, Tensor calculus (in Russian), Science Publishers, Moscow, 1969.
• Gelfand, I. M. (1971), Lecture notes in linear algebra, Russian: Science Publishers, Moscow
• Pavel S. Alexandrov, Lecture notes in analytical geometry (in Russian), Science Publishers, Moscow, 1968.
• Carter, Tamara A., Richard A. Tapia, and Anne Papaconstantinou, Linear Algebra: An Introduction to Linear Algebra for Pre-Calculus Students, Rice University, Online Edition, Retrieved on 2008-02-19.
• Steven Roman, Advanced linear algebra 3rd Edition, Springer Science + Business Media, LLC, New York, NY, 2008. ISBN 978-0-387-72828-5
• Shilov, G. E. (1969), Finite-dimensional (linear) vector spaces, Russian: State Technical Publishing House, 3rd Edition, Moscow .
• Kuttler, Kenneth (2007), An introduction to linear algebra, Online e-book in PDF format, Brigham Young University, .
• James W. Demmel, Applied Numerical Linear Algebra, SIAM, 1997, ISBN 0-89871-389-7.
• Robert A. Beezer, A First Course In Linear Algebra, Free online book under GNU licence, University of Puget Sound, 2006
• Lancaster, P. Matrix Theory (in Russian), Science Publishers, Moscow, 1973, 280 p.
• Paul R. Halmos, Finite-Dimensional Vector Spaces, 8th Edition, Springer-Verlag, New York, 1987, 212 p., ISBN 0387900934
• Pigolkina, T. S. and Shulman, V. S., Eigenvector (in Russian), In:Vinogradov, I. M. (Ed.), Mathematical Encyclopedia, Vol. 5, Soviet Encyclopedia, Moscow, 1977.
• Larson, Ron and Bruce H. Edwards, Elementary Linear Algebra, 5th Edition, Houghton Mifflin Company, 2003, ISBN 0618335676.
• Sharipov, Ruslan A. (1996), Course of Linear Algebra and Multidimensional Geometry: the textbook, Online e-book in PDF format, Bashkir State University, Ufa, arXiv:math/0405323v1, ISBN 5-7477-0099-5, Archived from the original on 2009-10-26, .
External linksEdit
Algebra may have more about this subject.
Linear Algebra may have more about this subject.
Around Wikia's network
Random Wiki |
acf3f1bf56cb5a4d | Tuesday, June 25, 2013
Is the wavefunction ontological or epistemological?
Part 1: The EPR argument
Quantum mechanics is a fascinating subject, extremely rich in mathematical, physical, philosophical, and historical content. Studying quantum mechanics is school with its classical problems of solving the hydrogen atom is only the very first step in a long journey. The quantum foundation area with its diversified views is an equally fascinating domain. At first sight, it looks like the majority of the current interpretations are “obviously” misguided except for your own, whatever that may be, and all other interpretations must be rooted into classical prejudices. However this is not the case and it takes some time and effort to fully appreciate and accept all points of views in interpreting quantum mechanics.
Into all this mix, I am proposing yet another quantum mechanics interpretation, and I will attempt to show that quantum mechanics is actually intuitive and it all follows from clear physical principles in a reconstruction program. Since the principle names the theory (e.g the theory of relativity got its name form the relativity principle), I will call quantum mechanics: the theory of elliptic composability and I will show that all primitive concepts like for example ontology and epistemology has to be adjusted to their corresponding composability class. In particular the quantum wavefunction is neither ontological nor epistemological, meaning it is not “parabolic-ontological” not “parabolic-epistemological” but it will be shown to be “elliptic-ontological”.
I will start this journey following arguments in historical fashion, and I will start with the EPR argument. I have no clear idea how many parts this series will contain, probably around 10 but I will keep an open format.
At the dawn of quantum mechanics, Bohr struggled with its interpretation, and the ideas of complementarity and uncontrollable disturbances was a major part of the discussion. Today this is no longer the case dues to advances in understanding of the mathematical structure of quantum mechanics. Even today most textbooks are painting the wrong picture of the uncertainty principle due to sloppy mathematical formulation and this probably deserves a post of its own for clarification.
For the EPR argument suffices to state that one cannot measure simultaneously with perfect accuracy both the position and the momentum of elementary particles. Then Einstein, Podolski, and Rosen argued along the following lines: what if I have a system which disintegrates into subsystem 1 and subsystem 2 and we measure position on subsystem 1 and momentum on subsystem 2. If the original system was initially at rest, conservation of momentum implies that measuring the momentum of subsystem 2 implies we know with absolute precision the momentum of subsystem 1. But wait a minute, on subsystem 1 we measure with perfect accuracy the position as well so it seems that we succeeded on beating the uncertainty principle. Quantum mechanics does not allow that which means quantum mechanics must be incomplete.
The whole argument holds provided two critical assumptions hold as well:
• “On the other hand, since at the time of measurement the two systems no longer interact, no real change can take place in the second system in consequence of anything that may be done to the first system.”
Both assumptions are actually wrong and later on, John Bell refuted the EPR conclusion based on the second assertion (that of locality). Arguing on similar lines with Bell, one can show that the first assumption is invalid as well.
The remote effect due to local measurement is called quantum steering and while it cannot be used to send signals faster than the speed of light, it does change the remote state. Such effects were observed in actual experiments. In the elliptic composability quantum mechanics reconstruction project it is easy to understand its root cause:
In classical or quantum mechanics, observables play a dual role, that of observables and of generators. But while in classical mechanics (parabolic composability) the observables for a total system factorizes neatly into a product of observables for each subsystem, in quantum mechanics (elliptic composability) observables and generators are mixed together and the factorization is not possible in general (see Fig 3 in http://arxiv.org/pdf/1303.3935v1.pdf) . In other words, the system becomes “entangled”.
In the next post I will show Bell's refutation of EPR argument based on locality.
Thursday, June 20, 2013
Quantum mechanics and unitarity (part 4 of 4)
Now we can put the whole thing together and attempt to solve the measurement problem. But is there a problem to begin with? Here is a description of the problem as written by Roderich Tumulka http://www.math.rutgers.edu/~tumulka/teaching/fall11/325/script2.pdf (see page 53):
Start with 3 assertions:
• In each run of the experiment, there is a unique outcome.
• The wave function is a complete description of a system’s physical state.
• The evolution of the wave function of an isolated system is always given by the
Schrödinger equation
Then in the standard formulation of quantum mechanics at least one of them has to be refuted. From the quantum mechanics reconstruction work, the last two bullets are iron-clad and cannot be violated without collapsing the entire theory. This means that GRW theory, and Bohmian interpretations are automatically excluded. Also the usual Copenhagen interpretation is not viable either because it makes use of classical physics (we know that we cannot have a consistent theory of classical and quantum mechanics). Epistemic approaches in the spirit of Peres are not the whole story either because while collapse is naturally understood as information update, this means that Leibniz identity is violated as well.
So what do we have left? Only the many-worlds interpretation (MWI), or its more modern form of Zurek’s relative state interpretation http://arxiv.org/abs/0707.2832.
However, I will argue for another fully unitary solution different than MWI/relative state interpretation (and I agree with Zurek that the old fashion MWI gives up too soon on finding the solution), but in the same spirit of Zurek’s approach. The basic idea is that measurement is not a primitive operation. The experimental outcome creates huge numbers of information copies. The key difference between Zurek’s quantum Darwinism and the new explanation is on who succeeds in creating the information copies: the full wavefunction (as in quantum Darwinism), or the one and only experimental outcome. In other words, the Grothendieck equivalence relationship is broken by the measurement amplification effects: only one equivalent representative of the Grothendieck group element succeeds in making information copies and statistically overwhelms all the other ones (for all practical purposes). The information in the “collapsed part of the wavefunction” is not erased, but becomes undetectable.
Of course there are still open problems of delicate technical nature to be solved in this new paradigm, but they do seem to get their full answer in this framework. Solving them is a work in progress, and the solution is not yet ready for public disclosure.
In subsequent posts I’ll show how the wavefunction is neither epistemological, nor ontological and I will touch on Bell’s theorem, and the recent PBR result among other things.
Tuesday, June 18, 2013
Quantum mechanics and unitarity (part 3 of 4)
In part 2 we have see how to construct the Grothendieck group. Can we do this for the composability monoid in the case of classical or quantum mechanics? The construction works only if we have an equivalence relationship and this naturally exists only for quantum mechanics.
There is no Grothendieck group of the tensor product for classical mechanics, and there is no “ontological collapse” there other than an epistemic update of information in an ignorance interpretation.
In quantum mechanics the situation is different because of unitarity and one can construct an equivalence relationship starting from a property called envariance : http://arxiv.org/abs/quant-ph/0405161 Skipping the boring technical details on how to prove the usual properties of an equivalence relationship, here is the basic idea: whatever I can change using unitarity for the system over here, can be undone by another unitary evolution on the environment over there.
Therefore the correct way to write a wavefunction in quantum mechanics is not |psi>, but as in Grothendieck way: the Cartesian product: (|psi>, null) with the second element representing the “negative elements”, or the environment degrees of freedom which will absorb the “collapsed information” during measurement.
The measurement device should be represented as (null, |measurement apparatus and environment>) and the contact between the system and the measurement device should be represented as the tensor product of the two Grothendieck elements resulting into:
(|system to be measured>, |measurement apparatus and environment>)
By the equivalence relationship this is the same as:
(|collapsed system A>, |measurement apparatus displaying A and environment>)
as well as all other potential experimental outcomes:
(|collapsed system B>, |measurement apparatus displaying B and environment>)
(|collapsed system C>, |measurement apparatus displaying C and environment>)…
But then since only one outcome is recorded, we either need to resort to MWI interpretation, or we need to find another explanation for this.
The explanation is that the measuring apparatus is an unstable system which provides massive information copies (think of the Wilson’s cloud chamber in Mott’s problem). Measurement is not a neat and primitive operation, and the one and only outcome creates an extremely large number of information copies which dwarfs the information about the other potential outcomes which are now hidden in the environmental degrees of freedom.
Sir Neville Mott showed that in a cloud chamber two atoms cannot both be ionized unless they lie in a straight line with the radioactive nucleus. In other words, we only need to understand the very first ionization. Similarly, in the Schrödinger’s cat scenario, we only need to understand the first decay, and we do not need to hide “the other cat” in the environment degrees of freedom.
Please stay tuned for the conclusion in part 4.
Saturday, June 15, 2013
Quantum mechanics and unitarity (part 2 of 4)
When talking about measurement, one talks about the collapse postulate. Let us take a look of what happens with the underlying Hilbert space. During collapse, the dimensionality of the Hilbert space is reduced to the dimensionality of the subspace where the wavefunction is projected to. A key point is that the dimensionality of a Hilbert space is its sole characteristic.
Measurement is initiated by first doing the tensor product of the Hilbert space of system wavefunction with the Hilbert space of the measurement apparatus. This operation increases the dimensionality of the original Hilbert space. Then the collapse decreases the dimensionality.
As an abstract operation, the tensor product respects the properties of a commutative monoid. Short of the existence of an inverse element, this is almost a mathematical group http://en.wikipedia.org/wiki/Group_(mathematics).
To model the collapse in a fully unitary way (and free of interpretations) we would like to construct the tensor product group from the tensor product commutative monoid. Is such a construction possible? Indeed it is and it is called the Grothendieck group construction http://en.wikipedia.org/wiki/Grothendieck_group Let us explain this using a simple challenge: let’s construct the group of integers Z starting from the abelian monoid of natural numbers N. We would need to introduce negative integers using only positive numbers!!! At first sight this seems impossible. How can such a thing be even possible? N and by itself is not enough, but with the addition of an equivalence relationship it can be done.
So consider a Cartesian product NxN and we would call the first element a positive number, and the second element a negative number: p = (p,0) n = (0, n) We would like to do something like this (p,0)+(0,n) = (p, n) = p-n
Also : (0,-q) = (q, 0) All this works in general, but the definition of a Z number is no longer unique. For example: 7=(7,0) =(8,1)=(9,2)=… and -3=(0,3)=(1,4)=(2,5)=…
Therefore we need an equivalence relationship such that two pairs (a,b) and (p,q) are considered equivalent if a+q=b+p Notice that in the equivalence relationship we used only the “+” operation of the original monoid N. The formal definition of the equivalence relationship is slightly more complex due to the need to prove the transitivity property of an equivalence relationship. We call two pairs equivalent: (a,b)~(p,q) if there is a number t such that a+q+t =b+p+t
Now since Grothendieck construction is categorical (universal), it can be applied to the tensor product commutative monoid and this will explain the collapse postulate in a pure unitary way. Please stay tuned for part 3.
Wednesday, June 12, 2013
Quantum mechanics and unitarity (part 1 of 4)
I will start a sequence of posts showing why quantum mechanics demands only unitary time evolution despite the collapse postulate and how to solve the problem. For reference, this is based on http://arxiv.org/abs/1305.3594
The quantum mechanics reconstruction project presented in http://arxiv.org/abs/1303.3935 shows that in the algebraic approach, the Leibniz identity plays a central and early role. But what is the Leibniz identity? It is the chain derivation rule: D(fg) = D(f) g + f D(g).
All standard calculus follows from this rule. For example using recursion one proves D(X^n) = n X^(n-1) and form this and the Taylor series, the derivation rules for all usual functions follow.
In the algebraic formalism of quantum mechanics, the Leibniz identity corresponds in the state space to unitarity. Any breaking of unitarity means that Leibnitz identity is violated as well. This is the case for example in the epistemological interpretation of the wavefunction where the collapse postulate is understood as simply information update. However (and here is the big problem), breaking the Leibniz identity destroys the entire quantum mechanics formalism. In other words, any non-unitary time evolution is fatal for quantum mechanics.
So how can we understand the collapse postulate? Is quantum mechanics inconsistent? Should quantum mechanics be augmented by classical physics to describe the system and the measurement apparatus? From http://arxiv.org/abs/1303.3935 we know that there cannot be any consistent classical-quantum description of nature. Also the formalism which highlighted the problem shows the way out of the conundrum. Part 2 of the series will present preliminary mathematical structures which will be used to show how quantum mechanics can be fully unitary even during measurement.
Wednesday, June 5, 2013
New Directions in the Foundations of Physics Conference in Washington DC 2013 (part 5)
“Lagrangian-Only Quantum Theory” by Ken Wharton
What I liked about Ken’s talk was the novelty of his idea. Working in the quantum mechanics reconstruction area I always ask the “what if” questions: what if the universe would obey different kinds of mathematical relationships? So what if there are no dynamical equations, not even stochastic ones? This is the premise of Ken’s approach in arXiv:1301.7012.
As an inspiration one can pick the example of statistical mechanics and work out along the following lines:
1. Consider all possible microstates
2. Eliminate inconsistent states
3. Assign an equal a priori probably
4. Calculate probabilities as Bayesian updates
Now in nature there are distinguished mathematical structures, including the existence of Hamiltonian equations of motion. If we do not consider the dynamics, we can open the door for all kinds of other kinds of ontologies, like for example simulated virtual realities. By killing the dynamics Ken is basically going outside the realm of physics in hope to discover new insights. If I understood him well in a private conversation after the talk, Ken’s approach is basically opposite that of t`Hoofts: start with a chaotic system in the IR domain and arrive at quantum mechanics in the UV area. In the process Ken claims he recovers something that asymptotically becomes Born’s rule. If correct, this would represent a genuine new insight into the origin of Born’s rule besides the ones from Gleason theorem or Zurek’s program.
One may object that going outside physics is a fool’s errand, but then we are reminded by the unphysical PR-boxes which proved fruitful into better understanding quantum mechanics. Will Ken’s approach prove as fruitful? Only time will tell and I wish him luck.
One last word about this series of posts from the conference. Here is the link to the conference page http://carnap.umd.edu/philphysics/conference.html where one can find all the brief descriptions of the talks.
Monday, June 3, 2013
New Directions in the Foundations of Physics Conference in Washington DC 2013 (part 4)
“Quantum information and quantum gravity” by Seth Lloyd
A thought provoking talk at the conference was that of Seth Lloyd. He showed how to derive Einstein’s general relativity equation from quantum limits for measuring space-time geometry and an additional black hole assumption.
One way to think of measuring the geometry of space-time is to think of a comprehensive GPS system. Measuring time amounts to measuring the number of clock ticks and this requires energy. Everybody is familiar with the position-momentum uncertainty principle, but the energy-time uncertainty principle is not so clear cut. This is because in quantum mechanics time is a parameter, and not an operator, and care has to be exercised in interpreting the energy-time uncertainty principle.
Margolus and Levitin had obtained a bound of quantum evolution time in terms of the initial mean energy of the system E: E delta t >= hbar pi/2. From this the total possible number of clock ticks in a bounded region of space time (of radius r and time span t) cannot exceed 2Et/pi hbar. In principle, quantum mechanics does not limit the accuracy for measuring time, and all you need is to do is add enough energy. But in general relativity, adding energy in a bounded region will eventually lead to the creation of a black hole. So here is a general relativity assumption: we want the radius of the bonded region to be larger than the Schwarzschild radius Rs = 2GM/c^2
From this (in terms of the Plank time Tp and Plank distance Lp) one obtains the maximum number of clock ticks achievable in a bounded region of space time before creating a black hole: r t / pi Lp Tp
Now r*t is an area and naive field theory would suggest r^3 t. Also naïve string theory would suggest at first sight r^2 t.
From those kinds of area considerations, Seth was able to deduce general relativity equations inspired in part by Ted Jacobson’s ideas (in fact Seth collaborated with Ted on this result). Now you may ask (as I certainly did) if you start with Schwarzschild’s radius and you derive Einstein’s equations, are you not vulnerable to charges of circularity? Perhaps, but the result is still interesting.
(I have one more story to tell from the conference. Please stay tuned for part 5-the last one.)
Saturday, June 1, 2013
New Directions in the Foundations of Physics Conference in Washington DC 2013 (part 3)
“What is the alternative to quantum theory” by John Preskill
As promised, here is the second story from the subsequent discussions following Preskill’s talk.
After the talk I was listening in a discussion between John Preskill and Chris Fucks and at some point John asked the question: “is there anything else besides classical and quantum mechanics?” Later in the day I approached John and I told him that I know the answer to his question and I started to present my ideas captured in http://arxiv.org/abs/1303.3935 The basic idea is simple. Suppose I have on my left side a physical system A subject to the laws of nature. Also suppose I have on my right side a physical system B subject to the same laws of nature. Then if I perform the tensor product composition of system A with system B, I would get the larger system “A tensor B” subject to the same laws of physics. From this I can extract very hard constraints on the allowed form of the laws of nature. In fact it can be shown that there are only 3 such consistent solutions. One is an “elliptic” solution which corresponds to quantum mechanics, one is a “parabolic” solution which corresponds to classical mechanics, and there is a third “hyperbolic” solution which corresponds to something we do not fully understand at this time. Another way to look at the 3 solutions is by Plank’s constant: positive, zero, or imaginary.
Mathematically the whole thing can be naturally expressed in terms of category theory, and physically it corresponds to the invariance of the laws of nature under tensor composition. Now when I explain this to different people, I usually get a polite nod followed by a polite excuse to end the discussion. However, I did not get this reaction from Preskill who asked cogent clarification questions. Also he told me to look up the recent preprint of an Anton Kapustin. I did not remember the name, and the next day I asked John to type it for me on the archive search field and lo and behold I found out this preprint: http://arxiv.org/abs/1303.6917 titled: “Is there Life beyond QM?” Now the core inspiration for my result was a 70s paper by Grgin and Petersen: http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.cmp/1103900192 and Kapustin had the same inspiration. Reading his preprint it struck me that we had independently discovered the same thing and I only managed to upload my preprint 11 days before him. Then the mystery of John’s reaction evaporated. Preskill is colleague with Kapustin at Caltech.
So now I had good news and bad news. The good news was that I am right and my credibility got a boost. The bad news is that I have competition in an area where I thought I worked alone. When I uploaded my preprint, I left out a piece of it, related to the unitary realization of the collapse postulate. So I rushed to package this result as a separate paper and I uploaded a few days after the conference: http://arxiv.org/abs/1305.3594
The problem is that any violation of unitarity is fatal to QM as shown by the QM reconstruction project. This includes the collapse during measurement even though it can be interpreted as Bayesian information update. There is an easy remedy to this though suggested by this composability/category theory formalism and it is based on the Grothendiek group construction. (I’ll explain how this works in detail in a subsequent post). As a side benefit this solves the measurement problem and eliminates the MWI interpretation as well.
The QM reconstruction projects is an area coming of age and Luca Bombelli maintains a page keeping tabs on all such projects: http://www.phy.olemiss.edu/~luca/Topics/qm/axioms.html I believe that such approaches will eventually lead to the elimination of all known QM interpretations as QM will be just as easily and naturally be derivable as special relativity. After all, how many conferences dedicated to the “correct” interpretation of special relativity and “ict-imaginary time” do you know? |
99e41b8c475ca1fb | Элементарные частицы в Стандартной модели
Бозон – частица с целым значением спина.
Калибровочные бозоны – бозоны, которые действуют как переносчики фундаментальных взаимодействий.
Фотон – квант электромагнитного излучения, имеет спиновое квантовое число s=1, то есть величина спинового момента импульса равна S=\sqrt{s(s+1)}\hbar=\sqrt2\hbar. Самая распространённая по численности частица во Вселенной.
W+, W и Z0-бозоны – переносчики слабого взаимодействия.
Глюон – переносчик сильного взаимодействия, спин 1, безмассовый, несёт цвет-антицвет.
Фермион – частица (или квази-частица) с полуцелым значением спина. Подчиняется принципу исключения: в одном квантовом состоянии может находиться не более одной частицы.
Элементарная частица – микрообъект субъядерного масштаба, который невозможно расщепить на составные части.
Фундаментальная частица – бесструктурная элементарная частица.
Лептон – фундаментальная частица, фермион, не участвующий в сильном взаимодействии, спин 1/2: электрон e^- и электронное нейтрино \nu_e, мюон \mu^- (в 207 раз тяжелее электрона; через 2.2 мкс распадается на электрон, мюонное нейтрино и электронное антинейтрино) и мюонное нейтрино \nu_{\mu}, тау-лептон \tau^- (распадается на мюон \mu^-, тау-нейтрино\nu_{\tau} и мюонное антинейтрино \bar{\nu}_{\mu}) и тау-нейтрино \nu_{\tau}, плюс шесть их античастиц. Нейтрино имеют ненулевую массу.
Адроны – класс элементарных частиц, подверженных сильному взаимодействию. Состоят из кварков.
Кварк – фундаментальная частица, фермион, входящая в состав адронов. Спин 1/2. Величина заряда – 1/3 или 2/3 заряда электрона. Барионное число 1/3 (у антикварков -1/3). Порождаются глюонами парой кварк-антикварк.
Барионы – адроны, фермионы, состоящие из трёх – красного, зелёного и синего – кварков.
Фундаментальные фермионы – лептоны и кварки.
Мезоны – составные элементарные частицы, адроны, бозоны, состоящие из равного числа кварков и антикварков. Все мезоны нестабильны.
Пионы (пи-мезоны)\pi^-, \pi^+ (273 массы электрона), \pi^0 (264 массы электрона. Имеют наименьшую массу среди мезонов и нулевой спин. \pi^- распадается на \mu^- и антинейтрино, \pi^+ распадается на \mu^+ и нейтрино, \pi^0 распадается на два фотона.
Particle Physics and Cosmology
Fundamental particles: Each particle has an antiparticle; some particles are their own antiparticles. Particles can be created and destroyed, some of them (including electrons and positrons) only in pairs or in conjunction with other particles and antiparticles.
Particles serve as mediators for the fundamental interactions. The photon is the mediator of the electromagnetic interaction. Yukawa predicted the existence of mesons to mediate the nuclear interaction. Mediating particles that can exist only because of the uncertainty principle for energy are called virtual particles.
Particle accelerators and detectors: Cyclotrons, synchrotrons, and linear accelerators are used to accelerate charged particles to high energies to experiment with particle interactions. Only part of the beam energy is available to cause reactions with targets at rest. The problem is avoided in colliding-beam experiments.
Particles and interactions: Four fundamental interactions are found in nature: the strong, electromagnetic, weak, and gravitational interactions. Particles can be described in terms of their interactions and of quantities that are conserved in all or some of the interactions.
Fermions have half-integer spins; bosons have integer spins. Leptons, which are fermions, have no strong interactions. Strongly interacting particles are called hadrons. They include mesons, which are always bosons, and baryons, which are always fermions. There are conservation laws for three different lepton numbers and for baryon number. Additional quantum numbers, including strangeness and charm, are conserved in some interactions.
Quarks: Hadrons are composed of quarks. There thought to be six types of quarks. The interaction between quarks is mediated by gluons. Quarks and gluons have an additional attribute called color.
Symmetry and the unification of interactions: Symmetry considerations play a central role in all fundamental-particle theories. The electromagnetic and weak interactions become unified at high energies into the electroweak interaction. In grand unified theories the strong interaction is also unified with these interactions, but at much higher energies.
The expanding universe and its composition: The Hubble law shows that galaxies are receding from each other and that the universe is expanding. Observations show that the rate of expansion is accelerating due to the presence of dark energy, which makes up 65.8% of energy in the universe. Only 4.9% of the energy in the universe is in the form of conventional matter; the remaining 26.6% is dark matter, whose nature is poorly understood.
The history of the universe: In the standard model of the universe, a Big Bang gave rise to the first fundamental particles. They eventually formed into the lightest atoms as the universe expanded and cooled. The cosmic background radiation is a relic of the time when these atoms formed. The heavier elements were manufactured much later by fusion reactions inside stars.
Nuclear Physics
Nuclear properties: A nucleus is composed of A nucleons (Z protons and N neutrons). All nuclei have about the same density. The radius of a nucleus with mass number A is given approximately by equation R=R_0\sqrt[3]A (R_0=1.2\times10^{-15}\,\mathrm{m}). A single nuclear species of a given Z and N is called a nuclide. Isotopes are nuclides of the same element (same Z) that have different number of neutrons. Nuclear masses are measured in atomic mass units. Nucleons have an angular momentum and a magnetic moment.
Nuclear binding and structure: The mass of a nucleus is always less than the mass of the protons and neutrons within it. The mass difference multiplied by c^2 gives the binding energy E_{\mathrm{B}}=(ZM_{\mathrm{H}}+Nm_{\mathrm{n}}-\,_Z^AM)c^2. The binding energy for a given nuclide is determined by the nuclear force, which is short range and favors pairs of particles, and by the electrical repulsion between protons. A nucleus is unstable if A or Z is too large or if the ratio N/Z is wrong. Two widely used models of the nucleus are the liquid-drop model and the shell model; the latter is analogous to the central-field approximation for atomic structure.
Radioactive decay: Unstable nuclides usually emit an alpha-particle (a \,_2^4\mathrm{He} nucleus) or a beta-particle (an electron) in the process of change to another nuclide, sometimes followed by a gamma-ray photon. The rate of decay of an unstable nucleus is described by the decay constant \lambda, the half-life T_{1/2}, or the lifetime T_{\mathrm{mean}}: T_{\mathrm{mean}}=\frac1{\lambda}=\frac{T_{1/2}}{\ln2}=\frac{T_{1/2}}{0.693}. If the number of nuclei at time t=0 is N_0 and no more are produced, the number at time t is given by equation N(t)=N_0e^{-\lambda t}.
Biological effects of radiation: The biological effect of any radiation depends on the product of the energy absorbed per unit mass and the relative biological effectiveness (RBE), which is different for different radiations.
Nuclear reactions: In a nuclear reaction, two nuclei or particles collide to produce two new nuclei or particles. Reactions can be exoergic or endoergic. Several conservation law, including charge, energy, momentum, angular momentum, and nucleon number, are obeyed. Energy is released by the fission of a heavy nucleus into two lighter, always unstable, nuclei. Energy is also released by the fusion of two light nuclei into a heavier nucleus.
Molecules and Condensed Matter
Molecular bonds and molecular spectra: The principal types of molecular bonds are ionic, covalent, van der Waals, and hydrogen bonds. In a diatomic molecule the rotational energy levels are given by equation: E_l=l(l+1)\frac{\hbar^2}{2I} (l=0,1,2,\ldots), where I=m_{\mathrm{r}}r_0^2 is the moment of inertia of the molecule, m_{\mathrm{r}}=\frac{m_1m_2}{m_1+m_2} is its reduced mass, and r_0 is the distance between the two atoms. The vibrational energy levels are given by equation E_n=(n+\frac12)\hbar\omega=(n+\frac12)\hbar\sqrt{\frac{k'}{m_{\mathrm{r}}}} n=(0,1,2,\ldots), where k' is the effective force constant of the interatomic force.
Solids and energy bands: Interatomic bonds in solids are of the same types as in molecules plus one additional type, the metallic bond. Associating the basis with each lattice point gives the crystal structure.
When atoms are bound together in condensed matter, their outer energy levels spread out into bands. At absolute zero, insulators and semiconductors have a completely filled valence band separated by an energy gap from an empty conduction band. Conductors, including metals, have partially filled conduction bands.
Free-electron model of metal: In the free-electron model of the behavior of conductors, the electrons are treated as completely free particles within the conductor. In this model the density of states is given by equation g(E)=\frac{(2m)^{2/3}V}{2\pi^2\hbar^3}E^{1/2}. The probability that an energy state of energy E is occupied is given by the Fermi-Dirac distribution, f(E)=\frac1{e^{(E-E_{\mathrm{F}})/kT}+1} (E_{\mathrm{F}} is the Fermi energy), which is a consequence of the exclusion principle.
Semiconductors: A semiconductor has an energy gap of about 1 eV between its valence and conduction bands. Its electrical properties can be drastically changed by the addition of small concentrations of donor impurities, giving an n-type semiconductor, or acceptor impurities, giving a p-type semiconductor.
Semiconductor devices: Many semiconductor devices, including diodes, transistors, and integrated circuits use one or more p\text{-}n-junctions. The current-voltage relationship for an ideal p\text{-}n-junction diode is given by equation I=I_S(e^{eV/kT}-1).
Quantum Mechanics II: Atomic Structure
Three-dimensional problems: The time-independent Schrödinger equation for three-dimensional problems is given by: -\frac{\hbar^2}{2m}(\frac{\partial^2\psi(x,y,z)}{\partial x^2}+\frac{\partial^2\psi(x,y,z)}{\partial y^2}+\frac{\partial^2\psi(x,y,z)}{\partial z^2})+U(x,y,z)\psi(x,y,z)=E\psi(x,y,z).
Particle in a three-dimensional box: The wave function for a particle in a cubical box is the product of a function of x only, a function of y only, and a function of z only. Each stationary state is described by three quantum numbers (n_X,n_Y,n_Z): E_{n_X,n_Y,n_Z}=\frac{(n_X^2+n_Y^2+n_Z^2)\pi^2\hbar^2}{2mL^2}, (n_X=1,2,3,\ldots;n_Y=1,2,3,\ldots;n_Z=1,2,3,\ldots). Most of the energy levels given by this equation exhibit degeneracy: More than one quantum state has the same energy.
The hydrogen atom: The Schrödinger equation for the hydrogen atom gives the same energy levels as the Bohr model: E_n=-\frac1{(4\pi\epsilon_0)^2}\frac{m_\mathrm{r}e^4}{2n^2\hbar^2}=-\frac{13.60\,\mathrm{eV}}{n^2}. If the nucleus has charge Ze, there is an additional factor of Z^2 in the numerator. The possible magnitudes L of orbital angular momentum are given by equation: L=\sqrt{l(l+1)}\hbar, (l=0,1,2,\ldots,n-1). The possible values of the z-component of orbital angular momentum are given by equation: L_z=m_l\hbar, (m_l=0,\pm1,\pm2,\ldots,\pm l).
The probability that an atomic electron is between r and r+dr from the nucleus is P(r)\,dr, given by equation: P(r)\,dr=|\psi|^2\,dV=|\psi|^2\,4\pi r^2\,dr. Atomic distances are often measured in units of a, the smallest distance between the electron and the nucleus in the Bohr model: a=\frac{\epsilon_0h^2}{\pi m_\mathrm{r}e^2}=\frac{4\pi\epsilon_0\hbar^2}{m_\mathrm{r}e^2}=5.29\times10^{-11}\mathrm{m}.
The Zeeman effect: The interaction energy of an electron (mass m) with magnetic quantum number m_l in a magnetic field \vec{B} along the +z-direction is given by equation: U=-\mu_zB=m_l\frac{e\hbar}{2m}B=m_lm_{\mathrm{B}}B (m_l=0,\pm1,\pm2,\ldots,\pm l), where m_{\mathrm{B}}=\frac{e\hbar}{2m} is called the Bohr magneton.
Electron spin: An electron has an intrinsic spin angular momentum of magnitude S, given by equation S=\sqrt{\frac12(\frac12+1)}\hbar=\sqrt{\frac34}\hbar. The possible values of the z-component of the spin angular momentum are S_x=m_s\hbar (m_s=\pm\frac12).
An orbiting electron experience an interaction between its spin and the effective magnetic field produced by the relative motion of electron and nucleus. This spin-orbit coupling, along with relativistic effects, splits the energy levels according to their total angular momentum quantum number j: E_{n,j}=-\frac{13.60\,\mathrm{eV}}{n^2}[1+\frac{n^2}{\alpha^2}(\frac{n}{j+\frac12}-\frac34)].
Many-electron atoms: In a hydrogen atom, the quantum numbers n, l, m_l, and m_s of the electron have certain allowed values given by equation: n\geq1, 0\leq l\leq n-1, |m_l|\leq l, m_s=\pm\frac12. In a many-electron atom, the allowed quantum numbers for each electron are the same as in hydrogen, but the energy levels depend on both n and l because of screening, the partial cancellation of the field of the nucleus by inner electrons. If the effective (screened) charge attracting an electron is Z_{\mathrm{eff}}e, the energies of the levels are given approximately by equation: E_n=-\frac{Z_{\mathrm{eff}}^2}{n^2}(13.6\,\mathrm{eV}).
X-ray spectra: Moseley’s law states that the frequency of a K_{\alpha} x ray from a target with atomic number Z is given by equation f=(2.48\times10^{15}\,\mathrm{Hz})(Z-1)^2. Characteristic x-ray spectra result from transition to a hole in an inner energy level of an atom.
Quantum entanglement: The wave function of two identical particles can be such that neither particle is itself in a definite state. For example, the wave function could be a combination of one term with particle 1 in state A and particle 2 in state B and one term with particle 1 in state B and particle 2 in state A. The two particles are said to be entangled, since measuring the state of one particle automatically determines the result of subsequent measurements of the other particle.
Quantum Mechanics I: Wave Functions
Wave functions: The wave function for a particle contains all of the information about that particle. If the particle moves in one dimension in the presence of a potential energy function U(x), the wave function \Psi(x,t) obeys the one-dimensional Schrödinger equation: -\frac{\hbar^2}{2m}\frac{\partial^2\Psi(x,t)}{\partial x^2}+U(x)\Psi(x,t)=i\hbar\frac{\partial\Psi(x,t)}{\partial t}. (For a free particle on which no forces act, U(x)=0.) The quantity |\Psi(x,t)|^2, called the probability distribution function, determines the relative probability of finding a particle near a given position at a given time. If the particle is in a state of definite energy, called a stationary state, \Psi(x,t) is a product of a function \psi(x) that depends on only spatial coordinates and a function e^{-iEt/\hbar} that depends on only time: \Psi(x,t)=\psi(x)e^{iEt/\hbar}. For a stationary state, the probability distribution function is independent of time.
A spatial stationary-state wave function \psi(x) for a particle that moves in one dimension in the presence of a potential-energy function U(x) satisfies the time-independent Schrödinger equation: -\frac{\hbar^2}{2m}\frac{d^2\psi(x)}{dx^2}+U(x)\psi(x)=E\psi(x). More complex wave functions can be constructed by super-imposing stationary-state wave functions. These can represent particles that are localized in a certain region, thus representing both particle and wave aspects.
Particle in a box: The energy levels for a particle of mass m in a box (an infinitely deep square potential well) with width L are given by the equation: E_n=\frac{p_n^2}{2m}=\frac{n^2h^2}{8mL^2}=\frac{n^2\pi^2\hbar^2}{2mL^2} (n=1,2,3,\ldots). The corresponding normalized stationary-state wave functions of the particle are given by the equation \psi_n(x)=\sqrt{\frac2L}\sin\frac{n\pi x}L (n=1,2,3,\ldots).
Wave functions and normalization: To be a solution of the Schrödinger equation, the wave function \psi(x) and its derivative d\psi(x)/dx must be continuous everywhere except where the potential-energy function U(x) has an infinity discontinuity. Wave functions are usually normalized so that the total probability of finding the particle somewhere is unity: \int_{-\infty}^{+\infty}|\psi(x)|^2\,dx=1.
Finite potential well: In a potential well with finite depth U_0, the energy levels are lower than those for an infinitely deep well with the same width, and the number of energy levels corresponding to bound states is finite. The levels are obtained by matching wave functions at the well walls to satisfy the continuity of \psi(x) and d\psi(x)/dx.
Potential barriers and tunneling: There is a certain probability that a particle will penetrate a potential-energy barrier even though its initial energy is less than the barrier height. This process is called tunneling.
Quantum harmonic oscillator: The energy levels for the harmonic oscillator (for which U(x)=\frac12k'x^2) are given by the equation: E_n=(n+\frac12)\hbar\sqrt{\frac{k'}{m}}=(n+\frac12)\hbar\omega (n=1,2,3,\ldots). The spacing between any two adjacent levels is \hbar\omega, where \omega=\sqrt{k'/m} is the oscillation angular frequency of the corresponding Newtonian harmonic oscillator.
Measurement in quantum mechanics: If the wave function of a particle does not correspond to a definite value of a certain physical property (such as momentum or energy), the wave function changes when we measure that property. This phenomenon is called wave-function collapse. |
01b93d048515afe4 | Electronic correlation
From Wikipedia, the free encyclopedia
(Redirected from Electron correlation)
Jump to: navigation, search
Electronic correlation is the interaction between electrons in the electronic structure of a quantum system. The correlation energy is a measure of how much the movement of one electron is influenced by the presence of all other electrons.
Atomic and molecular systems[edit]
Electron correlation energy in terms of various levels of theory of solutions for the Schrödinger equation.
Within the Hartree–Fock method of quantum chemistry, the antisymmetric wave function is approximated by a single Slater determinant. Exact wave functions, however, cannot generally be expressed as single determinants. The single-determinant approximation does not take into account Coulomb correlation, leading to a total electronic energy different from the exact solution of the non-relativistic Schrödinger equation within the Born–Oppenheimer approximation. Therefore, the Hartree–Fock limit is always above this exact energy. The difference is called the correlation energy, a term coined by Löwdin.[1] The concept of the correlation energy was studied earlier by Wigner.[2]
A certain amount of electron correlation is already considered within the HF approximation, found in the electron exchange term describing the correlation between electrons with parallel spin. This basic correlation prevents two parallel-spin electrons from being found at the same point in space and is often called Fermi correlation. Coulomb correlation, on the other hand, describes the correlation between the spatial position of electrons due to their Coulomb repulsion, and is responsible for chemically important effects such as London dispersion. There is also a correlation related to the overall symmetry or total spin of the considered system.
The word correlation energy has to be used with caution. First it is usually defined as the energy difference of a correlated method relative to the Hartree–Fock energy. But this is not the full correlation energy because some correlation is already included in HF. Secondly the correlation energy is highly dependent on the basis set used. The "exact" energy is the energy with full correlation and full basis set.
Electron correlation is sometimes divided into dynamical and non-dynamical (static) correlation. Dynamical correlation is the correlation of the movement of electrons and is described under electron correlation dynamics[3] and also with the configuration interaction (CI) method. Static correlation is important for molecules where the ground state is well described only with more than one (nearly-)degenerate determinant. In this case the Hartree–Fock wavefunction (only one determinant) is qualitatively wrong. The multi-configurational self-consistent field (MCSCF) method takes account of this static correlation but not dynamical correlation.
If one wants to calculate excitation energies (energy differences between the ground and excited states) one has to be careful that both states are equally balanced (e.g., Multireference configuration interaction).
In simple terms the molecular orbitals of the Hartree–Fock method are optimized by evaluating the energy of an electron in each molecular orbital moving in the mean field of all other electrons, rather than including the instantaneous repulsion between electrons.
To account for electron correlation there are many post-Hartree–Fock methods, including:
One of the most important methods for correcting for the missing correlation is the configuration interaction (CI) method. Starting with the Hartree–Fock wavefunction as the ground determinant one takes a linear combination of the ground and excited determinants as the correlated wavefunction and optimizes the weighting factors according to the Variational Principle. When taking all possible excited determinants one speaks of Full-CI. In a Full-CI wavefunction all electrons are fully correlated. For non-small molecules Full-CI is much too computationally expensive. One truncates the CI expansion and gets well-correlated wavefunctions and well-correlated energies according to the level of truncation.
Perturbation theory gives correlated energies, but no new wavefunctions. PT is not variational. This means the calculated energy is not an upper bound for the exact energy. It is possible to partition Møller–Plesset perturbation theory energies via Interacting Quantum Atoms (IQA) energy partitioning (although most commonly the correlation energy is not partitioned).[4] This is an extension to the theory of Atoms in Molecules. IQA energy partitioning enables one to look in detail at the correlation energy contributions from individual atoms and atomic interactions. IQA correlation energy partitioning has also been shown to be possible with coupled cluster methods.[5][6]
There are also combinations possible. E.g. one can have some nearly degenerate determinants for the multi-configurational self-consistent field method to account for static correlation and/or some truncated CI method for the biggest part of dynamical correlation and/or on top some perturbational ansatz for small perturbing (unimportant) determinants. Examples for those combinations are CASPT2 and SORCI.
Crystalline systems[edit]
In condensed matter physics, electrons are typically described with reference to a periodic lattice of atomic nuclei. Non-interacting electrons are therefore typically described by Bloch waves, which correspond to the delocalized, symmetry adapted molecular orbitals used in molecules (while Wannier functions correspond to localized molecular orbitals). A number of important theoretical approximations have been proposed to explain electron correlations in these crystalline systems.
The Fermi liquid model of correlated electrons in metals is able to explain the temperature dependence of resistivity by electron-electron interactions. It also forms the basis for the BCS theory of superconductivity, which is the result of phonon-mediated electron-electron interactions.
Systems that escape a Fermi liquid description are said to be strongly-correlated. In them, interactions plays such an important role that qualitatively new phenomena emerge.[7] This is the case, for example, when the electrons are close to a metal-insulator transition. The Hubbard model is based on the tight-binding approximation, and can explain conductor-insulator transitions in Mott insulators such as transition metal oxides by the presence of repulsive Coulombic interactions between electrons. Its one-dimensional version is considered an archetype of the strong-correlations problem and displays many dramatic manifestations such as quasi-particle fractionalization. However, there is no exact solution of the Hubbard model in more than one dimension.
The RKKY Interaction can explain electron spin correlations between unpaired inner shell electrons in different atoms in a conducting crystal by a second-order interaction that is mediated by conduction electrons.
The Tomonaga Luttinger liquid model approximates second order electron-electron interactions as bosonic interactions.
Mathematical viewpoint[edit]
For two independent electrons a and b,
where ρ(ra,rb) represents the joint electronic density, or the probability density of finding electron a at ra and electron b at rb. Within this notation, ρ(ra,rb) dra drb represents the probability of finding the two electrons in their respective volume elements dra and drb.
If these two electrons are correlated, then the probability of finding electron a at a certain position in space depends on the position of electron b, and vice versa. In other words, the product of their independent density functions does not adequately describe the real situation. At small distances, the uncorrelated pair density is too large; at large distances, the uncorrelated pair density is too small (i.e. the electrons tend to "avoid each other").
1. ^ Löwdin, Per-Olov (March 1955). "Quantum Theory of Many-Particle Systems. III. Extension of the Hartree–Fock Scheme to Include Degenerate Systems and Correlation Effects". Physical Review. American Physical Society. 97 (6): 1509–1520. Bibcode:1955PhRv...97.1509L. doi:10.1103/PhysRev.97.1509.
2. ^ Wigner, E. (1934-12-01). "On the Interaction of Electrons in Metals". Physical Review. 46 (11): 1002–1011. doi:10.1103/PhysRev.46.1002.
3. ^ J.H. McGuire, "Electron Correlation Dynamics in Atomic Collisions", Cambridge University Press, 1997
4. ^ McDonagh, James L.; Vincent, Mark A.; Popelier, Paul L.A. (October 2016). "Partitioning dynamic electron correlation energy: Viewing Møller-Plesset correlation energies through Interacting Quantum Atom (IQA) energy partitioning". Chemical Physics Letters. 662: 228–234. doi:10.1016/j.cplett.2016.09.019.
5. ^ Holguín-Gallego, Fernando José; Chávez-Calvillo, Rodrigo; García-Revilla, Marco; Francisco, Evelio; Pendás, Ángel Martín; Rocha-Rinza, Tomás (15 July 2016). "Electron correlation in the interacting quantum atoms partition via coupled-cluster lagrangian densities". Journal of Computational Chemistry. 37 (19): 1753–1765. ISSN 1096-987X. doi:10.1002/jcc.24372.
6. ^ McDonagh, James L.; Silva, Arnaldo F.; Vincent, Mark A.; Popelier, Paul L. A. (12 April 2017). "Quantifying Electron Correlation of the Chemical Bond". The Journal of Physical Chemistry Letters: 1937–1942. ISSN 1948-7185. doi:10.1021/acs.jpclett.7b00535.
7. ^ Quintanilla, Jorge; Hooley, Chris (2009). "The strong-correlations puzzle". Physics World. 22: 32–37. ISSN 0953-8585.
See also[edit] |
25a0034f1f7df649 | previous index next PDF
Time-Dependent Solutions: Propagators and Representations
Michael Fowler, UVa
We’ve spent most of the course so far concentrating on the eigenstates of the Hamiltonian, states whose time-dependence is merely a changing phase. We did mention much earlier a superposition of two different energy states in an infinite well, resulting in a wave function sloshing backwards and forwards. It’s now time to cast the analysis of time dependent states into the language of bras, kets and operators. We’ll take a time-independent Hamiltonian H, with a complete set of orthonormalized eigenstates, and as usual
i ψ( x,t ) t = 2 2m 2 ψ( x,t ) x 2 +V( x )ψ( x,t ),
Or, as we would now write it
i t | ψ( x,t )=H| ψ( x,t ).
Since H is itself time independent, this is very easy to integrate!
| ψ( x,t )= e iH( t t 0 )/ | ψ( x, t 0 ).
The exponential operator that generates the time-dependence is called the propagator, because it describes how the wave propagates from its initial configuration, and is usually denoted by U:
| ψ( x,t )=U( t t 0 )| ψ( x, t 0 ).
It’s appropriate to call the propagator U , because it’s a unitary operator:
U( t t 0 )= e iH( t t 0 ) , so U ( t t 0 )= e i H ( t t 0 ) = e iH( t t 0 ) = U 1 ( t t 0 ).
Since H is Hermitian, U is unitary. It immediately follows that
ψ( x,t )| ψ( x,t )= ψ( x, t 0 )| U U( t t 0 )| ψ( x, t 0 ) = ψ( x, t 0 )| ψ( x, t 0 )
the norm of the ket vector is conserved, or, translating to wave function language, a wave function correctly normalized to give a total probability of one stays that way. (This can also be proved from the Schrödinger equation, of course, but this is quicker.)
This is all very succinct, but unfortunately the exponential of a second-order differential operator doesn’t sound too easy to work with. Recall, though, that any function of a Hermitian operator has the same set of eigenstates as the original operator. This means that the eigenstates of e iH( t t 0 )/ are the same as the eigenstates of H , and if H| ψ n = E n | ψ n , then
e iH( t t 0 )/ | ψ n = e i E n ( t t 0 )/ | ψ n .
This is of course nothing but the time dependent phase factor for the eigenstates we found before and, as before, to find the time dependence of any general state we must express it as a superposition of these eigenkets, each having its own time dependence. But how do we do that in the operator language? Easy: we simply insert an identity operator, the one constructed from the complete set of eigenkets, thus:
| ψ( t )= e iH( t t 0 )/ n=1 | ψ n ψ n | ψ( t 0 ) = n=1 e i E n ( t t 0 )/ | ψ n ψ n | ψ( t 0 ).
Staring at this, we see that it’s just what we had before: at the initial time t= t 0 , the wave function can be written as a sum over the eigenkets:
| ψ( t 0 )= | ψ n ( t 0 ) ψ n ( t 0 )| ψ( t 0 ) = c n | ψ n ( t 0 )
with c n = ψ n |ψ | c n | 2 =1 , and the usual generalization for continuum eigenvalues, and the time development is just given by inserting the phases:
| ψ( t )= c n e i E n ( t t 0 )/ | ψ n ( t 0 ) .
The expectation value of the energy E in |ψ ,
E =ψ|H|ψ= | c n | 2 E n
and is (of course) time independent.
The expectation value of the particle position x is
ψ( t )|x| ψ( t )= n,m c n * c m e i( E n E m )( t t 0 )/ ψ n ( t 0 )|x| ψ m ( t 0 )
and is not in general time-independent. (It is real, of course, on adding the n,m term to the m,n term.)
This analysis is only valid for a time-independent Hamiltonian. The important extension to a system in a time-dependent external field, such as an atom in a light beam, will be given later in the course.
The Free Particle Propagator
To gain some insight into what the propagator U looks like, we’ll first analyze the case of a particle in one dimension with no potential at all. We’ll also take t 0 =0 to make the equations less cumbersome.
For a free particle in one dimension E= p 2 /2m= 2 k 2 /2m the energy eigenstates are also momentum eigenstates, we label them |k , so
U( t )= e iHt/ = e iHt/ dk 2π |kk| = e i k 2 t/2m dk 2π |kk| .
Let’s consider (following Shankar and others) what seems the simplest example.
Suppose that at t= t 0 =0 , a particle is at x 0 ψ(x,t=0)=δ(x x 0 )=| x 0 : what is the probability amplitude for finding it at x at a later time t ? (This would be just its wave function at the later time.)
x|U( t,0 )| x 0 = e i k 2 t/2m dk 2π x|kk| x 0 = e i k 2 t/2m dk 2π e ik( x 0 x ) = m 2πit e im ( x 0 x ) 2 /2t ,
On examining the above expression, though, it turns out to be nonsense! Noting that the term in the exponent is pure imaginary, | ψ( x,t ) | 2 =m/2πt independent of x ! This particle apparently instantaneously fills all of space, but then its probability dies away as 1/t
Question: Where did we go wrong?
Answer: Notice first that | ψ( x,t ) | 2 is constant throughout space. This means that the normalization, | ψ( x,t ) | 2 dx= ! And, as we’ve seen above, the normalization stays constant in time the propagator is unitary. Therefore, our initial wave function must have had infinite norm. That’s exactly right we took the initial wave function ψ(x,t=0)=δ(x x 0 )=| x 0 .
Think of the δ -function as a limit of a function equal to 1/Δ over an interval of length Δ , with Δ going to zero, and it’s clear the normalization goes to infinity as 1/Δ . This is not a meaningful wave function for a particle. Recall that continuum kets like | x 0 are normalized by x| x =δ( x x ) , they do not represent wave functions individually normalizable in the usual sense. The only meaningful wave functions are integrals over a range of such kets, such as dxψ( x ) |x . In an integral like this, notice that states |x within some tiny x -interval of length δx, say, have total weight ψ( x )δx , which goes to zero as δx is made smaller, but by writing ψ(x,t=0)=δ(x x 0 )=| x 0 we took a single such state and gave it a finite weight. This we can’t do.
Of course, we do want to know how a wave function initially localized near a point develops. To find out, we must apply the propagator to a legitimate wave function one that is normalizable to begin with. The simplest “localized particle” wave function from a practical point of view is a Gaussian wave packet,
ψ( x ,0 )= e i p 0 x / e x 2 /2 d 2 ( π d 2 ) 1/4 .
(I’ve used d in place of Shankar’s Δ here to try to minimize confusion with Δx, etc.)
The wave function at a later time is then given by the operation of the propagator on this initial wave function:
ψ(x,t)= U(x,t; x ,0) e i p 0 x / e x 2 /2 d 2 ( π d 2 ) 1/4 d x = m 2πit e im (x x ) 2 /2t e i p 0 x / e x 2 /2 d 2 ( π d 2 ) 1/4 d x .
The integral over x is just another Gaussian integral, so we use the same result,
d x e a x 2 +b x = π a e b 2 /4a .
Looking at the expression above, we can see that
b= im t ( x p 0 t m ) , a= 1 2 d 2 im 2t .
This gives
ψ( x,t )= π 1/4 d( 1+ it m d 2 ) exp( im x 2 2t )exp( im t ( x p 0 t m ) 2 2( 1+ it m d 2 ) )
where the second exponential is the term e b 2 /4a . As written, the small t limit is not very apparent, but some algebraic rearrangement yields:
ψ( x,t )= π 1/4 d( 1+it/m d 2 ) exp( ( x p 0 t/m ) 2 2 d 2 ( 1+it/m d 2 ) )exp( i p 0 ( x p 0 t/2m ) ) .
It is clear that this expression goes to the initial wave packet as t goes to zero. Although the phase has contributions from all three terms here, the main phase oscillation is in the third term, and one can see the phase velocity is one-half the group velocity, as discussed earlier.
The resulting probability density:
| ψ( x,t ) | 2 = 1 π( d 2 + 2 t 2 / m 2 d 2 ) exp ( x p 0 t/m ) 2 ( d 2 + 2 t 2 / m 2 d 2 ) .
This is a Gaussian wave packet, having a width which goes as t/md for large times, where d is the width of the initial packet in x -space so /md is the spread in velocities Δv within the packet, hence the gradual spreading Δv t in x -space.
It’s amusing to look at the limit of this as the width d of the initial Gaussian packet goes to zero, and see how that relates to our δ -function result. Suppose we are at distance x from the origin, and there is initially a Gaussian wave packet centered at the origin, width dx. At time tmxd/ , the wave packet has spread to x and has | ψ( x,t ) | 2 of order 1/x at x. Thereafter, it continues to spread at a linear rate in time, so locally | ψ( x,t ) | 2 must decrease as 1/t to conserve probability. In the δ -function limit d0 , the wave function instantly spreads through a huge volume, but then goes as 1/t as it spreads into an even huger volume. Or something.
Schrödinger and Heisenberg Representations
Assuming a Hamiltonian with no explicit time dependence, the time-dependent Schrödinger equation has the form
and as discussed above, the formal solution can be expressed as:
| ψ( x,t )= e iHt/ | ψ( x,t=0 ).
Now, any measurement on a system amounts to measuring a matrix element of an operator between two states (or, more generally, a function of such matrix elements).
In other words, the physically significant time dependent quantities are of the form
φ( t )|A| ψ( t )= φ( 0 )| e iHt/ A e iHt/ | ψ( 0 )
where A is an operator, which we are assuming has no explicit time dependence.
So in this Schrödinger picture, the time dependence of the measured value of an operator like x or p comes about because we measure the matrix element of an unchanging operator between bras and kets that are changing in time.
Heisenberg took a different approach: he assumed that the ket describing a quantum system did not change in time, it remained at | ψ( 0 ), but the operators evolved according to:
A H (t)= e iHt/ A H (0) e iHt/ .
Clearly, this leads to the same physics as before. The equation of motion of the operator is:
i d A H (t) dt =[ A H (t),H ].
The Hamiltonian itself does not change in time energy is conserved, or, to put it another way, H commutes with e iHt/ . But for a nontrivial Hamiltonian, say for a particle in one dimension in a potential,
H= p 2 /2m+V(x)
the separate components will have time-dependence, parallel to the classical case: the kinetic energy of a swinging pendulum varies with time. (For a particle in a potential in an energy eigenstate the expectation value of the kinetic energy is constant, but this is not the case for any other state, that is, for a superposition of different eigenstates.) Nevertheless, the commutator of x,p will be time-independent:
[ x H (t), p H (t) ]= e iHt/ [ x H (0), p H (0) ] e iHt/ = e iHt/ i e iHt/ =i.
(The Heisenberg operators are identical to the Schrödinger operators at t=0. )
Applying the general commutator result [ A,BC ]=[ A,B ]C+B[ A,C ] ,
[ x H (t), p 2 H (t) 2m ]= i p H (t) m
d x H (t) dt = p H (t) m
and since [ x H (t), p H (t) ]=i, p H (t)=id/d x H (t) ,
d p H (t) dt = 1 i [ p H (t),V( x H (t)) ]=V( x H (t)).
This result could also be derived by writing V( x ) as an expansion in powers of x, then taking the commutator with p.
Exercise: check this.
Notice from the above equations that the operators in the Heisenberg Representation obey the classical laws of motion! Ehrenfest’s Theorem, that the expectation values of operators in a quantum state follow the classical laws of motion, follows immediately, by taking the expectation value of both sides of the operator equation of motion in a quantum state.
Simple Harmonic Oscillator in the Heisenberg Representation
For the simple harmonic oscillator, the equations are easily integrated to give:
x H (t)= x H (0)cosωt+( p H (0)/mω)sinωt p H (t)= p H (0)cosωtmω x H (0)sinωt.
We have put in the H subscript to emphasize that these are operators. It is usually clear from the context that the Heisenberg representation is being used, and this subscript may be safely omitted.
The time-dependence of the annihilation operator a is:
a( t )= e iHt/ a( 0 ) e iHt/
H=ω( a ( t )a( t )+ 1 2 ).
Note again that although H is itself time-independent, it is necessary to include the time-dependence of individual operators within H.
i d dt a( t )=[ a( t ),H ] =ω[ a( t ), a ( t )a( t ) ] =ω[ a( t ), a ( t ) ]a( t ) =ωa( t )
a( t )=a( 0 ) e iωt .
Actually, we could have seen this as follows: if |n are the energy eigenstates of the simple harmonic oscillator,
e iHt/ |n= e inωt/ |n= e inωt |n.
Now the only nonzero matrix elements of the annihilation operator a ^ between energy eigenstates are of the form
n1|a( t )|n= n1| e iHt/ a( 0 ) e iHt/ |n = e iω( n1 )t n1|a( 0 )|n e iωnt = n1|a( 0 )|n e iωt .
Since this time-dependence is true of all energy matrix elements (trivially so for most of them, since they’re identically zero), and the eigenstates of the Hamiltonian span the space, it is true as an operator equation.
Evidently, the expectation value of the operator a( t ) in any state goes clockwise in a circle centered at the origin in the complex plane. That this is indeed the classical motion of the simple harmonic oscillator is confirmed by recalling the definition a= ξ+iπ 2 = 1 2mω ( mωx+ip ) , so the complex plane corresponds to the ( mωx,p ) phase space discussed near the beginning of the lecture on the Simple Harmonic Oscillator. We’ll discuss this in much more detail in the next lecture, on Coherent States.
The time-dependence of the creation operator is just the adjoint equation: a ( t )= a ( 0 ) e iωt .
previous index next PDF |
1385bb5dc44b4d47 | Dismiss Notice
Join Physics Forums Today!
You cannot derieve Schrödinger Equation .
1. Jan 22, 2006 #1
"You cannot derieve Schrödinger Equation".
Bah. We're being told this over and over again. Then the game guy invents operators to extract momentum and energy from wavefunction, then puts them in Newtwon equation! He's saying exactly this:
[tex]\frac{p^2}{2m} + V = E[/tex]
Should I look amazed when this equation is consistent with [tex]\frac{d<p>}{dt} = <-\frac{dV}{dr}>[/tex]. It has a name BTW, Ehrenfest's theorem. Aside from what a great discovery this is, what Dirac wrote seems just to be relativistic version of it. Put operators in [tex]E^2 - (pc)^2 - (mc^2)^2 = 0[/tex]. With some tricks to make it linear.
And why are quantum teachers are proudly (I simply hate the look in their face, when pleasured by uncertainty) trying to sell us: "you can Not derieve Schrödinger equation any way, it is a fundamental law of nature!"
Then I stare blankly at them and say: "What I'm saying right now is wrong."
Last edited: Jan 22, 2006
2. jcsd
3. Jan 22, 2006 #2
May be the problem here is in the fact that Quantum Axiomatic not unificated. If we read papers of different authors, we can see that thay used a different axioms. Very often Schrödinger equation is one of axioms because it is not derived but postulate.
4. Jan 22, 2006 #3
The way that you "derive" the Schrodinger equation is that you assume that the hamiltonian is the generator of finite time translations. As for the form of the hamiltonian, it comes from making a classical correspondence and introducing operators that obey the canonical commutation relations, or by arriving at the momentum and position forms from the assumption that the momentum is the generator of infinitesimal space translations, and then out comes the form of the momentum operator in the position representation.
5. Jan 23, 2006 #4
User Avatar
Science Advisor
Homework Helper
This symmetry-based deduction of SE is found in Sakurai's book and i'm not really a big fan of it. The axiomatic formulation of QM in the Dirac formulation is preferrable to any other approach to finding an evolution equation for quantum states.
P.S. SE is really a consequence in other formulations of QM: von Neumann's, Feynman's and Schwinger's...:wink: But in Dirac's it's an axiom.
6. Jan 23, 2006 #5
What they are saying is that the Schrödinger Equation cannot be derived in a vaccum, i.e. you'd need other postulates to derive it. E.g. you'd have to postulate things like "Replace observables in classical equations with corresponding operators." But postulates must be independant, otherwise they are not postulates. I may be missing a point of logic here where, perhaps, it can be shown that you need more postulates that are postulated in the list of postulates that QM is based on. The list of postulates may not be unique either. Perhaps there are other lists of postulates in which the Schrödinger Equation does not appear. This is what you'd have to show in order to say that you can derive the Schrödinger Equation from other postulates.
7. Jan 23, 2006 #6
I agree that you cannot derieve operators from something else, they're just to extract necessary stuff from wavefunction representing the particle, and since this wavefunction isn't anywhere else (perhaps de Broglie, we may remember), OK, they're basic things.
But,I'm not talking about operators, whom cannot define anything mechanical on their own. I'm talking about something solidly related to the real world: Schrödinger equation. And what I'm saying is, it's a combination of classic mechanics and wavefunction + some new math. They did derieve Schrödinger equation from wavefunction operators and classic mechanics, and still I hear: "It's a fundamental nature law, that cannot be derieved from something else". But wasn't that exactly how Schrödinger did derieve it?
Last edited: Jan 23, 2006
8. Jan 23, 2006 #7
The thing is, if you use the symmetry-based deduction, it sets you up quite well for thinking about Noether's Theorem, so in that sense it's quite useful.
The Schrodinger equation arises out of some postulates of quantum mechanics, particularly that observables are generators of some sort of unitary transformation of a system. In the case of the hamiltonian, it's the generator of time evolution. For momentum, it's the generator of space translation. Angular momentum gets rotation. If you think of it like this, then you "derive" the Schrodinger equation as such, but you cannot arrive at it from anything classical because it isn't classical. Classical-based arguments are not correct.
9. Jan 23, 2006 #8
User Avatar
Staff: Mentor
Schrödinger was inspired to his equation by making an analogy between mechanics and optics. In this analogy, quantum mechanics corresponds to classical mechanics in a similar way as wave optics corresponds to geometrical optics. I posted a more detailed description sometime last year, based on one of Schrödinger's papers. Let's see if I can find it... ah, here it is.
10. Jan 23, 2006 #9
The beginning of Schrödinger's article "An Undulatory Theory of the Mechanics of Atoms and Molecules", E. Schrödinger Phys. Rev. 28, 1049–1070 (1926) is quite an interesting read if you care to see how Schrödinger used the optical-mechanical analogy. It can be found at http://link.aps.org/abstract/PR/v28/p1049
11. Jan 23, 2006 #10
Well technically that's not all true: the Hamiltonian is also the classical generator of time translations (i.e. via the Poisson bracket), and similar stuff can be said for momentum generating space translations etc.
Similar Discussions: You cannot derieve Schrödinger Equation .
1. Schrödinger equation (Replies: 11) |
90884acb23bb7c7e | Sign up ×
In classical mechanics, $F=ma$ tells us how to evolve a system at time $t=t_0$ to $t=t_0+dt$.
In quantum mechanics, the Schrodinger equation gives us a similar recipe.
These equations are, in a certain sense, completely deterministic. Is it possible that nature only appears to be deterministic because the only language we know how to express physics is math (particularly equations), which (not to offend statisticians) seems to be particularly apt at describing deterministic systems?
In other words, are there possible time-evolution laws that are both non-deterministic and falsifiable?
If not, is determinism not falsifiable?
share|cite|improve this question
2 Answers 2
"are there possible time-evolution laws that are both non-deterministic and falsifiable?"
Yes, they are called stochastic (differential) equations. The classic example is the Langevin equation, which is Newton's law with a random force.
share|cite|improve this answer
Interesting. I still feel that this is deterministic in some sense. By running Monte Carlo simulations, etc. one could in principle map out the probability distribution of this particle as a function of time. In this sense, it almost as deterministic as QM – hwlin Feb 23 '13 at 3:59
Yes, the probability distribution evolves deterministically. Whatever your non-deterministic mechanics are you can describe it by a deterministically evolving probability distribution on some appropriate space of states. – Michael Brown Feb 23 '13 at 4:19
@hwlin - To elaborate Michael's point a little bit, if something is not deterministic once, we can always take a massive ensemble of similar systems and make some kind of a statistical inference which is true on average but not necessarily in every case. This is how the whole of statistical mechanics came about, and leading from that, stochastic processes and non-equilibrium stat mech as well. So essentially, take a large enough sample of your non-deterministic thing, and we have the tools to tell you what will happen. :) – Kitchi Feb 23 '13 at 9:57
To elaborate a bit more. The Langevin equation for a single system is $\tilde{F}=m\tilde{a}$, which is non deterministic because both $\tilde{F}$ and $\tilde{a}$ are random. If we take averages on both sides we recover the deterministic $F=ma$ with $F=\langle\tilde{F}\rangle$ and $a=\langle\tilde{a}\rangle$. Thus the ensemble behaves deterministically because we are averaging out the random fluctuations, but each individual system continues being non-deterministic. – juanrga Feb 24 '13 at 13:55
$F=ma$ only applies to a special class of classical systems. It does not apply to non-deterministic classical systems for which more general equations of motion are needed:
Poincaré resonances and the extension of classical dynamics
Poincaré resonances and the limits of trajectory dynamics
The same about the Schrödinger equation except that any ordinary textbook on QM already explains you in what situations you cannot use the Schrödinger equation to describe the evolution of the system under study
The quantum version of the above extension of classical dynamics is covered in The Liouville Space Extension of Quantum Mechanics
share|cite|improve this answer
Your Answer
|
3705bfffdb4ce154 | This Quantum World/Implications and applications/Atomic hydrogen
From Wikibooks, open books for an open world
Jump to: navigation, search
Atomic hydrogen[edit]
While de Broglie's theory of 1923 featured circular electron waves, Schrödinger's "wave mechanics" of 1926 features standing waves in three dimensions. Finding them means finding the solutions of the time-independent Schrödinger equation
with the potential energy of a classical electron at a distance from the proton. (Only when we come to the relativistic theory will we be able to shed the last vestige of classical thinking.)
In using this equation, we ignore (i) the influence of the electron on the proton, whose mass is some 1836 times larger than that of he electron, and (ii) the electron's spin. Since relativistic and spin effects on the measurable properties of atomic hydrogen are rather small, this non-relativistic approximation nevertheless gives excellent results.
For bound states the total energy is negative, and the Schrödinger equation has a discrete set of solutions. As it turns out, the "allowed" values of are precisely the values that Bohr obtained in 1913:
However, for each there are now linearly independent solutions. (If are independent solutions, then none of them can be written as a linear combination of the others.)
Solutions with different correspond to different energies. What physical differences correspond to linearly independent solutions with the same ?
Using polar coordinates, one finds that all solutions for a particular value are linear combinations of solutions that have the form
turns out to be another quantized variable, for implies that with In addition, has an upper bound, as we shall see in a moment.
Just as the factorization of into made it possible to obtain a -independent Schrödinger equation, so the factorization of into makes it possible to obtain a -independent Schrödinger equation. This contains another real parameter over and above whose "allowed" values are given by with an integer satisfying The range of possible values for is bounded by the inequality The possible values of the principal quantum number the angular momentum quantum number and the so-called magnetic quantum number thus are:
Each possible set of quantum numbers defines a unique wave function and together these make up a complete set of bound-state solutions () of the Schrödinger equation with The following images give an idea of the position probability distributions of the first three states (not to scale). Below them are the probability densities plotted against Observe that these states have nodes, all of which are spherical, that is, surfaces of constant (The nodes of a wave in three dimensions are two-dimensional surfaces. The nodes of a "probability wave" are the surfaces at which the sign of changes and, consequently, the probability density vanishes.)
S states.jpg
Take another look at these images:
The letters s,p,d,f stand for l=0,1,2,3, respectively. (Before the quantum-mechanical origin of atomic spectral lines was understood, a distinction was made between "sharp," "principal," "diffuse," and "fundamental" lines. These terms were subsequently found to correspond to the first four values that can take. From onward the labels follows the alphabet: f,g,h...) Observe that these states display both spherical and conical nodes, the latter being surfaces of constant (The "conical" node with is a horizontal plane.) These states, too, have a total of nodes, of which are conical.
Because the "waviness" in is contained in a phase factor it does not show up in representations of To make it visible, it is customary to replace by its real part as in the following images, which do not represent probability distributions.
The total number of nodes is again the total number of non-spherical nodes is again but now there are plane nodes containing the axis and conical nodes.
What is so special about the axis? Absolutely nothing, for the wave functions which are defined with respect to a different axis, make up another complete set of bound-state solutions. This means that every wave function can be written as a linear combination of the functions and vice versa. |
efff207c65534767 | GPGPU with WebGL: solving Laplace’s equation
This is the first post in what will hopefully be a series of posts exploring how to use WebGL to do GPGPU (General-purpose computing on graphics processing units). In this installment we will solve a partial differential equation using WebGL, the Laplace’s equation more specifically.
Discretizing the Laplace’s equation
The Laplace’s equation, \nabla^2 \phi = 0, is one of the most ubiquitous partial differential equations in physics. It appears in lot of areas, including electrostatics, heat conduction and fluid flow.
To get a numerical solution of a differential equation, the first step is to replace the continuous domain by a lattice and the differential operators with their discrete versions. In our case, we just have to replace the Laplacian by its discrete version:
\displaystyle \nabla^2 \phi(x) = 0 \rightarrow \frac{1}{h^2}\left(\phi_{i-1\,j} + \phi_{i+1\,j} + \phi_{i\,j-1} + \phi_{i\,j+1} - 4\phi_{i\,j}\right) = 0,
where h is the grid size.
If we apply this equation at all internal points of the lattice (the external points must retain fixed values if we use Dirichlet boundary conditions) we get a big system of linear equations whose solution will give a numerical approximation to a solution of the Laplace’s equation. Of the various methods to solve big linear systems, the Jacobi relaxation method seems the best fit to shaders, because it applies the same expression at every lattice point and doesn’t have dependencies between computations. Applying this method to our linear system, we get the following expression for the iteration:
\displaystyle \phi_{i\,j}^{(k+1)} = \frac{1}{4}\left(\phi_{i-1\,j}^{(k)} + \phi_{i+1\,j}^{(k)} + \phi_{i\,j-1}^{(k)} + \phi_{i\,j+1}^{(k)}\right),
where k is a step index.
Solving the discretized problem using WebGL shaders
If we use a texture to represent the domain and a fragment shader to do the Jacobi relaxation steps, the shader will follow this general pseudocode:
1. Check if this fragment is a boundary point. If it’s one, return the previous value of this point.
2. Get the four nearest neighbors’ values.
3. Return the average of their values.
To flesh out this pseudocode, we need to define a specific representation for the discretized domain. Taking into account that the currently available WebGL versions don’t support floating point textures, we can use 32 bits RGBA fragments and do the following mapping:
R: Higher byte of \phi.
G: Lower byte of \phi.
B: Unused.
A: 1 if it’s a boundary value, 0 otherwise.
Most of the code is straightforward, but doing the multiprecision arithmetic is tricky, as the quantities we are working with behave as floating point numbers in the shaders but are stored as integers. More specifically, the color numbers in the normal range, [0.0, 1.0], are multiplied by 255 and rounded to the nearest byte value when stored at the target texture.
My first idea was to start by reconstructing the floating point numbers for each input value, do the required operations with the floating numbers and convert the floating point numbers to color components that can be reliably stored (without losing precision). This gives us the following pseudocode for the iteration shader:
// wc is the color to the "west", ec is the color to the "east", ...
float w_val = wc.r + wc.g / 255.0;
float e_val = ec.r + ec.g / 255.0;
// ...
float val = (w_val + e_val + n_val + s_val) / 4.0;
float hi = val - mod(val, 1.0 / 255.0);
float lo = (val - hi) * 255.0;
fragmentColor = vec4(hi, lo, 0.0, 0.0);
The reason why we multiply by 255 in place of 256 is that we need val_lo to keep track of the part of val that will be lost when we store it as a color component. As each byte value of a discrete color component will be associated with a range of size 1/255 in its continuous counterpart, we need to use the “low byte” to store the position of the continuous component within that range.
Simplifying the code to avoid redundant operations, we get:
float val = (wc.r + ec.r + nc.r + sc.r) / 4.0 +
(wc.g + ec.g + nc.g + sc.g) / (4.0 * 255.0);
float lo = (val - hi) * 255.0;
The result of running the full code, implemented in GLSL, is:
Solving the Laplace's equation using a 32x32 grid. Click the picture to see the live solving process (if your browser supports WebGL).
As can be seen, it has quite low resolution but converges fast. But if we just crank up the number of points, the convergence gets slower:
Incompletely converged solution in a 512x512 grid. Click the picture to see a live version.
How can we reconcile these approaches?
The basic idea behind multigrid methods is to apply the relaxation method on a hierarchy of increasingly finer discretizations of the problem, using in each step the coarse solution obtained in the previous grid as the “starting guess”. In this mode, the long wavelength parts of the solution (those that converge slowly in the finer grids) are obtained in the first coarse iterations, and the last iterations just add the finer parts of the solution (those that converge relatively easily in the finer grids).
The implementation is quite straightforward, giving us fast convergence and high resolution at the same time:
Multigrid solution using grids from 8x8 to 512x512. Click the picture to see the live version.
It’s quite viable to use WebGL to do at least basic GPGPU tasks, though it is, in a certain sense, a step backward in time, as there is no CUDA, floating point textures or any feature that helps when working with non-graphic problems: you are on your own. But with the growing presence of WebGL support in modern browsers, it’s an interesting way of partially accessing the enormous computational power present in modern video cards from any JS application, without requiring the installation of a native application.
In the next posts we will explore other kinds of problem-solving where WebGL can provide a great performance boost.
5 thoughts on “GPGPU with WebGL: solving Laplace’s equation
1. Evgeny says:
Very nice application. There are floating point textures in the nightly Chrome (for about 2 months)
There is “The Energy2D Simulator” open source Java based project
with very nice turbulent flows (3-5 applets). They used implicit scheme and relaxation. You could move in this directions too 🙂
• mchouza says:
You can see a more complex example of the same techniques in this (not very accurate and still unfinished) simulation of the two slits experiment with the Schrödinger equation:
In my next posts I will probably transition to floating point textures for this kind of simulations, as working with the combination of integer textures and floating point values in the shaders is quite painful 😀
Thanks for your comment and your very interesting website!
2. […] This is very cool indeed — GPGPU with WebGL: solving Laplace’s equation […]
3. […] In a previous post we solved Laplace’s Equation using WebGL. We will see how to implement the Lattice Boltzmann algorithm using WebGL shaders in the next post, but this post has a preview of the solution: Click on the image to go to the demo. New obstacles can be created by dragging the mouse over the simulation area. […]
4. […] method is introduced with WebGL demos in this blog. Demidov wrote something about Multigrid recently. Real-Time Gradient-Domain Painting is an […]
Leave a Reply
You are commenting using your account. Log Out / Change )
Twitter picture
Facebook photo
Google+ photo
Connecting to %s |
ed8de932ed0f588c | Search tips
Search criteria
Logo of wtpaEurope PMCEurope PMC Funders GroupSubmit a Manuscript
Phys Rev A. Author manuscript; available in PMC 2010 December 10.
Published in final edited form as:
PMCID: PMC3000527
Soliton absorption spectroscopy
We analyze optical soliton propagation in the presence of weak absorption lines with much narrower linewidths as compared to the soliton spectrum width using the novel perturbation analysis technique based on an integral representation in the spectral domain. The stable soliton acquires spectral modulation that follows the associated index of refraction of the absorber. The model can be applied to ordinary soliton propagation and to an absorber inside a passively modelocked laser. In the latter case, a comparison with water vapor absorption in a femtosecond Cr:ZnSe laser yields a very good agreement with experiment. Compared to the conventional absorption measurement in a cell of the same length, the signal is increased by an order of magnitude. The obtained analytical expressions allow further improving of the sensitivity and spectroscopic accuracy making the soliton absorption spectroscopy a promising novel measurement technique.
Light sources based on femtosecond pulse oscillators have now become widely used tools for ultrashort studies, optical metrology, and spectroscopy. Such sources combine broad smooth spectra with diffraction-limited brightness, which is especially important for high-sensitivity spectroscopic applications. Advances in near- and mid-infrared femtosecond oscillators made possible operation in the wavelength ranges of strong molecular absorption, allowing direct measurement of important molecular gases with high resolution and good signal-to-noise ratio [1]. At the same time, it was observed that such oscillators behave quite differently, when the absorb-ing gas fills the laser cavity or introduced after the output mirror [2, 3]. The issue has become especially important with introduction of the mid-IR femtosecond oscillators such as Cr:ZnSe [4], which operate in the 2–3 μm wave-length region with strong atmospheric absorption.
As an example, Figure 1 presents a typical spectrum of a Cr:ZnSe femtosecond oscillator, operating at normal atmospheric conditions. It is clearly seen, that the pulse spectrum acquires strong modulation features which resemble the dispersion signatures of the atmospheric lines. Being undesirable for some applications, such spectral modulation might at the same time open up interesting opportunity of intracavity absorption spectroscopy. Compared with the traditional intracavity laser absorption spectroscopy [5, 6] based on transient processes, this approach would have an advantage of being a well-quantified steady-state technique, that can be immediately coupled to frequency combs and optical frequency standards for extreme accuracy and resolution.
FIG. 1
Output spectrum of a 100-fs Cr:ZnSe oscillator (black solid line) when operated at open air. The atmospheric transmission (gray) is calculated from HITRAN database [7] and corresponds to a full round-trip. The lower graph (b) shows the expanded central ...
In this paper, we present a numerical and analytical treatment of the effect of a narrowband absorption on a femtosecond pulse, considered as a dissipative soliton. Such a treatment covers both, passively modelocked ultrashort pulse oscillators with intracavity absorbers, and soliton propagation in fibers with impurities. The theoretical results are compared with the experiment for a femtosecond Cr:ZnSe oscillator operating at normal atmospheric conditions. We prove that the spectral modulation imposed by a narrowband absorption indeed accurately follows the associated index of refraction when the absorber linewidth is sufficiently narrow.
Our approach is based on the treatment of an ultrashort pulse as one-dimensional dissipative soliton of the nonlinear complex Ginzburg-Landau equation (CGLE) [8, 9]. This equation has such a wide horizon of application that the concept of “the world of the Ginzburg-Landau equation” has become broadly established [10]. In particular, such a model describes a pulse with the duration T0 inside an oscillator or propagating along a nonlinear fiber.
To obey the CGLE, the electromagnetic field with the amplitude A(z, t) should satisfy the slowly-varying amplitude approximation, provided by the relation ω0 >> 1/T0, where ω0 is the field carrier frequency, t is the local time, and z is the propagation coordinate. This approximation is well satisfied even for pulses of nearly single optical cycle duration [11]. When additionally we can neglect the field variation along the cavity round-trip or the variation of material parameters along a fiber, as well as the contribution of higher-order dispersions, the amplitude dynamics can be described on the basis of the generalized CGLE [9, 1214]
where P [equivalent] |A|2 is the instant field power and α is the inverse gain bandwidth squared. The nonlinear terms in Eq. (1) describe i) saturable self-amplitude modulation (SAM) with nonlinear gain defined by the non-linear operator Σ^, and ii) self-phase modulation (SPM), defined by the parameter γ. For a laser oscillator, γ = 4πnn2lcryst0Aeff. Here λ0 is the wavelength; n and n2 are the linear and nonlinear refractive indexes of an active medium, respectively; lcryst is the length of the active medium; Aeff = πω2 is the effective area of a Gaussian mode with the radius w inside the active medium. The propagation coordinate z is naturally normalized to the cavity length, i.e. z becomes the cavity round-trip number. For a fiber propagation, γ = 2πnn20Aeff, where n and n2 are the linear and nonlinear refractive indexes of a fiber, respectively, and Aeff is the effective mode area of the fiber [15]. Finally, β2 is the round-trip net group delay dispersion (GDD) for an oscillator or the group velocity dispersion parameter for a fiber with β2 < 0 corresponding to anomalous dispersion.
The typical explicit expressions for Σ^[P] in the case, when the SAM response is instantaneous, are i) Σ^[P]=κP (cubic nonlinear gain), ii) Σ^[P]=κ(PζP2) (cubic-quintic nonlinear gain), and iii) Σ^[P]=κP(1+ζP) (perfectly saturable nonlinear gain) [9, 16]. The second case corresponds to an oscillator mode-locked by the Kerr-lensing [8]. The third case represents, for instance, a response of a semiconductor saturable absorber, when T0 exceeds its excitation relaxation time [17]. However, if the latter condition is not satisfied, one has to add an ordinary differential equation for the SAM and Eq. (1) becomes an integro-differential equation (see below).
The σ-term is the saturated net-loss at the carrier frequency ω0, which is the reference frequency in the model. This term is energy-dependent: the pulse energy E(z)P(z,t)dt can be expanded in the vicinity of threshold value σ = 0 as σ ≈ δ (E/E* – 1) [18], where δ = [ell]2/g0 ([ell] is the frequency-independent loss and g0 is the small-signal gain, both for the round-trip) and E* is the round-trip continuous-wave energy equal to the average power multiplied by the cavity period.
The operator Γ^ describes an effect of the frequency-dependent losses, which can be attributed to an absorp-tion within the dissipative soliton spectrum. That can be caused, for instance, by the gases filling an oscillator cavity or the fiber impurities for a fiber oscillator. Within the framework of this study, we neglect the effects of loss saturation and let Γ^ be linear with respect to A (z, t). The expression for Γ^[A(z,t)] is more convenient to describe in the Fourier domain, Ã(z, ω) being the Fourier image of A(z, t). If the losses result from the l independent homogeneously broadened lines centered at ωl (relative to ω0) with linewidths Ωl and absorption coefficients An external file that holds a picture, illustration, etc.
Object name is ukmss-32699-ig0008.jpgl < 0, then the action of operator Γ^ can be written down in the form of a superposition of causal Lorentz profiles [19, 20]
In the more general case the causal Voigt profile has to be used for Γ^[A~] [21]. Causality of the complex profile of Eq. (2) demonstrates itself in the time domain, where one has
The conventional analysis of perturbed soliton propagation includes approximation of the effective group delay dispersion of the perturbation as Taylor series β(ω)=β2(ωω0)22+β3(ωω0)36, assuming that the additional terms β2, β3, … are sufficiently small. This approach is absolutely not applicable in our case, because the dispersion, associated with a narrow linewidth absorber can be extremely large. For example, an atmospheric line with a typical width Ω = 3GHz and peak absorption of only 10−3 produces the group delay dispersion modulation of β2=±0.9ns2, far exceeding the typical intracavity values of β2 ~ 102 ..104 fs2. Moreover, decreasing the linewidth Ω (and thus reducing the overall absorption of the line) causes the group delay dispersion term to diverge as β2Ω2.
In the following we shall therefore start with a numerical analysis to establish the applicability and stability of the model, and then present a novel analysis technique based on integral representation in the spectral domain.
Without introducing any additional assumptions, we have solved the Eqs. (1,2) numerically by the symmetrized split-step Fourier method. To provide high spectral resolution, the simulation local time window contains 222 points with the mesh interval 2.5 fs. The simulation parameters for the cubic-quintic version of Eq. (1) are presented in Table I. The GDD parameter β2 = −1600 fs2 provides a stable single pulse with FWHM ≈100 fs. The single low-power seeding pulse converges to a steady-state solution during z ≈5000.
Laser simulation parameters. The numbers correspond to a Cr:ZnSe femtosecond oscillator of Fig. 1 with lcryst =0.4 cm, w =80 μm, λ0 =2.5 μm, n = 2.44, n2 = 10−14 cm2/W, [ell] =0.075, g0 =2.5[ell].
The pulse propagation within a linear medium (e.g. an absorbing gas outside an oscillator, a passive fiber containing some impurities, or microstructured fiber filled with a gas) is described by Eqs. (1,2) with zero α, γ, Σ^, and the initial A (0, t) corresponding to output oscillator pulse. The obvious effect of the absorption lines on a pulse spectrum are the dips at ωl (Fig. 4a), that simply follow the Beer’s law. This regime allows using the ultrashort pulse for conventional absorption spectroscopy [1]. The nonzero real part of an absorber permittivity (i.e. (Γ^)0) does significantly change the pulse in time domain [22], but does not alter the spectrum. The pulse spectrum reveals only imaginary part of an absorber permittivity (i.e. (Γ^)0).
FIG. 4
(Color online) Central parts of the dissipative soliton spectra in an oscillator with the single absorption line centered at ω =0. (a) An external file that holds a picture, illustration, etc.
Object name is ukmss-32699-ig0008.jpg = −0.05; Ω = 4 GHz (solid curve, open circles and crosses) and 1 GHz (dashed curve, open squares ...
Introducing the nonzero SPM coefficient γ=β2A02T02 with zero α and Σ^ transforms Eq. (1) to a perturbed nonlinear Schrödinger equation and results in the true perturbed soliton propagation. In this case, as shown in Fig. 2b, the situation becomes dramatically different. Besides the dips in the spectrum shown by the gray curve corresponding to a contribution of (Γ^) only, there is a pronounced contribution from the phase change induced by the dispersion of absorption lines (solid curve in Fig. 2b corresponds to the complex profile of Γ^ in Eq. (2)). As a result, the spectral profile has the sharp bends with the maximum on the low-frequency side and the minimum on the high-frequency side of the corresponding absorption line. At the same time, the dips in the spectrum due to absorption are strongly suppressed. In addition to spectral features, the soliton decays, acquires a slight shift towards the higher frequencies, and its spectrum gets narrower due to the energy loss.
FIG. 2
(Color online) Part of the pulse spectrum after: (a) linear propagation for 25 dispersion lengths inside a fiber with two absorption lines and (b) perturbed soliton propagation for 100 dispersion lengths. Gray curve in (b) corresponds to the contribution ...
The soliton spectrum reveals in this case the real part of the absorber permittivity. However, the continuous change of the soliton shape due to energy decay renders the problem as a non-steady state case. The situation becomes different in a laser oscillator, where pumping provides a constant energy flow to compensate the absorption loss.
Let us consider the steady-state intra-cavity narrowband absorption inside a passively modelocked femtosecond oscillator, where the pulse is controlled by the SPM and the SAM, which is described by the cubic-quintic Σ^ in Eq. (1) modeling the Kerr-lens mode-locking mechanism [12]. Such an oscillator can operate both, in the negative dispersion regime [12] with an chirped-free soliton-like pulse, and in the positive dispersion regime [18], where the propagating pulse acquires strong positive chirp. In this study, we consider only the negative dispersion regime, the positive dispersion regime will be a subject of following studies.
The results of the simulation are shown in Figs. Figs.33 and and4,4, and they demonstrate the same dispersion-like modulation of the pulse spectrum. Fig. 3 demonstrates action by three narrow (Ω =2 GHz) absorption lines centered at −10, 0 and 10 GHz in the neighborhood of ω = 0. One can see (Fig. 3a), that the absorption lines do not cause spectral dips at ωl, but produce sharp bends, very much like the case of the true perturbed Schrödinger soliton considered before. One can also clearly see the collective redistribution of spectral power from higher- to lower-frequencies, which enhances local spectral asymmetry (Fig. 3a). Such an asymmetry suggests that the dominating contribution to a soliton perturbation results from the real part of an absorber permittivity, which, in particular, causes the time-asymmetry of perturbation in the time domain. This asymmetry is seen in time domain as a ns-long modulated exponential precursor in Fig. 3b.
FIG. 3
(Color online) Dissipative soliton in an oscillator: (a) central part of the spectrum and (b) power P(t). An oscillator is filled with an absorber described by Eq. (2) with the triplet of lines: An external file that holds a picture, illustration, etc.
Object name is ukmss-32699-ig0008.jpg1 = An external file that holds a picture, illustration, etc.
Object name is ukmss-32699-ig0008.jpg2 = An external file that holds a picture, illustration, etc.
Object name is ukmss-32699-ig0008.jpg3 = −0.005, Ω1 = Ω2 = Ω ...
The simulated effect of a single narrow absorption line centered at ω = 0 is shown in Fig. 4 for different values of peak absorption An external file that holds a picture, illustration, etc.
Object name is ukmss-32699-ig0008.jpg and width Ω. In Fig. 4a, An external file that holds a picture, illustration, etc.
Object name is ukmss-32699-ig0008.jpg = −0.05 and Ω = 4 GHz (solid curve, open circles and crosses) and 1 GHz (dashed curve, open squares and triangles). The solid and dashed curves demonstrate the action of complex profile (2), whereas circles (squares) and crosses (triangles) demonstrate the separate action of (Γ^) and (Γ^), respectively. One can see, that the profile of perturbed spectrum traces that formed by only (Γ^) (i.e. it traces the real part of an absorber permittivity). One can say, that the pure phase effect ((Γ^), circles and squares in Figs. 4a,b) strongly dominates over the pure absorption ((Γ^), crosses in Figs. 4a,b), like that for the Schrödinger soliton. Such a domination enhances with a lowered An external file that holds a picture, illustration, etc.
Object name is ukmss-32699-ig0008.jpg (Fig. 4b, crosses), however the |An external file that holds a picture, illustration, etc.
Object name is ukmss-32699-ig0008.jpg| growth increases the relative contribution of (Γ^) and causes the frequency downshift of the bend (Fig. 4a). Amplitude of the bend traces the An external file that holds a picture, illustration, etc.
Object name is ukmss-32699-ig0008.jpg value, while its width is defined by Ω.
The SAM considered above is modeled by the cubicquintic nonlinear term Σ^ in Eq.(1). Such a SAM is typically realized by using the self-focusing inside an active medium (Kerr-lens modelocking). For reliable self-starting operation of mode-locked oscillator it is often desirable to use a suitable saturable absorber (SA), e.g. a semiconductor-based SESAM [23, 24]. Such an absorber can be described in the simplest case by a single-lifetime two-level model, giving the time-dependent loss coefficient Λ (t) as
where Λ0 is the loss coefficient for a small signal, Aeff is the effective beam area on the SA, Ts and Js are the SA relaxation time and the saturation energy fluency, respectively. Eq. (4) supplements Eq. (1) and the SAM term Σ^ in the latter has to be replaced by (Λ0 – Λ(t)). When the pulse width is longer than the SA relaxation time, one can replace Eq. (4) by its adiabatic solution so that
where ξ [equivalent] Ts/JsS is the inverse saturation power.
We have simulated Eqs. (1,2,4) in the case of Js =50 μJ/cm2 and Ts =0.5 ps, that correspond to the measurement in Fig. 1. Two cases have been considered: weak focusing (Aeff =4000 μm2 or saturation energy Es=2 nJ) and hard focusing (Aeff =1000 μm2 or saturation energy Es=0.5 nJ). We also considered Eq. (5) for the same peak saturation level as weakly focused SA (i.e. ξ−1=4 kW). In the latter case the SA effectively becomes instantaneous, and the perturbed soliton spectrum is the same as for the Kerr-lens modelocking, i.e. as for the cubic-quintic Σ^. When the saturation energy is sufficiently large, there is no difference between the models expressed by Eqs. (4) and (5). The effect of a narrow absorption line is similar to that of the soliton of the cubic-quintic Eq. (1). Decrease of the saturation energy Es causes down-shift of the pulse spectrum as a whole, but the narrow bend on the soliton spectrum reproduces the real part of the absorber permittivity. One can thus conclude that the type of the SAM is irrelevant for an effect of the narrowband absorption lines on a dissipative soliton spectrum.
Another important conclusion from the numerical simulations is the demonstrated stability of the dissipative soliton against perturbations induced by narrowband absorption. In the following analytical treatment we shall therefore omit the stability analysis.
To study the transformation of dissipative soliton spectrum under action of narrow absorption lines, we apply the perturbation method [9, 25]. Since the basic features already become apparent for the perturbed Schrödinger soliton and do not depend on SAM details, we shall consider the simplest case of the cubic nonlinear gain Σ^[P]=κP. The unperturbed solitonic chirp-free solution of such reduced equation with Γ^=0 is a (z, t) = A0sech (t/T0) exp [(t) + iqz] with dϕdt=ϖ. The unperturbed soliton parameters are [25]
where the equation parameters are confined
Hence, the soliton wavenumber is q = −γσ/κ.
Its is reasonable to treat the soliton of the reduced Eq. (1) as the Schrödinger soliton with the parameters constrained by the dissipative terms σ, α and κ (see Eqs. (6,7)). This implies that the equation, which has to be linearized with respect to a small perturbation copropagating with the soliton without beating, decay or growth (i.e. having a wavenumber real and equals to q [9]), is the perturbed nonlinear Schrödinger equation
Linearization of the latter with respect to a perturbation f (t) exp (iqz) results in
In the spectral domain, Eq. (9) becomes
where [25]
Here k (ω) is the frequency-dependent complex wave number, and S (ω) is the perturbation source term for Γ^ corresponding to Eq. (2).
Further, one may assume the phase matching between the soliton and its perturbation. This assumption in combination with the equality U (ω) = U* (ω), which holds for the Schrödinger soliton, results in dωU(ωω)f~(ω)=dωU(ωω)f~(ω).
The equation (10) for the Fourier image of perturbation is the Fredholm equation of second kind. Its solution can be obtained by the Neumann series method so that the iterative solution becomes [25]
where f~n(ω) is the n-th iteration and f~0(ω)=S(ω)[k(ω)q].
The “phase character” of a soliton perturbation (i-multiplier in lhs. of Eq. (9) and the expression for the source term (11)) demonstrate that the real part of absorber permittivity contributes to the real part of soliton spectral amplitude. Simultaneously, the resonant condition k (ω) – q =0, which is responsible for a dispersive wave generation caused by, for instance, the higher-order dispersions [9], is not reachable in our case. The resonance can appear in case of large |An external file that holds a picture, illustration, etc.
Object name is ukmss-32699-ig0008.jpg|, κ/γ, and ΩlT0, but such regimes are beyond the scope of this work.
Eq. (12) can be solved numerically. Fig. 5 shows (f~1(ω)) (dashed curve) and (f~1(ω)) (dotted curve). One can see, that the real part of absorber permittivity defines (f~(ω)) while the imaginary part of absorber permittivity defines (f~(ω)). That agrees with the simulation results and is opposite to the case of a linear pulse propagation. One can also see a tiny frequency down-shift θ of the of f~(ω) minimum from ωl like that in the simulations.
FIG. 5
(Color online) Central part of the dissipative soliton spectra perturbed by f~1(ω) (solid curve) or f~0(ω) (open squares) as well as the profiles of (f~1) (dashed curve) and (f~1) (dotted curve) from Eq. (12). Single ...
The pulse spectrum (solid curve in Fig. 5), results from interference of the perturbation with the soliton. For the chosen parameters of the absorption line the zero-order approximation f~0(ω) (open squares) is very close to the first-order approximation (solid curve) but is slightly down-shifted in the vicinity of the bend maximum and minimum.
With even narrower line width Ω of 1 GHz (Fig. 6) the spectral perturbation gets very close to the real part of absorber permittivity and, simultaneously, the spectral down-shift θ (location of the (f~1) minimum, dashed curve) vanishes. The f~0(ω) (gray solid curves) now perfectly matches the f~1(ω) (open circles and crosses) within a broad range of An external file that holds a picture, illustration, etc.
Object name is ukmss-32699-ig0008.jpg (gray solid curves 1 and 2 as well as circles and crosses belong to An external file that holds a picture, illustration, etc.
Object name is ukmss-32699-ig0008.jpg =−0.005 and −0.05, respectively). The bend amplitudes are in agreement with Eq. (11).
FIG. 6
(Color online) Central part of the dissipative soliton spectra perturbed by f~1(ω) (open circles and crosses) or f~0(ω) (solid gray curves) as well as the profiles of (f~1) (solid black curve) and (f~1) (dashed curve) ...
A superposition of three identical absorption lines, which corresponds to the numerical spectra in Fig. 2, is shown in Fig. 7. One can see, that the lowest-order analytical solution f~0(ω)accurately reproduces the numerical result. It is important, that a cumulative contribution of lines into k (ω) does not distort a superposition contribution of S (ω) into a soliton spectrum (see Eqs. (11,12)). This means that the individual contribution of a single line within a group is easily distinguishable and can be quantitatively assessed, opening way for interesting spectroscopic applications.
FIG. 7
(Color online) Central part of the dissipative soliton spectra perturbed by f~0(ω) (black solid curve) and the profiles of (f~0) (dashed curve) and (f~0) (dotted curve) from Eq. (12). Triplet of absorption lines is centered at ...
As Figs. Figs.55 and and66 suggest, the zero-order approximation f~0(ω)=S(ω)[k(ω)q] is quite accurate for a description of perturbation in the limit of |An external file that holds a picture, illustration, etc.
Object name is ukmss-32699-ig0008.jpg| << 1. This allows expressing the perturbed spectrum of an isolated line (see (11,12)) in analytical form [25]:
Eq. (13) allows further simplification in the case of |An external file that holds a picture, illustration, etc.
where Eqs. (6) and the condition αΩl21 have been used.
Eq. (14) demonstrates that the spectral bend follows the real part of absorber permittivity. Spectral down-shift of bend is the effect of O(2) and not included in (14). The perturbation is represented by the term in square brackets and its relative amplitude is proportional to An external file that holds a picture, illustration, etc.
Object name is ukmss-32699-ig0008.jpg. Furthermore, the aspect ratio of the kink grows with i) the increase of the relative contribution of the SAM κ/γ ; ii) the gain bandwidth 1α; iii) approaching of the resonance frequency ωl to the center of soliton spectrum (but the ratio of the aspect ratio to the local soliton spectral power increases with |ωl|, because the former decreases as ωl2 while the latter falls faster as cosh (πT0ωl/2)2); and iv) with an approaching to the soliton stability border, which corresponds to vanishing σ. It should be noted, that smaller σ entails the soliton width growth (Eq. (6)).
Since the soliton parameters are interrelated, it is instructive to express σ through the observable parameters such as soliton energy E or the soliton width T0. When αωl2σ (e.g. ωl ≈0 or/and an oscillator operates far from the stability border σ =0), the perturbation amplitude is inversely proportional to γκE2:
For a fixed gain bandwidth, the amplitude scales with squared pulsewidth T02. Ultimately, the latter equation is equivalent to
i.e. the relative perturbation amplitude near soliton central frequency is the ratio of incurred loss coefficient to the soliton wavenumber, regardless of the z coordinate normalization. Therefore, this analytical expression that has been derived for the self-consistent oscillator, should also be valid for the case of a soliton propagation in a long fiber when the conditions of applicability |An external file that holds a picture, illustration, etc.
Object name is ukmss-32699-ig0008.jpg| << 1, Ω << 1/T0 are met. The final form of the soliton spectrum thus becomes
where P~0(ω) is the spectrum of an unperturbed soliton.
In the analysis above we have shown that, for the case of sufficiently sparse, narrow and weak Lorentzian absorber lines, their spectral signatures are equivalent to the dispersion-like modulation with a relative amplitude equal to the peak absorption coefficient over oscillator round-trip (or nonlinear length for passive propagation) divided by the soliton wavenumber. For quantitative comparison with the experiment we recall equation (6) and express the maximum spectrum deviation of a single line at |ωωl|; = Ωl through observable parameters:
where χl = 2An external file that holds a picture, illustration, etc.
Object name is ukmss-32699-ig0008.jpgl is the peak absorption coefficient of the line, L is the absorber path length and Δν is the full width at half maximum of the soliton spectrum. Substituting the actual values of the setup in Fig. 1 (β2 = −820 ± 40 fs2, Δν =113 cm−1= 3.39 THz, round-trip air path length L = 149 cm, relative humidity 50 ± 1% at 21 ± 0.5 °C) and taking e.g. the line at 4088 cm−1 (122.56 THz, marked with an asterisk), we obtain |An external file that holds a picture, illustration, etc.
Object name is ukmss-32699-ig0008.jpg/q| = 0.33 ± 0.02 for the maximum modulation, which is in perfect agreement with the observed value of 31.5% (Fig. 1b, black line). The agreement is remarkably accurate given the less than optimal resolution of the spectrometer (0.25 cm−1) and significant third-order dispersion of about +104 fs3 that was not accounted for in the presented analysis.
It is important to notice, that the expression (18) includes only the externally observable soliton band-width and relatively stable dispersion parameter. The alignment-sensitive values like saturated losses σ, nonlinearity γ, nonlinearity saturation parameter κ, etc. which are in practice not known with sufficient accuracy, are all accounted for by the self-consistent soliton parameters.
Another important point is the fact, that the signal amplitude 2|An external file that holds a picture, illustration, etc.
Object name is ukmss-32699-ig0008.jpg/q| can be much bigger than that from conventional absorption spectroscopy χlL of the cell with the same length. The signal enhancement factor can be controlled by the pulse parameters and it exceeds an order of magnitude for the presented case (χlL = 5% for selected line and a single-pass cell of a resonator size). For additional sensitivity improvement one can apply the well-developed intracavity multi-pass cell technique [26]. The expression (18) suggests, that ultimate sensitivity can be obtained at the expense of the reduced bandwidth coverage Δν. In this respect, the presented technique has the same quadratic dependence of sensitivity on spectral bandwidth as the conventional intracavity absorption spectroscopy [5].
Further refinement of the presented theory should include demonstration of its applicability to arbitrarily shaped absorption features. The superposition property provides a strong argument for such extension, but it has to be rigorously proven for Doppler- and more general Voigt-shaped lines, and also for the dense line groups in e.g. Q-branches. It would be interesting also to extend the theory to the absorber lines at the soliton wings (Fig. 1a).
With the above issues resolved, the soliton-based spectroscopy may become a powerful tool for high-resolution, high-sensitivity spectroscopy and sensing. Possible implementation include soliton propagation in a gas-filled holey fibers, as well as already presented intracavity spectroscopy with femtosecond oscillators. The latter, being a natural frequency comb source, allows direct locking to optical frequency standards, providing for ultimate resolution and spectral accuracy.
We have been able to derive an analytical solution to the problem of an one-dimensional optical dissipative soliton propagating in a medium with narrowband absorption lines. We predict appearance of spectral modulation that follows the associated index of refraction rather than absorption profile. The novel perturbation analysis technique is based on integral representation in the spectral domain and is insensitive to the diverging differential terms, inherent to the Taylor series representation of the narrow spectral lines.
The model is applicable to a conventional soliton propagation and to a passively modelocked laser with intracavity absorber, the only difference being the characteristic propagation distance (dispersion length and cavity round-trip, respectively). In the latter case the prediction has been confirmed for a case of water vapour absorption lines in a mid-IR Cr:ZnSe oscillator. The model provides very good qualitative and quantitative agreement with experimental observations, opening a way to metrological and spectroscopical applications of the novel technique, which can provide a significant (order of magnitude and more) enhancement of the signal over conventional absorption for the same cell length.
We gratefully acknowledge insightful discussions and experimental advice from N. Picqué, G. Guelachvili (CNRS, Univ. Paris-Sud, France), and I. T. Sorokina (NTNU, Norway). This work has been supported by the Austrian science Fund FWF (projects 17973 and 20293) and the Austrian-French collaboration Amadée.
[1] Sorokin E, Sorokina IT, Mandon J, Guelachvili G, Picqué N. Opt. Express. 2007;15:16540. [PMC free article] [PubMed]
[2] Mandon J, Guelachvili G, Sorokina I, Sorokin E, Kalashnikov V, Picqué N. Europhysics Conference; Abstract Volume 32G. paper WEoB.4 at Europhoton 2008.
[3] Kalashnikov V, Sorokin E, Mandon J, Guelachvili G, Picqué N, Sorokina IT. paper TUoA.3 at Europhoton 2008. Abstract Volume 32G.
[4] Sorokina IT, Sorokin E, Carrig T. paper CMQ2 at CLEO/QELS 2006; Technical Digest on CD.
[5] Baev VM, Latz T, Toschek PE. Appl. Phys. B. 1999;69:171.
[6] Picqué N, Gueye F, Guelachvili G, Sorokin E, Sorokina IT. Opt. Lett. 2005;30:3410. [PMC free article] [PubMed]
[7] The HITRAN. database
[8] Haus HA, Fujimoto JG, Ippen EP. Structures for additive pulse mode locking. JOSA B. 1991;8:2068–2076.
[9] Akhmediev NN, Ankiewicz A. Solitons: Nonlinear Pulses and Beams. Chapman&Hall; 1997.
[10] Aranson IS, Kramer L. Rev. Mod. Phys. 2002;74:99.
[11] Brabec T, Krausz F. Phys. Rev. Lett. 1997;78:3282.
[12] Kärtner FX, editor. Few-cycle Laser Pulse Generation and its Applications. Springer Verlag; Berlin: 2004.
[13] Akhmediev NN, Ankiewicz A, editors. Dissipative Solitons. Springer Verlag; Berlin: 2005.
[14] the signs before i in Eq. (1) correspond to those in [8] and Akhmanov SA, Vysloukh VA, Chirkin AS. Optics of Femtosecond Laser Pulses. AIP; NY: 1992.
[15] Agrawal GP. Nonlinear Fiber Optics. Academic Press; San Diego: 2001.
[16] Biswas A, Konar S. Introduction to non-Kerr Law Optical Solitons. Chapman&Hall; Boca Raton: 2007.
[17] Haus HA, Silberberg Y. J. Opt. Soc. Am. B. 1985;2:1237.
[18] Kalashnikov VL, Podivilov E, Chernykh A, Apolonski A. Applied Physics B. 2006;83:503.
[19] Butylkin VS, Kaplan AE, Khronopulo Yu.G., Yakubovich EI. Resonant Nonlinear Interactions of Light with Matter. Springer Verlag; Berlin: 1989.
[20] Oughstun KE. Electromagmetic and Optical Pulse Propagation 1. Springer; NY: 2006.
[21] De Sousa Meneses D, Gruener G, Malki M, Echegut P. J. Non-Crystalline Solids. 2005;351:124.
[22] Yamaoka Y, Zeng L, Minoshima K, Matsumoto H. Appl. Opt. 2004;43:5523. [PubMed]
[23] Islam MN, Sunderman ER, Soccolich CE, Bar-Joseph I, Sauer N, Chang TY, Miller BI. IEEE J. Quantum Electron. 1989;25:2454.
[24] Keller U, Weingarten KJ, Kärtner FX, Kopf D, Braun B, Jung ID, Fluck R, Honninger C, Matuschek N, Aus der Au J. IEEE J. Selected Topics in Quantum Electron. 1996;2:435.
[25] Kalashnikov VL. Maple worksheet (unpublished)
[26] Kowalevicz AM, Sennaroglu A, Zare AT, Fujimoto JG. J. Opt. Soc. Am. B. 2006;23:760. |
6b83b3c1c60b4213 | Courses Description
PH 100 Physics
Vector algebra, Motion of Particle in one, two and three dimensions, Projectile motion, Uniform Circular motion, Force , mass, Newton’s laws, Tension and Normal force, Frictional forces, Concept of free body diagram, Electrostatic force, electrostatic field, Electric dipole, Electric flux, Gauss ‘s law, Electrostatic potential, magnetic field, Biot-Savart law, Effect of magnetic field on current carrying conductors, Ampere’s Law, How magnetism is used in a computer, Band theory, Insulators, metals, semiconductors, doped semiconductors, The p-n junction, The junction rectifier, LED, Transistor.
PH 103 Applied Physics
Vector algebra, Motion in two and three dimensions, Force and motion, Newton’s laws, Application of Newton’s second law for some specific forces, Friction, Rotation, Moment of inertia, Torque, Rotational Energy, Simple Harmonic Motion, Waves, Waves speed, Energy and Power of traveling waves, Doppler’s effect. Electrostatic force, electrostatic field, Electric dipole, Electric flux, Gauss ‘s law, Electrostatic potential, magnetic field, Biot-Savart law, Effect of magnetic field on current carrying conductors, Ampere’s Law, Magnetic dipole, Faraday’s law of electromagnetic induction, Energy stored in electric and magnetic fields, Introduction to solid state Physics, Superconductivity, Semiconductors and Modern trends in Atomic Physics.
PH 107 Physics I – Basic Mechanics
Vector and Scalars, Motions in 2 and 3 dimensions, projectile motion, uniform circular motion, Force and acceleration, Newton’s laws, frictional force, Work, Energy, Kinetic and Potential Energy, Gravitational force, Conservation of energy, Rotational motion, Angular velocity, Torque, Rotational Inertia, Oscillations, Simple Harmonic motion, Harmonic Oscillator, Waves, Transverse and Longitudinal waves, Wave speed, Energy and Power of Waves, Standing Waves. Inertial and non-inertial frame, Postulates of Relativity, The Lorentz Transformation, Relativity of time, Relativity of length, Relativity of mass, Transformation of velocity, variation of mass with velocity, mass energy relation and its importance, relativistic momentum and Relativistic energy.
PH 108 Physics II – Electricity and Magnetism
Electric charge, Coulomb's Law, Electric field, electric flux, Gauss's Law, Electric potential, Capacitors, Electric current, Ohm's law, Magnetic fields, Ampere's Law, Inductors, Faraday's Law, DC Circuits, Energy stored in magnetic fields, magnetic materials, induced magnetic fields. The Electromagnetic Model, Vector Analysis, Static Electric Fields, Solution of Electrostatic Problems, Steady Electric Currents. Electromagnetic waves, Poynting vector, Interference, Diffraction. Alternating Fields and Currents, Diamagnetism, Paramagnetism, Ferromagnetism, Hysteresis.
PH-201 Heat and Thermodynamics
Basic Concepts and Definitions in Thermodynamics: Thermodynamic system, Surrounding and Boundaries. Type of systems, Macroscopic and microscopic description of system, Heat and Temperature: Temperature, Kinetic theory of ideal gas, Work done on an ideal gas, First law of thermodynamics and its applications to adiabatic, isothermal, cyclic and free expansion. Reversible and irreversible processes, Second law of thermodynamics, Carnot theorem and Carnot engine. Heat engine, Entropy and Second law of thermodynamics, Entropy and Probability, Thermodynamic Functions: Thermodynamic functions, Introduction to Statistical Mechanics, Mean free path and microscopic calculations of mean free path. Distribution of Molecular Speeds, Distribution of Energies, Maxwell distribution, Maxwell Boltzmann energy distribution
PH-202 Waves and Oscillations
Simple and Damped Simple Harmonic Oscillation, Mass-Spring System, Simple Harmonic Oscillator Equation, Complex Number Notation, LC Circuit, Simple Pendulum, Quality Factor, LCR Circuit. Forced Damped Harmonic Oscillation, Coupled Oscillations, Transverse Waves, Longitudinal Waves, Traveling Waves, Standing Waves in a Finite Continuous Medium, Traveling Waves in an Infinite Continuous Medium, Energy Conservation, Transmission Lines, Reflection and Transmission at Boundaries, Electromagnetic Waves. Wave Pulses: Multi-Dimensional Waves, Interference and Diffraction of Waves
PH-203 Modern Physics
Motivation for Non--Classical Physics, Wave-Particle Duality, Quantum Mechanics in One Dimension
Quantum Mechanical Tunneling, Photoelectric effect, Compton effect, production and properties of X-rays, diffraction of X-rays, concept of matter waves, deBroglie relationship, The concept of a wave function, time independent Schrodinger equation and interpretation of the equation, solving the Schrodinger equation for a free particle, Concept of tunneling, reflection and transmission of wave functions from barriers The Hydrogen atom, orbitals, angular momentum and its quantization, orbital magnetism, Zeeman effect, concept of spin, Pauli’s exclusion principle, Building of the periodic table, Quantum Mechanics in Three Dimensions: , From Atoms to Molecules and Solids: Ionic bonds, covalent bonds, hydrogen bonds, Nuclear Structure: Size and structure of nucleus, nuclear forces,
PH-204 Classical Mechanics
Review of Newtonian Mechanics: Frame of reference, orthogonal transformations, angular velocity and angular acceleration, Newton’s laws of motion, Galilean transformation, conservation laws, The Motion of Rigid Bodies: The Euler angles, rotational kinetic energy and angular momentum, the inertia tensor, Euler equations of motion, motion of a torque-free symmetrical top, Central Force Motion: The two-body problem, effective potential and classification of orbits, Kepler’s laws, Motion in Non-inertial Systems: Accelerated translational co-ordinate system, dynamics in rotating co-ordinate system, The Lagrange Formulation of Mechanics and Hamilton Dynamics: Generalized co-ordinates and constraints, D’Alembert’s principle and Lagrange’s Equations, Hamilton’s principle, integrals of motion, nonconservative system and generalized potential
PH-205 Electrodynamics
The Dirac Delta Function: Review of vector calculus using example of Dirac Delta function, Electrostatics: The electric field: introduction, Coulomb’s law, the electric field, continuous charge distributions. Divergence and curl of electrostatic fields: field lines, flux and Gauss’s law, the divergence of E, applications of Gauss’s law, the curl of E. Electric potential: introduction to potential, comments on potential, Poisson’s equation and Laplace’s equation. The Method of Images, Multi-pole Expansion: Polarization: dielectrics, induced dipoles, alignment of polar molecules, polarization, Magnetostatics: The Lorentz Force law: magnetic fields, magnetic forces, currents. The Biot-Savart Law: steady currents, the magnetic field of a steady current. Magnetic Fields in Matter: Magnetization, diamagnets, paramagnets, ferromagnets, torques and forces on magnetic dipoles, effect of a magnetic field on atomic orbits, magnetization.
PH-301 Methods of Mathematical Physics
Review of vector analysis: definitions, Differential operators, gradient, divergence, curl, integration of vector fields, Gauss' theorem, Stokes' theorem, Gauss' law, Poisson's equation
Vector analysis in curvilinear coordinates, orthogonal coordinates Determinants, matrices, orthogonal and unitary matrices, matrix diagonalization Finite and infinite sequences, limit of a sequence Fourier series and analysis, use and application to physical systems Complex algebra, functions of a complex variable, Cauchy-Riemann conditions, integration of complex
PH-302 Physical Electronics
The crystal lattice, basic quantum mechanics, energy bands, elemental semiconductors, compound semiconductors, alloys, semiconductors electrons, holes, density-of-states, effective mass, carrier concentration, doping, recombination, the Fermi energy, quasi-Fermi energies, mobility, conductivity, Hall effect, optical properties of semiconductors, carrier drift and diffusion. Diodes (pn junction, Schottky, LED’s, laser diodes, solar cells and photodiodes), bipolar transistors, field effect transistors: JFET’s, MESFETs, MODFETs and MOSFET’s.
PH-303 Quantum Mechanics I
Historical motivation: wave-particle duality, photo-electric effect, instability of atoms, black body catastrophe. Observables and operators, postulates of mechanics, measurement problems, the state function and expectation values, Schrödinger wave equation, Time-independent Schrödinger equation and one-dimensional problems, stationary states, superposition principle, free particles, infinite and finite square well, harmonic oscillator, and delta-function potential. Hilbert space, Dirac notation, linear transformations, discrete and continuous basis vectors, hermitian and unitary operators, Waves incident on potential barrier, reflection and transmission coefficients, WKB method. Quantum mechanics in three-dimensions, cartesian and spherical forms of Schrodinger equation, separation of variables, Rotational symmetry, angular momentum as a generator of rotations, spherical harmonics and their properties. Completeness and orthonormality properties.
PH-304 Circuit Electronics
Ohm’s law, Kirchoff’s voltage and current laws, the superposition principle, Source transformation, maximum power transfer theorem, Thevenin-Norton equivalent circuits, linear system analysis basics. Introduction to semiconductors, intrinsic and extrinsic semiconductors, Ideal diodes, terminal characteristics of junction diodes, Basic principles of pn junctions, built-in potential, Bipolar Junction Transistors (BJT),, Basic operational amplifiers inverting and non-inverting, differential modes, gain and bandwidth, frequency response Principles of feedback
PH-305 Electromagnetic and Relativity Theory
Electrodynamics: Electromotive force: Ohm’s law, electromotive force, motional emf, electromagnetic induction: Faraday’s law, Conservation Laws: Charge and energy: the continuity equation, Poynting’s theorem, momentum: Newton’s third law in electrodynamics, Electromagnetic Waves: Waves in one dimension: the wave equation, sinusoidal waves, boundary conditions, reflection and transmission, polarization Potentials and Fields: The potential formulation: scalar and vector potentials, gauge transformations, Coulomb gauge and Lorentz gauge, Radiation, Dipole Radiation: What is radiation, electric dipole radiation, magnetic dipole radiation, radiation from an arbitrary source, Electrodynamics and Relativity: The special theory of relativity: Einstein’s postulates, the geometry of relativity, the Lorentz transformations
PH-307 Methods of Mathematical and Computational Physics
Vector spaces, basis vectors, linear independence, function spaces. Review of differentiation and integration, continuity and differentiability, firstorder differential equations, general solution by integration, uniqueness property. Second order differential equations with constant coefficients, Euler linear equations, singular points, series solution by Frobenius' method, Second order linear partial differential equations, Laplace equation, wave equation, solution of Poisson equation, Definition of probability, simple properties, random variables, binomial distribution, Poisson and Gaussian distributions, central limit theorem, statistics.
PH-308 Quantum Mechanics II
Motion of a particle in a central potential. Separation of variables, effective potential, solution for the Coulomb problem.Spin as an internal degree of freedom, intrinsic magnetic moment, Identical particles: Many-particle systems, system of distinguishable noninteracting particles, systems of identical particles, Scattering: Classical scattering theory, The variational principle: Variational theorem, variational approximation method, the ground state of helium atom.The WKB approximation: WKB wave functions, Time-dependent perturbation theory, Time-independent perturbation theory: Nondegenerate perturbation theory, degenerate perturbation theory.
PH-309 Solid State Physics I
Crystal Structure: Lattices and basis, Symmetry operations, Fundamental Types of Lattice, Position and Orientation of Planes in Crystals, Simple crystal structures, Crystal Diffraction and Reciprocal Lattice: Diffraction of X-rays, Neutrons and electrons from crystals; Bragg’s law; Reciprocal lattice, Ewald construction and Brillouin zone, Fourier Analysis of the Basis., Phonons and Lattice, Thermal Properties of Solids: , Electrical Properties of Metals: Classical free electron theory of metals, energy levels and density of orbital’s in one dimension, effect of temperature on the Fermi–Dirac distribution function, properties of the free electron gas, electrical conductivity and Ohm’s Law,
PH-310 Atomic and Molecular Physics
One Electron Atoms: Review of Bohr Model of Hydrogen Atom, Reduced Mass, Atomic Units and Wavenumbers, Energy Levels and Spectra, Schrodinger Equation for One-Electron Atoms, Quantum Angular Momentum and Spherical Harmonics, Electron Spin, Spin-Orbit interaction. Levels and Spectroscopic Notation, Lamb Shift, Hyperfine Structure and Isotopic Shifts. Rydberg Atoms. Interaction of One-Electron Atoms with Electromagnetic Radiation: Radiative Transition Rates, Dipole Approximation, Einstein Coefficients, Selection Rules, Dipole Allowed and Forbidden Transitions. Metastable Levels, Line Intensities and Lifetimes of Excited States, Shape and Width of Spectral Lines, Scattering of Radiation by Atomic Systems, Zeeman Effect, Linear and Quadratic Stark Effect. Many-Electron Atoms: Schrodinger Equation for Two-Electron Atoms, Para and Ortho States, Pauli’s Principle and Periodic Table, Coupling of Angular Momenta, L-S and J-J Coupling. Ground State and Excited States of Multi-Electron Atoms, Configurations and Terms. Molecular Structure and Spectra: Structure of Molecules, Covalent and Ionic Bonds, Electronic Structure of Diatomic Molecules, Rotation and Vibration of Diatomic Molecules, Born-Oppenheimer Approximation. Electronic Spectra, Transition Probabilities and Selection Rules, Frank- Condon Principle, H2+ and H2. Effects of Symmetry and Exchange. Bonding and Anti-bonding Orbitals. Electronic Spin and Hund’s Cases, Nuclear Motion: Rotation and Vibrational Spectra (Rigid Rotation, Harmonic Vibrations). Selection Rules. Spectra of Triatomic and Polyatomic Molecules, Raman Spectroscopy, Mossbauer Spectroscopy
PH-311 Nuclear Physics
History: Starting from Bacqurel’s discovery of radioactivity to Chedwick’s neutron.
Basic Properties of Nucleus: Nuclear size, mass, binding energy, nuclear spin, magnetic dipole and electric quadrupole moment, parity and statistics. Nuclear Forces: Yukawa's theory of nuclear forces. Nucleon scattering, charge independence and spin dependence of nuclear force, isotopic spin. Nuclear Models: Liquid drop model, Fermi gas model, Shell model, Collective model.
Theories of Radioactive Decay: Theory of Alpha decay and explanation of observed phenomena, measurement of Beta ray energies, the magnetic lens spectrometer, Fermi theory of Beta decay, Neutrino hypothesis, theory of Gamma decay, multipolarity of Gamma rays, Nuclear isomerism.
Nuclear Reactions: Conservation laws of nuclear reactions, Q-value and threshold energy of nuclear reaction, energy level and level width, cross sections for nuclear reactions, compound nucleolus theory of nuclear reaction and its limitations, direct reaction, resonance reactions, Breit-Wigner one level formula including the effect of angular momentum
PH-401 Statistical Mechanics
Review of Classical Thermodynamics: States, macroscopic vs. microscopic, "heat" and "work", energy, entropy, equilibrium, laws of thermodynamics, Equations of state, thermodynamic potentials, temperature, pressure, chemical potential, thermodynamic processes (engines, refrigerators), Maxwell relations, phase equilibria. Foundation of Statistical Mechanics: Phase Space, Trajectories in Phase Space, Conserved Quantities and Accessible Phase Space, Macroscopic Measurements and Time Averages, Ensembles and Averages over Phase Space, Liouville's Theorem, and examples (e.g. adsorption), calculation of partition function and thermodynamic quantities. Simple Applications of Ensemble Theory: Monoatomic ideal gas in classical and quantum limit, Gibb’s paradox and quantum mechanical enumeration of states, equipartition theorem and examples (ideal gas, harmonic oscillator), specific heat of solids, quantum mechanical calculation of para-magnetism, Quantum Statistics.
PH-402 Solid State Physics II
Dielectric Properties of Solids: Polarization, Depolarization, Local and Maxwell field, Lorentz field, Clausius-Mossotti relation, Dielectric Constant and Polarizability, Masurement of dielectric constant, ferro electricity and ferroelectric crystals, Phase Transitions, First and 2nd order phase transitions, Applications Semiconductors: General properties of semiconductors, intrinsic and extrinsic semiconductors, their band structure, carrier statistics in thermal equilibrium, band level treatment of conduction in semiconductors and junction diodes, diffusion and drift currents, collisions and recombination times Optical Properties: Interaction of light with solids, Optical Properties of Metals and Non-Metals, Kramers Kronnig Relation, Excitons, Raman Effect in crystals, optical spectroscopy of solids. Magnetic Properties of Materials: Magnetic dipole moment and susceptibility, different kinds of magnetic materials, Langevin diamagnetic equation, Paramagnetic equation and Curie law, Classical and quantum approaches to paramagnetic materials. Ferro-magnetic and anti – ferromagnetic order, Curie point and exchange integral, Effect of temperature on different kinds of magnetic materials and applications. Superconductivity: Introduction to superconductivity, Zero-Resistance and Meissner Effect.
PH-403 Digital Electronics
Review of Number Systems: Binary, Octal and Hexadecimal number system, their inter-conversion, concepts of logic, truth table, basic logic gates. Boolean Algebra: De Morgan’s theorem, simplification of Boolean expression by Boolean Postulates and theorem, K-maps and their uses. Don’t care condition, Different codes. (BCD, ASCII, Gray etc.). Parity in Codes. IC Logic Families: Basic characteristics of a logic family. (Fan in/out, Propagation delay time, dissipation, noise margins etc. Different logic based IC families (DTL, RTL, ECL, TTL, CMOS). Combinational Logic Circuit: Logic circuits based on AND – OR, OR-AND, NAND, NOR Logic, gate design, addition, subtraction (2’s compliments, half adder, full adder, half subtractor, full subtractor encoder, decoder, PLA. Exclusive OR gate. Sequential Logic Circuit: Flip-flops clocked RS-FF, D-FF, T-FF, JK-FF, Shift Register, Counters (Ring, Ripple, up-down, Synchronous) A/D and D/A Converters. Memory Devices: ROM, PROM, EAPROM, EE PROM, RAM, (Static and dynamic) Memory mapping techniques Micro Computers: Computers and its types, all generation of computers,
PH-404 Computational Physics
Computer Languages: A brief introduction of the computer languages like Basic, C. Pascal etc. and known software packages of computation Numerical Methods: Numerical Solutions of equations, Regression and interpolation, Numerical integration and differentiation. Error analysis and technique for elimination of systematic and random errors Modeling & Simulations: Conceptual models, the mathematical models, Random numbers and random walk, doing Physics with random numbers, Computer simulation, Relationship of modeling and simulation. Some systems of interest for physicists such as Motion of Falling objects, Kepler's problems, Oscillatory motion, Many particle systems, Dynamic systems, Wave phenomena, Field of static charges and current, Diffusion, Populations genetics etc
PH-405 Introduction to Photonics
Guided Wave Optics: Planar slab waveguides, Rectangular channel waveguides, Single and multi-mode optical fibers, waveguide modes and field distributions, waveguide dispersion, pulse propagation Gaussian Beam Propagation: ABCD matrices for transformation of Gaussian beams, applications to simple resonators Electromagnetic Propagation in Anisotropic Media: Reflection and transmission at anisotropic interfaces, Jones Calculus, retardation plates, polarizers Electro-optics and Acousto-optics: Linear electro-optic effect, Longitudinal and transverse modulators, amplitude and phase modulation, Mach-Zehnder modulators, Coupled mode theory, Optical coupling between waveguides, Directional couplers, Photoelastic effect, Acousto-optic interaction and Bragg diffraction, Acousto-optic modulators, deflectors and scanners Optoelectronics: p-n junctions, semiconductor devices: laser amplifiers, injection lasers, photoconductors, photodiodes, photodetector noise.
PH-411 Introduction to Nanomaterials
Introduction to nanomaterials is an introductory course to the students intending to do specialization in nanoscience and nanotechnology. The course includes the brief introduction of nanomaterials, the properties of nanomaterials and their comparison to the bulk materials. The synthesis of nanoparticles of different dimensionalities will be thoroughly discussed. The last section includes the applications of nanomaterials and the safety measurements against toxicity of materials. An introduction to nanoscience and nanotechnology: Historical perspective, physical properties of bulk and nano-sized nanostrucutres, surface energy, nucleation and growth of nanostrucutres, stabilization of nanoparticles, synthesis methods for zero, one and two dimensional nanostructures, discussion of methods, superlattices, self-assembly, Thiol-derivatised monolayer, monolayers of acids, amines and alcohols, Langmuir-Blodgett films, electrochemical deposition lithography techniques, top-down and bottom-up approaches, physical vapor deposition, chemical vapor deposition, sputtering, applications of nanoparticles, material safety and application
PH-412 Electronics Materials and Devices
Semiconductor Fundamentals: Composition, purity and structure of semiconductors, energy band model, band gap and materials classification, charge, effective mass and carrier numbers, density of states, the Fermi function and equilibrium distribution of carriers, doping, n and p-type semiconductors and calculations involving carrier concentrations, EF etc., temperature dependence of carrier concentrations, drift current, mobility, resistivity and band bending, diffusion and total currents, diffusion coefficients, recombination-generation, minority carrier life times and continuity equations with problem solving examples. Device Fabrication Processes: Oxidation, diffusion, ion implantation, lithography, thin-film deposition techniques like evaporation, sputtering, chemical vapour deposition (CVD), epitaxy etc. PN Junction and Bipolar Junction Transistor: Junction terminology, Poisson’s equation, qualitative solution, the depletion approximation, quantitative electrostatic relationships, ideal diode equation, non-idealities, BJT fundamentals, Junction field effect transistor, MOS fundamentals, the essentials of MOSFETs. Dielectric Materials: Polarization mechanisms, dielectric constant and dielectric loss, capacitor dielectric materials, piezoelectricity, ferroelectricity and pyroelectricity
PH-413 Smart Nanomaterials
Brief introduction of nanoparticles, its scope , magnetic nanoparticles inside and everywhere around , most extensively studied magnetic nanoparticles and their preparation, metals, nanoparticles of rare earth metals, oxidation of metallic nanoparticles, magnetic alloys , Fe–Co alloys, magnetic oxides, magnetic moments and their interactions with magnetic fields. Bohr magneton, spin and orbital magnetic moments, magnetic dipole moments in an external magnetic field, the spontaneous magnetization, anisotropy, domains, the spontaneous magnetization, temperature dependence of the magnetization in the molecular field approximation, Curie temperature in the Weiss Heisenberg model curie temperature in the stoner model, the meaning of exchange in the Weiss Heisenberg and stoner models, thermal excitations: spin waves, the magnetic anisotropy, the shape anisotropy ,the magneto-crystalline anisotropy. Magnetic microstructure: magnetic domains and domain walls, ferromagnetic domains, antiferromagnetic domains, magnetization curves and hysteresis loops
PH-414 Surfaces and Interfaces
The brief introduction of structure of surfaces, defects, interaction of defects and their observation, electronic states, charge distribution at surfaces, elasticity theory of surface defects, thermodynamics of flat and curved surfaces, statistical theromodynamics i.e. the free energy, vapor pressure of solid surfaces, adsorption of molecules and ions, desorption, chemical bonding, surface phonons, adsorbate modes, inelastic scattering of atoms and electrons, optical techniques for scattering observations electronic, optical and magnetic properties of surfaces and the diffusion phenomenon.
PH-415 Characterization of Materials
Overview of characterization techniques, light microscopy, Scanning Electron Microscopy (SEM), Scanning Tunneling Microscopy (STM), Particle Size Analyzer, Transmission Electron Microscopy (TEM) , Scanning Force Microscopy (SFM), Energy-Dispersive X-Ray Spectroscopy (EDS), Electron Energy-Loss Spectroscopy in the Transmission Electron Microscope, Scanning Transmission Electron Microscopy (STEM), XRD. Experimental methods for structure determination-X-rays, properties of X-rays, diffraction of X-rays, experimental methods and crystal determination techniques, X-Ray Photoelectron Spectroscopy (XPS), Photoluminescence (PL) and Fourier Transform Infrared Spectroscopy (FTIR), Raman Spectroscopy, Solid State Nuclear Magnetic Resonance (NMR) and Hall Effects (electrical properties measurements).
PH-416 Functional properties of materials
Overview of quantum mechanics, electrons in a crystal field, Electrical properties: band theory of metals and semiconductors, Fermi energy, density of states, effective mass, conductivity of electrons in metals and semiconductors – classical and quantum mechanical treatment, conduction in polymers, metal oxides, dielectric properties, ferrroelectricity, piezoelectricity, Electronic properties: free electrons with and without damping, reflectivity, Lorentz equations, Harmonic oscillators, optical spectra of materials conduction and dispersion, Magnetic properties: Curie law, Langevin theory of para- and dia-magnetism, molecular field theory, Heisenberg exchange interaction, Weiss field, point-charge approximation, crystal fields, field induced and 4f electron anisotropy, Magnetic properties: Origin of atomic moments, paramagnetism of free ions, Brillouin function, Curie law, Langevin theory of para- and dia-magnetism, molecular field theory, Heisenberg exchange interaction, Weiss field, point-charge approximation, crystal fields, field induced and 4f electron anisotropy, Caloric effects, magnetic anisotropy permanent magnets, domain walls, coercivity, hysteresis loop, exchange coupling in rare-earth magnets, hard ferrites, soft magnetic materials, random-anisotropy model, soft magnetism and grain size, Heat capacity, classical theory, Debye model, Einstein model, electronic contribution, thermal conduction in metals and alloys (classical and quantum consideration), thermal conduction of dielectrics, electrical, optical and magnetic properties in nano regime
PH-416 Functional properties of materials
Quarks and leptons, Yukawa and electromagnetic interactions, weak, strong and gravitational interactions, current conservation in the Maxwell’s equations, Lorentz and gauge invariance in electromagnetism, the Klein-Gordon equation, the Dirac equation, Lorentz transformation of spinors, solutions of the Dirac equation, electromagnetic interactions via gauge principle, the quantum field, Lagrangian and Hamiltonian formalism, relativity, mass and four dimensions, qualitative introduction to interactions, the interaction picture and S-matrix, the decay and scattering amplitude, the Yukawa exchange, the complex scalar field, the Dirac field and the spin statistics, Coulomb scattering of spin 0 and spin 1/2 particles, spin 0 and spin 1/2 scattering, electron-pion scatterings crossing symmetry, Compton scattering, electron-muon scattering, electron-proton elastic and inelastic scattering, the parton model, the quark parton model, the Drell-Yan process, electron-positron annihilation into hadrons.
PH-432 Plasma Physics
Introduction to plasmas, how plasmas are produced, Debye length, plasma frequency, number of electrons in a Debye sphere, the de-Broglie wavelength and quantum effects, representative plasma parameters. Motion of a charged particle in a static uniform magnetic field and in the presence of perpendicular electric and magnetic fields, gravitational drift, gradient and curvature drifts. Motion in a magnetic mirror field, drift-motion in a time varying electric and magnetic fields, adiabatic invariants, conservation of J in time independent fields, the Hamiltonian method and chaotic orbits. Fluid equations for a plasma, continuity equation, momentum balance equation, equation of state, and two-fluid equations. Waves in cold plasma, Fourier representation of waves, plasma oscillations, electron and ion waves, sound waves, electrostatic ion waves perpendicular to magnetic field, lower-hybrid frequency. Electromagnetic waves for unmagnetized and magnetized plasmas, Alfven waves, magnetosonic waves, and ray paths in inhomogeneous plasmas. Introduction to controlled fusion: Basic nuclear fusion reactions, reaction rates and power rates and power density, radiation losses from plasmas, operational conditions.
PH-433 Group Theory
Correspondences and transformations, groups, definitions and examples, subgroups, Cayley's theorem, Cosets, Lagrange's theorem, conjugate classes, invariant subgroups, factor groups, homomorphism, direct products, quick review of linear vector spaces, group representations, equivalent representations - characters, construction of representations, invariance of functions and operators, operators, unitary representations, Hilbert space Reducibity/irreducibility of a representation, Schur's Lemmas, Lie groups, isomorphism, subgroups, mixed continuous groups, one parameter group, structure constants, Lie algebras, compact semisimple Lie groups, linear representations, invariant integration, irreducible representations, the Casimir operator, universal covering group, systems of identical particles and SU(n), angular momentum analysis, the Pauli principle, seniority in atomic spectra, atomic spectra in jj-coupling, isotopic spin, nuclear spectra in L-S coupling, the L-S and jj-coupling shell model.
PH-434 Lasers and Quantum Optics
Review of quantum mechanics, Dirac’s notation, Pauli spin matrices, electromagnetic waves and photons, wavelength and frequencies of electromagnetic radiation. Spontaneous and stimulated emission, absorption. Maser principle, cavity, gain medium, population inversion, Boltzmann statistics, threshold condition. Three-level laser, properties of a laser beams, black-body radiation theory. Modes of a rectangular cavity, Raleigh-Jeans and Planck radiation formula. Semi-classical treatment of the interaction of radiation and matter.. Diffraction optics in paraxial approximation. Passive optical resonators, plane-parallel (Fabry-Perot) resonator, concentric, confocal, generalized spherical and ring resonator. Eigen-modes and Eigen-values. Stability condition, unstable resonator, photon lifetime and cavity Q. Q-switching, electro-optical, and acousto-optic Q-switches, saturable absorber Q-switch. Theory of mode-locking, active and passive mode-locking. Laser excitation techniques, optical, electrical, and chemical pumping, laser pumping, excitation transfer, meta-stable states and lifetimes. Types of lasers, solid-state, dye and semiconductor lasers, gas, chemical, free electron, and X-ray lasers, laser applications.
PH-435 Introduction to Quantum Computation
Computer technology and historical background, Basic principles and postulates of quantum mechanics: Quantum states, evolution, quantum measurement, superposition, quantization from bits to qubits, operator function, density matrix, Schrodinger equation, Schmidt decomposition, EPR and Bell’s inequality, Quantum Computation: Quantum Circuits, Single qubit operation, Controlled operations, Measurement, Universal quantum gates, Single qubit and CNOT gates, Breaking unbreakable codes: Code making, Trapdoor function, One time pad, RSA cryptography, Code breaking on classical and quantum computers, Schor’s algorithm, Quantum Cryptography: Uncertainty principle, Polarization and Spin basis, BB84, BB90, and Ekert protocols, Quantum cryptography with and without eavesdropping, Experimental realization, Quantum Search Algorithm.
PH-436 Quantum Information Theory
Review of Quantum Mechanics and overview of Quantum information: Postulates of quantum mechanics, quantum states and observables, Dirac notation, projective measurements, density operator, pure and mixed states, entanglement, tensor products, no-cloning theorem, mixed states from pure states in a larger Hilbert space, Schmidt decomposition, generalized measurements, (CP maps, POVMs), qualitative overview of Quantum Information. Quantum Communication: Dense coding, teleportation, entanglement swapping, instantaneous transfer of information, quantum key distribution. Entanglement and its (search algorithm), modeling quantum measurements, Bekenstein bound, quantum error correction (general conditions, stabilizer codes, 3-qubit codes, relationship with Maxwell’s demon), fault tolerant quantum computation (overview). Physical Protocols for Quantum Information and Computation: Ion trap, optical lattices, NMR, quantum optics, cavity QED.
PH-601 Methods of Mathematical Physics
Second Order Differential Equations: Partial differential equations, Series solutions, a second solution, non-homogeneous equations, Green function. Sturm Liouville Theory: Self – Adjoint ODE’s, Hermitian Operators, Gram-Schmidt Orthoganalization. Laplace transforms and inverse Laplace transforms, Laplace transform of periodic functions. The convolution integral. Bessel Function: Bessel functions of first kind, Bessel function of 2nd kind, Neumann functions, Hankel functions. Legendre Functions: Generating function, recurrence relations, orthogonal, associated Legendre function, spherical Harmonics, applications to spheroidal coordinate system Special Functions: Hermite Functions, Laguerre Functions, Chebyshev polynomials, hypergeometric functions. Fourier Transforms: integral Transform Methods. Integral Equations: Integral equations integral transforms. Generating functions, Neumann series, Degenerate kernels, Hilbert-Schmidt theory. Nonlinear Differential Equations and its Solutions: Classification of nonlinear differential equation and its solutions.
PH-602 Electrodynamics
Maxwell equations and Maxell’s displacement current, vector and scalar potential, Gauge Transforms, Lorentz and Coulomb gauge. Green’s function for conducting and non-conducting sphere, Greens function for wave equation, Retarded solutions for the fields, one dimensional Green’s function, two and three dimensional Green’s functions, Dirac Delta function, properties and uses. Poynting’s theorem and conservation laws, Poynting theorem in linear and dispersive medium, solution for harmonic fields, transformation properties of electromagnetic fields and sources—under rotation. Plane wave in a non-conducting medium, at the surface of and within a conductor, cylindrical cavities and wave guides, modes in a rectangular waveguides, energy flow and attenuation in waveguides. Power losses in a cavity and Q of a cavity, Schulman resonances, multimode propagation in optical fibers. Modes in a planer slab dielectric waveguides, modes in circular fibres, Fields in a hollow metallic wave guide.
PH-603 Material Science
Why Study Materials Science and Engineering? Classification of Materials (metals, ceramics, polymers, composites)). Properties (Mechanical, electrical and magnetic properties). Equilibrium and Kinetics (stable, unstable and metastable equilibrium). Review of thermodynamics terms (temperature, pressure, internal energy, enthalpy, etc.). Atomic Structure. Atomic Bonding in Solids. Bonding Forces and Energies. Primary Interatomic Bonds (ionic, covalent, metallic bonding).. Concept of diffraction in a periodic lattice. Structural information from x-ray diffraction and other diffraction techniques. Crystal structures of metals and ceramic materials. Point Defects. Vacancies and Self-Interstitials.. Diffusion Mechanism. Steady-State Diffusion Nonsteady-State Diffusion, Equilibrium diagrams having intermediate phases or compounds. Phase transformation: Basic concepts, Kinetics of phase transformations, Metastable versus stable transformations, Isothermal transformation diagrams, Continuous cooling transformation diagram.
PH-604 Advanced Quantum Mechanics - I
Time evolution and Schrödinger equation, the Schrödinger versus the Heisenberg picture, interaction picture. Symmetries, conservation laws and degenerates. Discrete symmetries, Parity or space inversion, Lattice Translation as discrete symmetries Classical radiation field, Creation, annihilation and number operators, Quantization of radiation field. Relativistic Quantum Mechanics of Spin 1/2 particles, probability conservation in Relativistic quantum, the Dirac equation, Simple solutions, non-relativistic approximations, plane wave solutions Relativistic invariance of Dirac equation transformation properties of Dirac bilinear, adjoint Dirac equation, equation of continuity, constant of motion The Klein- Gordon Equation, Derivation and Covariance, Klein's Paradox and Zitterbewegung.
PH-605 Advanced Quantum Mechanics - II
Quantum mechanics of continuous systems, discretization, infinite matrices, calculation of matrix elements between states characterized by continuous variables. Concept of classical paths, principle of least action, introduction to path integrals, propagator, simple harmonic oscillator in path integral representation. Adiabatic processes, Berry phase in atomic and molecular physics, quantum Hall effect, coherent states. Multiple vacua, tunneling phenomena, Supersymmetric quantum mechanics. Superconductivity and superfluidity: Meisner effect, Landau-Ginsburg theory, Cooper pairs. Basics of many body theory, particles and holes, RPA, Feynman diagrams for non-relativistic systems. Quantum theory of measurement, EPR paradox, Bell’s theorem, quantum logic, quantum computation.
PH-606 Statistical Physics
Intensive and extensive quantities, thermodynamic variables, thermodynamic limit, thermodynamic transformations. Classical ideal gas, first law of thermodynamics, application to magnetic systems, heat and entropy, Carnot cycle. Second law of thermodynamics, absolute temperature, temperature as integrating factor, entropy of ideal gas. Conditions for equilibrium, equation of state, Fermi gas at low temperatures, application to electrons in solids and white dwarfs. The Bose gas: photons, phonons, Debye specific heat, Bose-Einstein condensation, equation of state, liquid helium. Canonical and grand canonical ensembles, partition function, connection with thermodynamics, fluctuations. minimization of free energy, photon fluctuations, pair creation. The order parameter, Broken symmetry, Ising spin model, Ginsburg – Landau theory, mean-field theory, critical exponents, fluctuation-dissipation theorem, correlation length, universality
PH-710 Physics and Chemistry of Nanomaterials
When does size matter? Scales of Various Systems, Chemistry: atoms, molecules, clusters, Top Down approach, Bottom up approach, Chemical Approaches: Wet Chemical Synthesis of Nanomaterials, Sol gel process with examples.Gas phase synthesis of nanomaterials; Chemical vapor deposition (CVD), Furnace assisted synthesis, Gas Condensation Processing,Sputtered Plasma Processing: Microwave Plasma Processing, Particle precipitation aided Chemical Properties: reactivity and catalytic activity. Electronic and Optical properties: particle in a box, quantum-size-effect (QSE), quantum dots (Q-particles), quantum structures, and artificial atoms. Electrical Properties: size induced metal-insulator-transition (SIMIT), clusters of metals and semiconductors, and one-dimensional conductive nanowires. Mechanical Properties: nanostructured beams, and nanocomposites. Magnetic Properties: nano-scale magnets, transparent magnetic materials, and ultrahigh-density magnetic recording materials.
PH-711 Condensed Matter Physics
Band theory and electron correlations: Single electron in a periodic potential, many electrons in a periodic potential, Hartree-Fock-LDA and beyond. Fermi liquid theory and elementary excitations: Quasiparticles and Landau parameters, thermodynamics of a Fermi liquid. Second quantization: Second quantization for fermions and bosons, Quadratic Hamiltonians and canonical transformations. Quantization of lattice vibrations. Green’s functions: Green’s function and response functions, Dyson and Bethe-Salpeter equations, perturbation methods and Feynman diagrams, zero temperature versus finite temperature formulation. Fermi liquid theory: microscopic formulation: Landau quasiparticles as poles of Green’s function, Landau parameters, conservation law and Ward identities. Quantum magnetism: Spin waves, spin path integral, quantum non-linear sigma model. Modern applications: Kondo effect, quantum phase transitions, non-Fermi liquid.
PH-712 Thermodynamics of Materials
Concepts of Helmholtz free energy and Gibbs free energy. Energy-property relationships, thermal equilibrium and chemical equilibrium. Gibbs-Helmholtz relationships. Equilibrium constant and its variation with temperature, vant Hoff’s equation. Clapeyron equation. Fugacity and chemical activity. ideal and regular solution models. Thermodynamics of solutions, Gibbs- Duhem relationship. Homogeneous and heterogeneous nucleation. The effect of temperature and pressure on phase transformation. Mixing functions. Excess functions. Thermodynamic properties and equilibrium phase diagrams. Phase Rule, Gibbs free energy and entropy calculations. Typical equilibrium Phase diagrams. Statistical mechanics/models in thermodynamics.
PH-720 Renewable Energy Sources
Introduction, importance of energy, world energy demand. Conventional energy sources, renewable sources; potential, availability and present status of renewable sources. Solar energy, physical principle of conversion of the solar radiation into heat, flat-plate collectors., biogas generation, classification of biogas plants. Geothermal sources, hydro-thermal geo- pressure, petro- thermal and magma resources, advantages and limitation of geo- thermal energy. Introduction, global generating on growth rate, prospects of nuclear fusion, safety and health hazards issues, global resources and their assessment. Classification, micro, mini, small and large resources. Principles of energy conversions, turbines, working and efficiency of from to small power systems, environmental impacts.
PH-721 Physics of Solar Cells
An introduction to solar energy, direct and in direct sources of solar energy. Review of semiconductor properties, materials and structural characteristics effecting cell performance. Short-circuit current limit, open-circuit voltage limits, effects of temperatures, short-circuit current losses, open-circuit voltage losses, fill factor losses, efficiency measurement.. Contribution to saturation current density, top-contact design, optical design, antireflection coating, textured surfaces, spectral response, silicon single crystal wafers for solar cells and modules, module construction, cell operating temperatures, module durability and circuit design. Advance materials for solar cell, pre and post surface modification of solar cells, polishing and chemical etching of basic photovoltaic materials. Annealing in various environments, ion-implantation, energy storage, power control and system sizing. Uses of solar cells in water pumping and residential systems, central power plants for space applications.
PH-730 Computational Physics
Introduction to symbolic computing (Matlab, Mathematica and Simulink), introduction to computers, errors estimation, methods for roots of nonlinear equations, linear system simulations (Gauss-elimination, Jacobi method, Gauss-Seidel method, LU decomposition), Eigen-value problems; Linear and nonlinear regressions, computational integration and differentiation, Ordinary Differential Equations (Euler method, Improved Euler method, KR-methods), Multi-step methods; Partial differential equations, introduction to Monte Carlo methods, Genetic Algorithms.
PH-731 Mathematical Modeling & Simulation
Introduction to mathematical modeling, fundamentals of simulation, Introduction to Matlab and Simulink, block model development in Simulink, first order models (examples from fluids, biophysics, physics, electrical systems and mechanical systems), second order systems and models (example on homogeneous and non-homogeneous linear systems coupled or simultaneous systems (examples from fluids and population, electrical and mechanical systems), nonlinear systems and simulation methods; stochastic models and simulation methods (discrete and continuous systems), probability density functions and sampling methods, random walks, introduction to MC techniques.
PH-760 Semiconductor Theory
Crystal Structure, Atomic Bonding, Intrinsic and Extrinsic Semiconductors, Energy Bands, Density of States, Nearly Free Electron Model, Kronig-Penny Model, Energy Bands for Intrinsic and Extrinsic Semiconductors Fermi-Dirac Statistics, Carrier Concentrations in Thermal Equilibrium in Intrinsic Semiconductors and Semiconductors with Impurity Levels. Thermoelectric and Thermomagnetic Effects, Quantum Transport. Diffusion processes, Diffusion and Drift of Carriers, The Continuity Equation, Direct and Indirect Recombination of Electrons and Holes, Steady State Carrier Injection, Optical Absorption, Interband Transitions, Photoconductivity, Luminescence. Ohmic, Blocking and Neutral Metal-Semiconductor Contacts, PN-Junction under Equilibrium Conditions, Forward and Reverse-Biased Junctions, Reverse-Bias Breakdown, Deviations from the Simple Theory.
PH-761 Physics of magnetic materials
Magnetism & various magnetic materials with their applications, classical and quantum phenomenology of magnetism. orbital motion of a single electron, spin states of a single electron, states of isolated ions, ions in magnetic fields, spectroscopic investigations Quantum Mechanics, Magnetism and Bonding in Metals. Spontaneous magnetic order, ferromagnetisms in elements, ferromagnetism in alloys, ferromagnetism in non-metallic compounds, ferromagnetism & anti-ferromagnetism, linear and helical magnetism. magnetocrystalline anisotropy, shape anisotropy and stress anisotropy diamagnetism of isolated atoms and ions, diamagnetism of crystalline solids, diamagnetic resonance or cyclotron resonance, the main classes of paramagnetic solids, paramagnetism due to ions of rare-earth and transition elements, paramagnetism of metals, free radicals and molecular paramagnetism, paramagnetic relaxation. Soft Magnetic Materials theory and applications. Amorphous Materials: magnetism and disorder. Magnetism in Small Structures exchange coupling and nanocrystals.
PH-762 Experimental Techniques
Characterization of electromagnetic radiation, and its interaction with matter. Diffraction of x-ray and neutrons by crystalline material. Qualitative and quantitative analysis of the diffraction patterns. Energy dispersive and wavelength dispersive analysis, thermal analysis, Differential Calorimetric analysis. Thermal Gravimetric analysis (TGA). Molecular spectroscopy techniques, IR spectroscopy, UV-ViS spectroscopy, Transmission Electron Microscopy (TEM),( FTIR), gamma-ray spectroscopy, Mossbauer spectroscopy, Raman spectroscopy and Atomic Force Microscopy (AFM).Understanding of the data analysis qualitatively and quantitatively. Errors and Data Analysis: Errors of observation: accidental and systematic errors. Errors in compounds quantities, in products, in quotient in sum or difference. Frequency distributions and related terminology, methods of least squares, weighted mean and its standard error, curve fitting and accuracy of co-efficient.
PH-763 Surface Physics
The surface as an especially important object for physical investigation. Influece of the surface on physical properties of objects. Clean and covered surfaces. Adsorbtion and catalysis. What is UHV: Vacuum concepts and UHV hardware. The methods to get clean surfaces. The structure of surfaces. Short overview of modern experimental techniques. Lattice concept. 3 D crystal structures, 2D surface structures. Specific types of surface, fcc, hcp, bcc and stepped surfaces and a discussion of their relative energies. More complex to the theory and practice of SIMS, SIMS imaging and depth profiling, Auger depth profiling, theory and practice of Rutherford. Back scattering. Classification of microscopy techniques, Basic concepts in Surface imaging and localized spectroscopy, Imaging XPS, Optical microscopy, STEM. SEM.SPM. An introduction to the theory and practice of scanning Tunneling Microscopy, Scanning probe microscopy techniques, Atomic Force Microscopy.
PH-764 Optical Properties of Solids
Maxwell equations, dielectric optical response, refractive index and absorption, Lorentz oscillator model, dispersion relations, Lyddane-Sachs-Teller relation, Drude theory and basic plasma opticslight scattering, Raman and Brillouin scattering, coherent Raman spectroscopy. Direct and indirect gap semiconductors, energy and momentum conservation in band-to-band transitions, optical absorption and quantum mechanical time-dependent perturbation theory, dipole-allowed optical transition in the parabolic band approximation, indirect optical transitions, excitons, two-particle Schrodinger equation, selection rules, first-class dipole allowed transitions, second-class dipole allowed transitions, , excitons in quantum wells. Franz-Keldysh effect, DC Stark effect, exciton ionization, quantum-confined dc-Stark effect.Overview of Semiconductor Optical Nonlinearities: Phase-space blocking, screening, bandgap renormalization, thermal nonlinearities, optical Stark effect, two-photon absorption. Basic operation principles of LED's and lasers, doping p-n junctions forward and reverse bias, I-V curves, semiconductor lasers, photodetectors.
PH-765 Conducting polymers
Basics of conducting polymers Synthesis, structures and morphology; Conductivity Properties: Semiconductor models and conductivity mechanisms in conducting polymers; Doping reactions: Composites, copolymers, conductive polymer thin films; Electrochromic and electrochemical properties of conducting polymers; Solubility and processing of conducting polymers; conducting polymer coatings, Characterization methods: Electrical, mechanical and electrochemical characterizations; Application fields of conducting polymers: Sensor applications, photovoltaic applications; supercapacitor applications, recent activities in the field of conducting polymers.
PH-766 Biophysics
Introduction, Chemical bonding, Energies forces and bonds, Energy bands, Thermodynamics and statistical mechanics, Reaction rates, Transport processes, Biological polymers, Biological membranes, Biological energy, Movement of organisms, Excitable membranes, Nerve signals, Memory, Biological motors.
PH-770 Environmental Physics
Principal layers, troposphere, stratosphere, mesosphere, thermosphere, Ideal gas model revisited,exponential variation of pressure with height, Escape velocity, Temperature structure and lapse rate. The Sun as the prime source of energy for the earth, Solar energy input, cycles daily and annual, Spectrum of solar radiation reaching the earth, Total radiation and the Stefan Boltzmann,. Thermodynamics of moist air and cloud formation, Growth of water droplets in clouds, Rain and thunderstorms. Measuring the wind; the Beaufort scale, Origin of winds; the atmosphere as a heat engine, The principal forces acting on an air parcel, Cyclones and anticyclones, Thermal gradients and winds, Global convection and global wind patterns. Design of buildings. Atmospheric pollution; acid rain: Systems approaches to environmental issues, Acid rain as a regional problem. Sound and noise: Definition of the decibel and A-weighted sound levels, Measures of noise levels; effect of noise levels on hearing, Domestic noise; design of partitions.
PH-771 Photovoltaic Technology
Early attempts at solar, declining costs of PV, Definition of Gen I, Gen II, and Gen III PV technologies, Solar resources planet-wide, Applications, Utility scale, "Distributed grid" rooftop applications, Current usage of solar PV. Capacity factor calculations, Comparison of solar PV to other Methods, Daily energy demand variations and peak usage, Energy storage methods and Costs, Differences in economic case for point of use PV versus utility scale power generation. Monocrystalline Si, Polycrystalline Si, Si thin film, CdTe and CIGS, High performance multijunction cells. Cell classification, Front side ribbon soldering, Cell interconnects and "stringing", Electrical circuit assembly, Laminate assembly, CPV. Power output, footprint, and cost: Effects of latitude and climate, Tracking Systems, Balance of system (inverters, mounting racks, installation costs). a-Si, CIGS, CdTe, Exotics. Discrete cell panels; Construction overview, Stringing, Layout, Wiring, Final Test. Thin Film Panels; Construction overview, Advantages over discrete, Fabrication techniques, Test. PQ standards & measurements, Case studies.
PH-772 Solar Thermal Power Technology
Models for radiation analysis and beam radiation calculations, evaluation and estimation of the solar resources. Thermal conversion of solar radiation, the concentration of solar radiation, overview of solar concentrating technology. Parabolic trough, paraboloidic dish: continuous type and Fresnel type. single axis and double axis trackings. Solar Parabolic trough; design considerations, tracking and control systems, thermal design of receivers. Solar parabolic dish; design considerations, Sterling engine, Brayton cycle, tracking and control systems. Solar tower concepts; tower design, heliostat design, receiver types, trackingand control systems. Material and product/technology overview for the above technologies. Linear Fresnel reflector, Solar chimney. Technology overview, design considerations, materials. Performance study, site selection and land requirement.
PH-773 Bio-Energy Technology
Current energy consumption, overview of biofuel/bioenergy and biorefinery concepts. Fundamental concepts in understanding biofuel/bioenergy production Renewable feedstocks and their production. Feedstocks availability, characterization and attributes for biofuel/bioenergy production Biomass preprocessing: drying, size reduction, and densification. Various biofuels/bioenergy from biomass. Biomass conversion to heat and power: thermal gasification of biomass, anaerobic Digestion. Biomass conversion to biofuel: thermochemical conversion, syngas Fermentation Biochemical conversion to ethanol: biomass pretreatment. Different enzymes, enzyme hydrolysis, and their applications in ethanol production Biodiesel production from oil seeds, waste oils and algae. Environmental impacts of biofuel production. Energy balance and life-cycle analysis of biofuel production. Value-added processing of biofuel residues and co-products.
H-776 Monte Carlo Methods
Introduction to stochastic techniques, random number generation, probability theory, probability distribution functions, discrete and continuous pdfs, direct sampling methods, rejection techniques, importance sampling methods, random walks, diffusion and biased diffusion, Metropolis algorithm and its applications, error estimation and error reduction techniques, multivariate distributions, random walk filters, applications of MC methods (Ising model, Heisenberg model in statistical physics, neutron transport, radiation transport, study cases using large computer codes using MC methods such as GEANT-4, MCNP etc.
PH-777 Non-Linear Dynamics in Physics
Dynamical systems, phase space, Poincare section, spectral analysis, Basin of attraction, bifurcation diagrams; the Logistic map, period doubling, Lyapunov exponents, entropy; Characterization of chaotic attractors; prediction of chaotic states, method of analogues, linear approximation method, modification of chaotic states; spatio-temporal chaos, intermittency; Quantum maps, chaos in non-equilibrium statistical mechanics, driven systems; inter-mode traces in the propagator for particle in the box.
PH-778 Computational Statistical Physics
Review of thermodynamics and Statistical Mechanics. Empirical equation of state. Ideal gas laws. Van der Waal’s equation. Critical Phenomenon. Hugoniot equation. Mie-Gruneisen equation. Semi-empirical theory of Gruneisen ratio. Theoretical calculations of equation of state. Exactly soluble models. Classical ideal gas. Non-interacting Fermi gas. Non-interacting Bose gas. Paramagnets. Ising model. Approximate methods. Thomson-Fermi model. Debye-Huckle theory. Statistical mechanics of Plasmas. Cluster expansions. Computer based calculations of equation of state. Methods of molecular dynamics and Monte Carlo Techniques.
PH-779 Computational Condensed Matter Physics
Scattering theory, quantum scattering, calculation of cross-sections; Variational techniques, solution of generalized eigenvalue problems; Hartree-Fock method, the helium atom, many electron system, Slater determinants; Density functional theory, local approximation, exchange and correlation, applications; Molecular dynamics simulations, molecular systems, Langevin dynamics, ensembles and integrators, quantum molecular dynamics; Stochastic techniques; quantum Monte Carlo: variational diffusion, path-integral.
PH-780 Special Topics in Physics - I
This is a course on advances in Physics not already covered in the syllabus. This special paper may be conducted as a lecture course or as an independent study course. The topic and contents of this paper must be approved by the BOS, AU.
PH-781 Special Topics in Physics - II
|
3a7951d9110d35e3 | Home arrow Hydroelectric Power
Hydrogen PDF Print E-mail
Written by Administrator
Sunday, 26 August 2007
From Wikipedia, the free encyclopedia.
Jump to: navigation, search
1 - hydrogen helium
Name, Symbol, Number hydrogen, H, 1
Chemical series nonmetals
Group, Period, Block 1, 1, s
Appearance colorless
Atomic mass 1.00794(7) g/mol
Electron configuration 1s1
Electrons per shell 1
Physical properties
Phase gas
Density (0 °C, 101.325 kPa)
0.08988 g/L
Melting point 14.01 K
(-259.14 °C, -434.45 °F)
Boiling point 20.28 K
(-252.87 °C, -423.17 °F)
Triple point 13.8033 K, 7.042 kPa
Heat of fusion (H2) 0.117 kJ/mol
Heat of vaporization (H2) 0.904 kJ/mol
Heat capacity (25 °C) (H2)
28.836 J/(mol·K)
Vapor pressure
P/Pa 1 10 100 1 k 10 k 100 k
at T/K 15 20
Critical temperature 32.19 K
Critical pressure 1.315 MPa
Critical density 30.12 g/L
Atomic properties
Crystal structure hexagonal
Oxidation states 1, -1
(amphoteric oxide)
Electronegativity 2.20 (Pauling scale)
Ionization energies 1st: 1312.0 kJ/mol
Atomic radius 25 pm
Atomic radius (calc.) 53 pm (Bohr radius)
Covalent radius 37 pm
Van der Waals radius 120 pm
Magnetic ordering ???
Thermal conductivity (300 K) 180.5 mW/(m·K)
CAS registry number 1333-74-0
Notable isotopes
Main article: Isotopes of hydrogen
iso NA half-life DM DE (MeV) DP
1H 99.985% H is stable with 0 neutrons
2H 0.015% H is stable with 1 neutrons
3H trace 12.32 y β- 0.019 3He
Hydrogen (Latin: hydrogenium, from Greek: hydro: water, genes: forming) is a chemical element in the periodic table that has the symbol H and atomic number 1. At standard temperature and pressure it is a colorless, odorless, nonmetallic, univalent, tasteless, highly flammable diatomic gas. Hydrogen is the lightest and most abundant element in the universe. It is present in water, all organic compounds (rare exceptions exist, such as buckminsterfullerene) and in all living organisms. Hydrogen is able to react chemically with most other elements. Stars in their main sequence are overwhelmingly composed of hydrogen in its plasma state. The element is used in ammonia production, as a lifting gas, as an alternative fuel, and more recently as a power source of fuel cells.
Despite its ubiquity in the universe, hydrogen is surprisingly difficult to produce in large quantities on the Earth. In the laboratory, the element is prepared by the reaction of acids on metals such as zinc. The electrolysis of water is a simple method of producing hydrogen, but is economically inefficient for mass production. Large-scale production is usually achieved by steam reforming natural gas. Scientists are now researching new methods for hydrogen production; if they succeed in developing a cost-efficient method of large-scale production, hydrogen may become a viable alternative to greenhouse-gas-producing fossil fuels. One of the methods under investigation involves the use of green algae; another promising method involves the conversion of biomass derivatives such as glucose or sorbitol at low temperatures using a catalyst. Yet another method is the "steaming" of carbon, whereby hydrocarbons are broken down with heat to release hydrogen.
Basic features
Hydrogen is the lightest chemical element; its most common isotope comprises just one negatively charged electron, distributed around a positively charged proton (the nucleus of the atom). The electron is bound to the proton by the Coulomb force, the electrical force that one stationary, electrically charged nanoparticle exerts on another. The hydrogen atom has special significance in quantum mechanics as a simple physical system for which there is an exact solution to the Schrödinger equation; from that equation, the experimentally observed frequencies and intensities of hydrogen's spectral lines can be calculated. Spectral lines are dark or bright lines in an otherwise uniform and continuous spectrum, resulting from an excess or deficiency of photons in a narrow frequency range, compared with the nearby frequencies.
At standard temperature and pressure, hydrogen forms a diatomic gas, H2, with a boiling point of only 20.27 K and a melting point of 14.02 K.[1] Under extreme pressures, such as those at the centre of gas giants, the molecules lose their identity and the hydrogen becomes a metal (metallic hydrogen). Under the extremely low pressure in space—virtually a vacuum—the element tends to exist as individual atoms, simply because there is no way for them to combine. However, clouds of H2 and possibly singular hydrogen atoms are said to form in H I and H II regions and are associated with star formation. Hydrogen plays a vital role in powering stars through the proton–proton and carbon–nitrogen cycle. These are nuclear fusion processes, which release huge amounts of energy in stars and other hot celestial bodies as hydrogen atoms combine into helium atoms.
At high temperatures, hydrogen gas can exist as a mixture of atoms, protons, and negatively charged hydride ions. This mixture has a high emissivity and absorptivity in the visible light range, and plays an important part in the emission of light from the sun and other stars.
H2 is highly soluble in water, alcohol, and ether. It has a high capacity for adsorption, in which it is attached to and held to the surface of some substances. It is an odorless, tasteless, colorless, and highly flammable gas that burns at concentrations as low as 4% H2 in air. It reacts violently with chlorine and fluorine, forming hydrohalic acids that can damage the lungs and other tissues. When mixed with oxygen, hydrogen explodes upon ignition. A unique property of hydrogen is that its flame is completely invisible in air. This makes it difficult to tell if a leak is burning, and carries the added risk that it is easy to walk into a hydrogen fire inadvertently.
See also: hydrogen atom.
Large quantities of hydrogen are needed in the chemical and petroleum industries, notably in the Haber process for the production of ammonia, which by mass ranks as the world's fifth most produced industrial compound. Hydrogen is used in the hydrogenation of fats and oils (found in items such as margarine), and in the production of methanol. Hydrogen is used in hydrodealkylation, hydrodesulfurization, and hydrocracking[2]. The element has several other important uses.
There are no "hydrogen wells" or "hydrogen mines" on Earth, so hydrogen cannot be considered a primary energy source such as fossil fuels or uranium. Hydrogen can however be burned in internal combustion engines, an approach advocated by BMW's experimental hydrogen car. However, it is currently difficult and dangerous to store and handle in sufficient quantity for motor fuel use. Hydrogen fuel cells are being investigated as mobile power sources with lower emissions than hydrogen-burning internal combustion engines. The low emissions of hydrogen in internal combustion engines and fuel cells are currently offset by the pollution created by hydrogen production. This may change if the substantial amounts of electricity required for water electrolysis can be generated primarily from low pollution sources such as nuclear energy or wind. Research is being conducted on hydrogen as a replacement for fossil fuels. It could become the link between a range of energy sources, carriers and storage. Hydrogen can be converted to and from electricity (solving the electricity storage and transport issues), from biofuels, and from and into natural gas and diesel fuel. All of this can theoretically be achieved with zero emissions of CO2 and toxic pollutants.
Hydrogen was first produced by Theophratus Bombastus von Hohenheim (14931541)—also known as Paracelsus—by mixing metals with acids. He was unaware that the explosive gas produced by this chemical reaction was hydrogen. In 1671, Robert Boyle described the reaction between two iron fillings and dilute acids, which results in the production of gaseous hydrogen.[3] In 1766, Henry Cavendish was the first to recognize hydrogen as a discrete substance, by identifying the gas from this reaction as "inflammable" and finding that the gas produces water when burned in air. Cavendish stumbled on hydrogen when experimenting with acids and mercury. Although he wrongly assumed that hydrogen was a compound of mercury—and not of the acid—he was still able to accurately describe several key properties of hydrogen.
Antoine Lavoisier gave the element its name and proved that water is composed of hydrogen and oxygen. One of the first uses of the element was for balloons. The hydrogen was obtained by mixing sulfuric acid and iron. In 1931, Harold C. Urey discovered deuterium, an isotope of hydrogen, by repeated distilling the same sample of water. For this discovery, Urey received the Nobel Prize in Chemistry in 1934. In the same year, the third isotope, tritium, was discovered. Because of its relatively simple structure, hydrogen has often been used in models of how an atom works.
Electron energy levels
With the Bohr Model, the energy levels of hydrogen can be calculated fairly accurately. This is done by modeling the electron as revolving around the proton, much like the earth revolving around the sun. Except the sun holds earth in orbit with the force of gravity, but the proton holds the electron in orbit with the force of electromagnetism. Another difference between the Earth-Sun system and the electron-proton system is that, in this model, due to quantum mechanics the electron is allowed to only be at very specific distances from the proton. Modeling the hydrogen atom in this fashion yields the correct energy levels and spectrum.
Hydrogen is the most abundant element in the universe, making up 75% of normal matter by mass and over 90% by number of atoms. [4] This element is found in great abundance in stars and gas giant planets. It is very rare in the Earth's atmosphere (1 ppm by volume), because being the lightest gas causes it to escape Earth's gravity, though when compounds are considered, it is the tenth most abundant element on Earth. The most common source for this element on Earth is water, which is composed two parts hydrogen to one part oxygen (H2O). Other sources include most forms of organic matter including coal, natural gas, and other fossil fuels. Methane (CH4) is an increasingly important source of hydrogen.
Throughout the universe, hydrogen is mostly found in the plasma state whose properties are quite different to molecular hydrogen. As a plasma, hydrogen's electron and proton are not bound together, resulting in very high electrical conductivity, even when the gas is only partially ionized. The charged particles are highly influenced by magnetic and electric fields, for example, in the solar wind they interact with the Earth's magnetosphere giving rise to Birkeland currents and the aurora.
Hydrogen can be prepared in several different ways: steam on heated carbon, hydrocarbon decomposition with heat, reaction of a strong base in an aqueous solution with aluminium, water electrolysis, or displacement from acids with certain metals. Commercial bulk hydrogen is usually produced by the steam reforming of natural gas. At high temperatures (700–1100 °C), steam reacts with methane to yield carbon monoxide and hydrogen.
CH4 + H2O CO + 3 H2
CO + H2O CO2 + H2
Hydrogen combines with oxygen to form water, H2O, and releases significant amounts of energy in doing so, burning explosively in air. Deuterium oxide, or D2O, is commonly referred to as heavy water. Hydrogen also forms a vast array of compounds with carbon. Because of their association with living things, these compounds are called organic compounds, and the study of the properties of these compounds is called organic chemistry.
First tracks observed in Liquid hydrogen bubble chamber.
First tracks observed in Liquid hydrogen bubble chamber.
Under normal conditions, hydrogen gas is a mix of two different kinds of molecules which differ from one another by the relative spin of the nuclei.[5] These two forms are known as ortho- and para-hydrogen (this is different from isotopes, see below). In ortho-hydrogen the nuclear spins are parallel and form a triplet, while in para they are antiparallel and form a singlet. At standard conditions hydrogen is composed of about 25% of the para form and 75% of the ortho form (the so-called "normal" form). The equilibrium ratio of these two forms depends on temperature, but since the ortho form has higher energy (is an excited state), it cannot be stable in its pure form. At low temperatures (around boiling point), the equilibrium state is comprised almost entirely of the para form.
The conversion process between the forms is slow, and if hydrogen is cooled down and condensed rapidly, it contains large quantities of the ortho form. It is important in preparation and storage of liquid hydrogen, since the ortho-para conversion produces more heat than the heat of its evaporation, and a lot of hydrogen can be lost by evaporation in this way during several days after liquefying. Therefore, some catalysts of the ortho-para conversion process are used during hydrogen cooling. The two forms have also slightly different physical properties. For example, the melting and boiling points of parahydrogen are about 0.1 K lower than of the "normal" form.
Main Article: Isotopes of hydrogen
The most common isotope of hydrogen, this stable isotope has a nucleus consisting of a single proton; hence the descriptive, although rarely used, name protium. The spin of a protium atom is 1/2+. [6]
The other stable isotope is deuterium, with an extra neutron in the nucleus. Deuterium comprises 0.0184%–0.0082% of all hydrogen (IUPAC); ratios of deuterium to protium are reported relative to the VSMOW standard reference water. The spin of a deuterium atom is 1+.
The third naturally occurring hydrogen isotope is the radioactive tritium. The tritium nucleus contains two neutrons in addition to the proton. It decays through beta decay and has a half-life of 12.32 years. Tritium occurs naturally due to cosmic rays interacting with atmospheric gases. Like ordinary hydrogen, tritium reacts with the oxygen in the atmosphere to form T2O. This radioactive "water" molecule constantly enters the Earth's seas and lakes in the form of slightly radioactive rain, but its half-life is short enough to prevent a buildup of hazardous radioactivity. The spin of a tritium atom is 1/2+.
Hydrogen-4 was synthesized by bombarding tritium with fast-moving deuterium nuclei. It decays through neutron emission and has a half-life of 9.93696x10-23 seconds. The spin of a hydrogen-4 atom is 2-.
Hydrogen-6 decays through triple neutron emission and has a half-life of 3.26500x10-22 seconds.
In 2003 hydrogen-7 was created (article) at the RIKEN laboratory in Japan by colliding a high-energy beam of helium-8 atoms with a cryogenic hydrogen target and detecting tritons—the nuclei of tritium atoms—and neutrons from the breakup of hydrogen-7, the same method used to produce and detect hydrogen-5.
Scientists from the University of Colorado at Boulder discovered in 2005 that microbes living in the hot waters of Yellowstone National Park gain their sustenance from molecular hydrogen.
See also
1. ^ A PDF file from commonsensescience.org on hydrogen. URL accessed on September 15, 2005.
2. ^ Los Alamos National Laboratory – Hydrogen. URL accessed on September 15, 2005.
3. ^ Webelements – Hydrogen historical information. URL accessed on September 15, 2005.
4. ^ Universal Industrial Gases, Inc. – Hydrogen (H2) Applications and Uses. URL accessed on September 15, 2005.
5. ^ Jefferson Lab – Hydrogen. URL accessed on September 15, 2005.
6. ^ Lawrence Berkeley National Laboratory – Hydrogen isotopes. URL accessed on September 15, 2005.
Book references
• Stwertka, Albert (2002). A Guide to the Elements, Oxford University Press, New York, NY. ISBN 0195150279.
• Krebs, Robert E. (1998). The History and Use of Our Earth's Chemical Elements : A Reference Guide, Greenwood Press, Westport, Conn.. ISBN 0313301239.
• Newton, David E. (1994). The Chemical Elements, Franklin Watts, New York, NY. ISBN 0531125017.
• Rigden, John S. (2002). Hydrogen : The Essential Element, Harvard University Press, Cambridge, MA. ISBN 0531125017.
Last Updated ( Sunday, 26 August 2007 )
< Prev Next >
© 2018 iENERGY Inc. |
272147e58075ba66 | tisdag 25 november 2014
The Radiating Atom 1: Schrödinger's Enigma
Are there quantum jumps?
This is a first step in my search for a wave equation for a radiating atom as an analog of the wave equation with small damping studied in Mathematical Physics of Blackbody Radiation.
Schrödinger formulated his basic equation of quantum mechanics in the last of his four legendary articles on Quantisation as a Problem of Proper Values I-IV from 1926. Central to quantum mechanics is the basic relation (with $h$ Planck's constant)
• $\nu = (E_2 - E_1)/h$
between the frequency $\nu$ of emitted radiation, and the difference in energy $E_2 - E_1$ between two solutions $\psi_1(x,t)=\exp(i\nu_1t)\phi_1(x)$ and $\psi_2(x,t)=\exp(i\nu_2t)\phi_2(x)$ satisfying Schrödinger's equation
• $ih\frac{\partial\psi}{\partial t} + H\psi = 0$
where $H\phi_1=E_1\phi_1$ and $H\phi_2=E_2\phi_2$ with $E_1=h\nu_1$ and $E_2=h\nu_2$ and $H$ is the Hamiltonian operator acting with respect to a space coordinate $x$.
To connect to the basic relation, consider the function
• $\Psi (x,t) = \vert\Phi (x,t)\vert^2 = \Phi (x,t)\overline\Phi (x,t)$,
• $\Phi (x,t) = c_1\psi_1(x,t)+c_2\psi_1(x,t)$
a linear combination with coeffcients $c_1$ and $c_2$.
Direct computation shows that $\Psi (x,t)$ has a time dependency of the form
• $\exp(i(\nu_2 -\nu_1)t)$,
and thus corresponds to a beat between two frequencies as an interference phenomenon.
Interference between two eigen-states of energies $E_2$ and $E_2$ can thus naturally be viewed as a resonance phenomenon or beat-interference of frequency $\nu =(E_2 - E_1)/h$, which can be associated with emitted radiation from an oscillation of the modulus $\Psi (x,t)$ of the same frequency , because a pulsating charge generates a pulsating electromagnetic field.
It remains to formulate a Schrödinger equation with (small) radiation damping for an atom as an analogue of the wave equation studied in Mathematical Physics of Blackbody Radiation, an equation describing atomic oscillation between two energy levels as the origin of observable emitted radiation.
It is encouraging to note that Schrödinger in his article IV directly connects to radiation damping as an essential element of a mathematical model for an atom, a connection which is not present in the standard Schrödinger equation without radiation damping.
The mantra that presents itself is:
• Listen to the beat of the atom!
The model should contain a damping coefficient which vanishes when $\nu$ is an eigenvalue of the Hamiltonian and is small else. This makes the beat observable, while eigenvalues and eigenfunctions of the Hamiltonian are not.
Inga kommentarer:
Skicka en kommentar |
66b60a04a26e0aa3 | Powered by
Share this page on
Article provided by Wikipedia
Wolfgang Pauli
The Pauli exclusion principle is the "quantum mechanical principle which states that two or more "identical "fermions (particles with half-integer "spin) cannot occupy the same "quantum state within a "quantum system simultaneously. In the case of "electrons in atoms, it can be stated as follows: it is impossible for two electrons of a poly-electron atom to have the same values of the four "quantum numbers: n, the "principal quantum number, , the "angular momentum quantum number, m, the "magnetic quantum number, and ms, the "spin quantum number. For example, if two electrons reside in the same "orbital, and if their n, , and m values are the same, then their ms must be different, and thus the electrons must have opposite half-integer spin projections of 1/2 and −1/2. This principle was formulated by Austrian physicist "Wolfgang Pauli in 1925 for electrons, and later extended to all fermions with his "spin–statistics theorem of 1940.
Particles with an integer spin, or "bosons, are not subject to the Pauli exclusion principle: any number of identical bosons can occupy the same quantum state, as with, for instance, photons produced by a "laser and "Bose–Einstein condensate.
A more rigorous statement is that with respect to exchange of two identical particles the total "wave function is "antisymmetric for fermions, and symmetric for bosons. This means that if the space and spin co-ordinates of two identical particles are interchanged the wave function changes its sign for fermions, and does not change for bosons.
The Pauli exclusion principle describes the behavior of all "fermions (particles with "half-integer "spin"), while "bosons (particles with "integer spin") are subject to other principles. Fermions include "elementary particles such as "quarks, "electrons and "neutrinos. Additionally, baryons such as "protons and "neutrons ("subatomic particles composed from three quarks) and some "atoms (such as "helium-3) are fermions, and are therefore described by the Pauli exclusion principle as well. Atoms can have different overall "spin", which determines whether they are fermions or bosons — for example "helium-3 has spin 1/2 and is therefore a fermion, in contrast to "helium-4 which has spin 0 and is a boson.[1]:123–125 As such, the Pauli exclusion principle underpins many properties of everyday matter, from its large-scale stability, to the "chemical behavior of atoms.
"Half-integer spin" means that the intrinsic "angular momentum value of fermions is (reduced "Planck's constant) times a "half-integer (1/2, 3/2, 5/2, etc.). In the theory of "quantum mechanics fermions are described by "antisymmetric states. In contrast, particles with integer spin (called bosons) have symmetric wave functions; unlike fermions they may share the same quantum states. Bosons include the "photon, the "Cooper pairs which are responsible for "superconductivity, and the "W and Z bosons. (Fermions take their name from the "Fermi–Dirac statistical distribution that they obey, and bosons from their "Bose–Einstein distribution.)
In the early 20th century it became evident that atoms and molecules with even numbers of electrons are more "chemically stable than those with odd numbers of electrons. In the 1916 article "The Atom and the Molecule" by "Gilbert N. Lewis, for example, the third of his six postulates of chemical behavior states that the atom tends to hold an even number of electrons in any given shell, and especially to hold eight electrons which are normally arranged symmetrically at the eight corners of a cube (see: "cubical atom).[2] In 1919 chemist "Irving Langmuir suggested that the "periodic table could be explained if the electrons in an atom were connected or clustered in some manner. Groups of electrons were thought to occupy a set of "electron shells around the nucleus.[3] In 1922, "Niels Bohr updated his model of the atom by assuming that certain numbers of electrons (for example 2, 8 and 18) corresponded to stable "closed shells".[4]:203
Pauli looked for an explanation for these numbers, which were at first only "empirical. At the same time he was trying to explain experimental results of the "Zeeman effect in atomic "spectroscopy and in "ferromagnetism. He found an essential clue in a 1924 paper by "Edmund C. Stoner, which pointed out that, for a given value of the "principal quantum number (n), the number of energy levels of a single electron in the "alkali metal spectra in an external magnetic field, where all "degenerate energy levels are separated, is equal to the number of electrons in the closed shell of the "noble gases for the same value of n. This led Pauli to realize that the complicated numbers of electrons in closed shells can be reduced to the simple rule of one electron per state, if the electron states are defined using four quantum numbers. For this purpose he introduced a new two-valued quantum number, identified by "Samuel Goudsmit and "George Uhlenbeck as "electron spin.[5][6]
Connection to quantum state symmetry[edit]
The Pauli exclusion principle with a single-valued many-particle wavefunction is equivalent to requiring the wavefunction to be antisymmetric. An antisymmetric two-particle state is represented as a "sum of states in which one particle is in state and the other in state , and is given by:
and antisymmetry under exchange means that A(x,y) = −A(y,x). This implies A(x,y) = 0 when x = y, which is Pauli exclusion. It is true in any basis since local changes of basis keep antisymmetric matrices antisymmetric.
is necessarily antisymmetric. To prove it, consider the matrix element
The first and last terms are diagonal elements and are zero, and the whole sum is equal to zero. So the wavefunction matrix elements obey:
Pauli principle in advanced quantum theory[edit]
According to the "spin–statistics theorem, particles with integer spin occupy symmetric quantum states, and particles with half-integer spin occupy antisymmetric states; furthermore, only integer or half-integer values of spin are allowed by the principles of quantum mechanics. In relativistic "quantum field theory, the Pauli principle follows from applying a "rotation operator in "imaginary time to particles of half-integer spin.
In one dimension, bosons, as well as fermions, can obey the exclusion principle. A one-dimensional Bose gas with delta-function repulsive interactions of infinite strength is equivalent to a gas of free fermions. The reason for this is that, in one dimension, exchange of particles requires that they pass through each other; for infinitely strong repulsion this cannot happen. This model is described by a quantum "nonlinear Schrödinger equation. In momentum space the exclusion principle is valid also for finite repulsion in a Bose gas with delta-function interactions,[7] as well as for "interacting spins and "Hubbard model in one dimension, and for other models solvable by "Bethe ansatz. The "ground state in models solvable by Bethe ansatz is a "Fermi sphere.
Atoms and the Pauli principle[edit]
The Pauli exclusion principle helps explain a wide variety of physical phenomena. One particularly important consequence of the principle is the elaborate "electron shell structure of "atoms and the way atoms share electrons, explaining the variety of chemical elements and their chemical combinations. An "electrically neutral atom contains bound "electrons equal in number to the protons in the "nucleus. Electrons, being fermions, cannot occupy the same quantum state as other electrons, so electrons have to "stack" within an atom, i.e. have different spins while at the same electron orbital as described below.
An example is the neutral "helium atom, which has two bound electrons, both of which can occupy the lowest-energy ("1s) states by acquiring opposite spin; as spin is part of the quantum state of the electron, the two electrons are in different quantum states and do not violate the Pauli principle. However, the spin can take only two different values ("eigenvalues). In a "lithium atom, with three bound electrons, the third electron cannot reside in a 1s state, and must occupy one of the higher-energy 2s states instead. Similarly, successively larger elements must have shells of successively higher energy. The chemical properties of an element largely depend on the number of electrons in the outermost shell; atoms with different numbers of occupied electron shells but the same number of electrons in the outermost shell have similar properties, which gives rise to the "periodic table of the elements.[8]:214–218
Solid state properties and the Pauli principle[edit]
In "conductors and "semiconductors, there are very large numbers of "molecular orbitals which effectively form a continuous "band structure of "energy levels. In strong conductors ("metals) electrons are so "degenerate that they cannot even contribute much to the "thermal capacity of a metal.[9]:133–147 Many mechanical, electrical, magnetic, optical and chemical properties of solids are the direct consequence of Pauli exclusion.
Stability of matter[edit]
The stability of the electrons in an atom itself is unrelated to the exclusion principle, but is described by the quantum theory of the atom. The underlying idea is that close approach of an electron to the nucleus of the atom necessarily increases its kinetic energy, an application of the "uncertainty principle of Heisenberg.[10] However, stability of large systems with many electrons and many "nucleons is a different matter, and requires the Pauli exclusion principle.[11]
It has been shown that the Pauli exclusion principle is responsible for the fact that ordinary bulk matter is stable and occupies volume. This suggestion was first made in 1931 by "Paul Ehrenfest, who pointed out that the electrons of each atom cannot all fall into the lowest-energy orbital and must occupy successively larger shells. Atoms therefore occupy a volume and cannot be squeezed too closely together.[12]
A more rigorous proof was provided in 1967 by "Freeman Dyson and Andrew Lenard, who considered the balance of attractive (electron–nuclear) and repulsive (electron–electron and nuclear–nuclear) forces and showed that ordinary matter would collapse and occupy a much smaller volume without the Pauli principle.[13][14]
The consequence of the Pauli principle here is that electrons of the same spin are kept apart by a repulsive "exchange interaction, which is a short-range effect, acting simultaneously with the long-range electrostatic or "Coulombic force. This effect is partly responsible for the everyday observation in the macroscopic world that two solid objects cannot be in the same place at the same time.
Astrophysics and the Pauli principle[edit]
"Freeman Dyson and Andrew Lenard did not consider the extreme magnetic or gravitational forces that occur in some "astronomical objects. In 1995 "Elliott Lieb and coworkers showed that the Pauli principle still leads to stability in intense magnetic fields such as in "neutron stars, although at a much higher density than in ordinary matter.[15] It is a consequence of "general relativity that, in sufficiently intense gravitational fields, matter collapses to form a "black hole.
Astronomy provides a spectacular demonstration of the effect of the Pauli principle, in the form of "white dwarf and "neutron stars. In both bodies, atomic structure is disrupted by extreme pressure, but the stars are held in "hydrostatic equilibrium by "degeneracy pressure, also known as Fermi pressure. This exotic form of matter is known as "degenerate matter. The immense gravitational force of a star's mass is normally held in equilibrium by "thermal pressure caused by heat produced in "thermonuclear fusion in the star's core. In white dwarfs, which do not undergo nuclear fusion, an opposing force to gravity is provided by "electron degeneracy pressure. In "neutron stars, subject to even stronger gravitational forces, electrons have merged with "protons to form "neutrons. Neutrons are capable of producing an even higher degeneracy pressure, "neutron degeneracy pressure, albeit over a shorter range. This can stabilize neutron stars from further collapse, but at a smaller size and higher "density than a white dwarf. Neutron stars are the most "rigid" objects known; their "Young modulus (or more accurately, "bulk modulus) is 20 orders of magnitude larger than that of "diamond. However, even this enormous rigidity can be overcome by the "gravitational field of a massive star or by the pressure of a "supernova, leading to the formation of a "black hole.[16]:286–287
See also[edit]
1. ^ Kenneth S. Krane (5 November 1987). Introductory Nuclear Physics. Wiley. "ISBN "978-0-471-80553-3.
2. ^ [1]
3. ^ Langmuir, Irving (1919). "The Arrangement of Electrons in Atoms and Molecules" (PDF). Journal of the American Chemical Society. 41 (6): 868–934. "doi:10.1021/ja02227a002. Archived from the original (PDF) on 2012-03-30. Retrieved 2008-09-01.
4. ^ Shaviv, Glora. The Life of Stars: The Controversial Inception and Emergence of the Theory of Stellar Structure (2010 ed.). Springer. "ISBN "978-3642020872.
5. ^ Straumann, Norbert (2004). "The Role of the Exclusion Principle for Atoms to Stars: A Historical Account". Invited talk at the 12th Workshop on Nuclear Astrophysics. "CiteSeerX accessible.
6. ^ Pauli, W. (1925). "Über den Zusammenhang des Abschlusses der Elektronengruppen im Atom mit der Komplexstruktur der Spektren". Zeitschrift für Physik. 31: 765–783. "doi:10.1007/BF02980631.
7. ^ A. Izergin and V. Korepin, Letter in Mathematical Physics vol 6, page 283, 1982
8. ^ Griffiths, David J. (2004), Introduction to Quantum Mechanics (2nd ed.), Prentice Hall, "ISBN "0-13-111892-7
9. ^ Kittel, Charles (2005), Introduction to Solid State Physics (8th ed.), USA: John Wiley & Sons, Inc., "ISBN "978-0-471-41526-8
10. ^ Elliot J. Lieb The Stability of Matter and Quantum Electrodynamics
11. ^ This realization is attributed by Lieb and by G. L. Sewell (2002). Quantum Mechanics and Its Emergent Macrophysics. Princeton University Press. "ISBN "0-691-05832-6. to F. J. Dyson and A. Lenard: Stability of Matter, Parts I and II (J. Math. Phys., 8, 423–434 (1967); J. Math. Phys., 9, 698–711 (1968) ).
12. ^ As described by F. J. Dyson (J.Math.Phys. 8, 1538–1545 (1967)), Ehrenfest made this suggestion in his address on the occasion of the award of the "Lorentz Medal to Pauli.
14. ^ Dyson, Freeman (1967). "Ground‐State Energy of a Finite System of Charged Particles". J. Math. Phys. 8 (8): 1538–1545. "Bibcode:1967JMP.....8.1538D. "doi:10.1063/1.1705389.
15. ^ Lieb, E. H.; Loss, M.; Solovej, J. P. (1995). "Stability of Matter in Magnetic Fields". "Physical Review Letters. 75 (6): 985–9. "arXiv:cond-mat/9506047Freely accessible. "Bibcode:1995PhRvL..75..985L. "doi:10.1103/PhysRevLett.75.985.
16. ^ Martin Bojowald (5 November 2012). The Universe: A View from Classical and Quantum Gravity. John Wiley & Sons. "ISBN "978-3-527-66769-7.
External links[edit]
|
1c57f4ca9bc6b51f | Causal Determinism
First published Thu Jan 23, 2003; substantive revision Thu Jan 21, 2016
Causal determinism is, roughly speaking, the idea that every event is necessitated by antecedent events and conditions together with the laws of nature. The idea is ancient, but first became subject to clarification and mathematical analysis in the eighteenth century. Determinism is deeply connected with our understanding of the physical sciences and their explanatory ambitions, on the one hand, and with our views about human free action on the other. In both of these general areas there is no agreement over whether determinism is true (or even whether it can be known true or false), and what the import for human agency would be in either case.
1. Introduction
In most of what follows, I will speak simply of determinism, rather than of causal determinism. This follows recent philosophical practice of sharply distinguishing views and theories of what causation is from any conclusions about the success or failure of determinism (cf. Earman, 1986; an exception is Mellor 1994). For the most part this disengagement of the two concepts is appropriate. But as we will see later, the notion of cause/effect is not so easily disengaged from much of what matters to us about determinism.
Traditionally determinism has been given various, usually imprecise definitions. This is only problematic if one is investigating determinism in a specific, well-defined theoretical context; but it is important to avoid certain major errors of definition. In order to get started we can begin with a loose and (nearly) all-encompassing definition as follows:
Determinism: The world is governed by (or is under the sway of) determinism if and only if, given a specified way things are at a time t, the way things go thereafter is fixed as a matter of natural law.
The italicized phrases are elements that require further explanation and investigation, in order for us to gain a clear understanding of the concept of determinism.
The roots of the notion of determinism surely lie in a very common philosophical idea: the idea that everything can, in principle, be explained, or that everything that is, has a sufficient reason for being and being as it is, and not otherwise. In other words, the roots of determinism lie in what Leibniz named the Principle of Sufficient Reason. But since precise physical theories began to be formulated with apparently deterministic character, the notion has become separable from these roots. Philosophers of science are frequently interested in the determinism or indeterminism of various theories, without necessarily starting from a view about Leibniz' Principle.
Since the first clear articulations of the concept, there has been a tendency among philosophers to believe in the truth of some sort of determinist doctrine. There has also been a tendency, however, to confuse determinism proper with two related notions: predictability and fate.
Fatalism is the thesis that all events (or in some versions, at least some events) are destined to occur no matter what we do. The source of the guarantee that those events will happen is located in the will of the gods, or their divine foreknowledge, or some intrinsic teleological aspect of the universe, rather than in the unfolding of events under the sway of natural laws or cause-effect relations. Fatalism is therefore clearly separable from determinism, at least to the extent that one can disentangle mystical forces and gods' wills and foreknowledge (about specific matters) from the notion of natural/causal law. Not every metaphysical picture makes this disentanglement possible, of course. But as a general matter, we can imagine that certain things are fated to happen, without this being the result of deterministic natural laws alone; and we can imagine the world being governed by deterministic laws, without anything at all being fated to occur (perhaps because there are no gods, nor mystical/teleological forces deserving the titles fate or destiny, and in particular no intentional determination of the “initial conditions” of the world). In a looser sense, however, it is true that under the assumption of determinism, one might say that given the way things have gone in the past, all future events that will in fact happen are already destined to occur.
Prediction and determinism are also easy to disentangle, barring certain strong theological commitments. As the following famous expression of determinism by Laplace shows, however, the two are also easy to commingle:
In this century, Karl Popper (1982) defined determinism in terms of predictability also, in his book The Open Universe.
Laplace probably had God in mind as the powerful intelligence to whose gaze the whole future is open. If not, he should have: 19th and 20th century mathematical studies showed convincingly that neither a finite, nor an infinite but embedded-in-the-world intelligence can have the computing power necessary to predict the actual future, in any world remotely like ours. But even if our aim is only to predict a well-defined subsystem of the world, for a limited period of time, this may be impossible for any reasonable finite agent embedded in the world, as many studies of chaos (sensitive dependence on initial conditions) show. Conversely, certain parts of the world could be highly predictable, in some senses, without the world being deterministic. When it comes to predictability of future events by humans or other finite agents in the world, then, predictability and determinism are simply not logically connected at all.
The equation of “determinism”with “predictability” is therefore a façon de parler that at best makes vivid what is at stake in determinism: our fears about our own status as free agents in the world. In Laplace's story, a sufficiently bright demon who knew how things stood in the world 100 years before my birth could predict every action, every emotion, every belief in the course of my life. Were she then to watch me live through it, she might smile condescendingly, as one who watches a marionette dance to the tugs of strings that it knows nothing about. We can't stand the thought that we are (in some sense) marionettes. Nor does it matter whether any demon (or even God) can, or cares to, actually predict what we will do: the existence of the strings of physical necessity, linked to far-past states of the world and determining our current every move, is what alarms us. Whether such alarm is actually warranted is a question well outside the scope of this article (see Hoefer (2002a), Ismael (2016) and the entries on free will and incompatibilist theories of freedom). But a clear understanding of what determinism is, and how we might be able to decide its truth or falsity, is surely a useful starting point for any attempt to grapple with this issue. We return to the issue of freedom in section 6, Determinism and Human Action, below.
2. Conceptual Issues in Determinism
Recall that we loosely defined causal determinism as follows, with terms in need of clarification italicized:
2.1 The World
Why should we start so globally, speaking of the world, with all its myriad events, as deterministic? One might have thought that a focus on individual events is more appropriate: an event E is causally determined if and only if there exists a set of prior events {A, B, C …} that constitute a (jointly) sufficient cause of E. Then if all—or even just most—events E that are our human actions are causally determined, the problem that matters to us, namely the challenge to free will, is in force. Nothing so global as states of the whole world need be invoked, nor even a complete determinism that claims all events to be causally determined.
For a variety of reasons this approach is fraught with problems, and the reasons explain why philosophers of science mostly prefer to drop the word “causal” from their discussions of determinism. Generally, as John Earman quipped (1986), to go this route is to “… seek to explain a vague concept—determinism—in terms of a truly obscure one—causation.” More specifically, neither philosophers' nor laymen's conceptions of events have any correlate in any modern physical theory.[1] The same goes for the notions of cause and sufficient cause. A further problem is posed by the fact that, as is now widely recognized, a set of events {A, B, C …} can only be genuinely sufficient to produce an effect-event if the set includes an open-ended ceteris paribus clause excluding the presence of potential disruptors that could intervene to prevent E. For example, the start of a football game on TV on a normal Saturday afternoon may be sufficient ceteris paribus to launch Ted toward the fridge to grab a beer; but not if a million-ton asteroid is approaching his house at .75c from a few thousand miles away, nor if his phone is about to ring with news of a tragic nature, …, and so on. Bertrand Russell famously argued against the notion of cause along these lines (and others) in 1912, and the situation has not changed. By trying to define causal determination in terms of a set of prior sufficient conditions, we inevitably fall into the mess of an open-ended list of negative conditions required to achieve the desired sufficiency.
Moreover, thinking about how such determination relates to free action, a further problem arises. If the ceteris paribus clause is open-ended, who is to say that it should not include the negation of a potential disruptor corresponding to my freely deciding not to go get the beer? If it does, then we are left saying “When A, B, C, … Ted will then go to the fridge for a beer, unless D or E or F or … or Ted decides not to do so.” The marionette strings of a “sufficient cause” begin to look rather tenuous.
They are also too short. For the typical set of prior events that can (intuitively, plausibly) be thought to be a sufficient cause of a human action may be so close in time and space to the agent, as to not look like a threat to freedom so much as like enabling conditions. If Ted is propelled to the fridge by {seeing the game's on; desiring to repeat the satisfactory experience of other Saturdays; feeling a bit thirsty; etc}, such things look more like good reasons to have decided to get a beer, not like external physical events far beyond Ted's control. Compare this with the claim that {state of the world in 1900; laws of nature} entail Ted's going to get the beer: the difference is dramatic. So we have a number of good reasons for sticking to the formulations of determinism that arise most naturally out of physics. And this means that we are not looking at how a specific event of ordinary talk is determined by previous events; we are looking at how everything that happens is determined by what has gone before. The state of the world in 1900 only entails that Ted grabs a beer from the fridge by way of entailing the entire physical state of affairs at the later time.
2.2 The way things are at a time t
The typical explication of determinism fastens on the state of the (whole) world at a particular time (or instant), for a variety of reasons. We will briefly explain some of them. Why take the state of the whole world, rather than some (perhaps very large) region, as our starting point? One might, intuitively, think that it would be enough to give the complete state of things on Earth, say, or perhaps in the whole solar system, at t, to fix what happens thereafter (for a time at least). But notice that all sorts of influences from outside the solar system come in at the speed of light, and they may have important effects. Suppose Mary looks up at the sky on a clear night, and a particularly bright blue star catches her eye; she thinks “What a lovely star; I think I'll stay outside a bit longer and enjoy the view.” The state of the solar system one month ago did not fix that that blue light from Sirius would arrive and strike Mary's retina; it arrived into the solar system only a day ago, let's say. So evidently, for Mary's actions (and hence, all physical events generally) to be fixed by the state of things a month ago, that state will have to be fixed over a much larger spatial region than just the solar system. (If no physical influences can go faster than light, then the state of things must be given over a spherical volume of space 1 light-month in radius.)
But in making vivid the “threat” of determinism, we often want to fasten on the idea of the entire future of the world as being determined. No matter what the “speed limit” on physical influences is, if we want the entire future of the world to be determined, then we will have to fix the state of things over all of space, so as not to miss out something that could later come in “from outside” to spoil things. In the time of Laplace, of course, there was no known speed limit to the propagation of physical things such as light-rays. In principle light could travel at any arbitrarily high speed, and some thinkers did suppose that it was transmitted “instantaneously.” The same went for the force of gravity. In such a world, evidently, one has to fix the state of things over the whole of the world at a time t, in order for events to be strictly determined, by the laws of nature, for any amount of time thereafter.
In all this, we have been presupposing the common-sense Newtonian framework of space and time, in which the world-at-a-time is an objective and meaningful notion. Below when we discuss determinism in relativistic theories we will revisit this assumption.
2.3 Thereafter
For a wide class of physical theories (i.e., proposed sets of laws of nature), if they can be viewed as deterministic at all, they can be viewed as bi-directionally deterministic. That is, a specification of the state of the world at a time t, along with the laws, determines not only how things go after t, but also how things go before t. Philosophers, while not exactly unaware of this symmetry, tend to ignore it when thinking of the bearing of determinism on the free will issue. The reason for this is that we tend to think of the past (and hence, states of the world in the past) as done, over, fixed and beyond our control. Forward-looking determinism then entails that these past states—beyond our control, perhaps occurring long before humans even existed—determine everything we do in our lives. It then seems a mere curious fact that it is equally true that the state of the world now determines everything that happened in the past. We have an ingrained habit of taking the direction of both causation and explanation as being past—-present, even when discussing physical theories free of any such asymmetry. We will return to this point shortly.
Another point to notice here is that the notion of things being determined thereafter is usually taken in an unlimited sense—i.e., determination of all future events, no matter how remote in time. But conceptually speaking, the world could be only imperfectly deterministic: things could be determined only, say, for a thousand years or so from any given starting state of the world. For example, suppose that near-perfect determinism were regularly (but infrequently) interrupted by spontaneous particle creation events, which occur only once every thousand years in a thousand-light-year-radius volume of space. This unrealistic example shows how determinism could be strictly false, and yet the world be deterministic enough for our concerns about free action to be unchanged.
2.4 Laws of nature
In the loose statement of determinism we are working from, metaphors such as “govern” and “under the sway of” are used to indicate the strong force being attributed to the laws of nature. Part of understanding determinism—and especially, whether and why it is metaphysically important—is getting clear about the status of the presumed laws of nature.
In the physical sciences, the assumption that there are fundamental, exceptionless laws of nature, and that they have some strong sort of modal force, usually goes unquestioned. Indeed, talk of laws “governing” and so on is so commonplace that it takes an effort of will to see it as metaphorical. We can characterize the usual assumptions about laws in this way: the laws of nature are assumed to be pushy explainers. They make things happen in certain ways , and by having this power, their existence lets us explain why things happen in certain ways. (For a defense of this perspective on laws, see Maudlin (2007)). Laws, we might say, are implicitly thought of as the cause of everything that happens. If the laws governing our world are deterministic, then in principle everything that happens can be explained as following from states of the world at earlier times. (Again, we note that even though the entailment typically works in the future→past direction also, we have trouble thinking of this as a legitimate explanatory entailment. In this respect also, we see that laws of nature are being implicitly treated as the causes of what happens: causation, intuitively, can only go past→future.)
Interestingly, philosophers tend to acknowledge the apparent threat determinism poses to free will, even when they explicitly reject the view that laws are pushy explainers. Earman (1986), for example, advocates a theory of laws of nature that takes them to be simply the best system of regularities that systematizes all the events in universal history. This is the Best Systems Analysis (BSA), with roots in the work of Hume, Mill and Ramsey, and most recently refined and defended by David Lewis (1973, 1994) and by Earman (1984, 1986). (cf. entry on laws of nature). Yet he ends his comprehensive Primer on Determinism with a discussion of the free will problem, taking it as a still-important and unresolved issue. Prima facie this is quite puzzling, for the BSA is founded on the idea that the laws of nature are ontologically derivative, not primary; it is the events of universal history, as brute facts, that make the laws be what they are, and not vice-versa. Taking this idea seriously, the actions of every human agent in history are simply a part of the universe-wide pattern of events that determines what the laws are for this world. It is then hard to see how the most elegant summary of this pattern, the BSA laws, can be thought of as determiners of human actions. The determination or constraint relations, it would seem, can go one way or the other, not both.
On second thought however it is not so surprising that broadly Humean philosophers such as Ayer, Earman, Lewis and others still see a potential problem for freedom posed by determinism. For even if human actions are part of what makes the laws be what they are, this does not mean that we automatically have freedom of the kind we think we have, particularly freedom to have done otherwise given certain past states of affairs. It is one thing to say that everything occurring in and around my body, and everything everywhere else, conforms to Maxwell's equations and thus the Maxwell equations are genuine exceptionless regularities, and that because they in addition are simple and strong, they turn out to be laws. It is quite another thing to add: thus, I might have chosen to do otherwise at certain points in my life, and if I had, then Maxwell's equations would not have been laws. One might try to defend this claim—unpalatable as it seems intuitively, to ascribe ourselves law-breaking power—but it does not follow directly from a Humean approach to laws of nature. Instead, on such views that deny laws most of their pushiness and explanatory force, questions about determinism and human freedom simply need to be approached afresh.
A second important genre of theories of laws of nature holds that the laws are in some sense necessary. For any such approach, laws are just the sort of pushy explainers that are assumed in the traditional language of physical scientists and free will theorists. But a third and growing class of philosophers holds that (universal, exceptionless, true) laws of nature simply do not exist. Among those who hold this are influential philosophers such as Nancy Cartwright, Bas van Fraassen, and John Dupré. For these philosophers, there is a simple consequence: determinism is a false doctrine. As with the Humean view, this does not mean that concerns about human free action are automatically resolved; instead, they must be addressed afresh in the light of whatever account of physical nature without laws is put forward. See Dupré (2001) for one such discussion.
2.5 Fixed
We can now put our—still vague—pieces together. Determinism requires a world that (a) has a well-defined state or description, at any given time, and (b) laws of nature that are true at all places and times. If we have all these, then if (a) and (b) together logically entail the state of the world at all other times (or, at least, all times later than that given in (a)), the world is deterministic. Logical entailment, in a sense broad enough to encompass mathematical consequence, is the modality behind the determination in “determinism.”
3. The Epistemology of Determinism
How could we ever decide whether our world is deterministic or not? Given that some philosophers and some physicists have held firm views—with many prominent examples on each side—one would think that it should be at least a clearly decidable question. Unfortunately, even this much is not clear, and the epistemology of determinism turns out to be a thorny and multi-faceted issue.
3.1 Laws again
As we saw above, for determinism to be true there have to be some laws of nature. Most philosophers and scientists since the 17th century have indeed thought that there are. But in the face of more recent skepticism, how can it be proven that there are? And if this hurdle can be overcome, don't we have to know, with certainty, precisely what the laws of our world are, in order to tackle the question of determinism's truth or falsity?
The first hurdle can perhaps be overcome by a combination of metaphysical argument and appeal to knowledge we already have of the physical world. Philosophers are currently pursuing this issue actively, in large part due to the efforts of the anti-laws minority. The debate has been most recently framed by Cartwright in The Dappled World (Cartwright 1999) in terms psychologically advantageous to her anti-laws cause. Those who believe in the existence of traditional, universal laws of nature are fundamentalists; those who disbelieve are pluralists. This terminology seems to be becoming standard (see Belot 2001), so the first task in the epistemology of determinism is for fundamentalists to establish the reality of laws of nature (see Hoefer 2002b).
Even if the first hurdle can be overcome, the second, namely establishing precisely what the actual laws are, may seem daunting indeed. In a sense, what we are asking for is precisely what 19th and 20th century physicists sometimes set as their goal: the Final Theory of Everything. But perhaps, as Newton said of establishing the solar system's absolute motion, “the thing is not altogether desperate.” Many physicists in the past 60 years or so have been convinced of determinism's falsity, because they were convinced that (a) whatever the Final Theory is, it will be some recognizable variant of the family of quantum mechanical theories; and (b) all quantum mechanical theories are non-deterministic. Both (a) and (b) are highly debatable, but the point is that one can see how arguments in favor of these positions might be mounted. The same was true in the 19th century, when theorists might have argued that (a) whatever the Final Theory is, it will involve only continuous fluids and solids governed by partial differential equations; and (b) all such theories are deterministic. (Here, (b) is almost certainly false; see Earman (1986),ch. XI). Even if we now are not, we may in future be in a position to mount a credible argument for or against determinism on the grounds of features we think we know the Final Theory must have.
3.2 Experience
Determinism could perhaps also receive direct support—confirmation in the sense of probability-raising, not proof—from experience and experiment. For theories (i.e., potential laws of nature) of the sort we are used to in physics, it is typically the case that if they are deterministic, then to the extent that one can perfectly isolate a system and repeatedly impose identical starting conditions, the subsequent behavior of the systems should also be identical. And in broad terms, this is the case in many domains we are familiar with. Your computer starts up every time you turn it on, and (if you have not changed any files, have no anti-virus software, re-set the date to the same time before shutting down, and so on …) always in exactly the same way, with the same speed and resulting state (until the hard drive fails). The light comes on exactly 32 µsec after the switch closes (until the day the bulb fails). These cases of repeated, reliable behavior obviously require some serious ceteris paribus clauses, are never perfectly identical, and always subject to catastrophic failure at some point. But we tend to think that for the small deviations, probably there are explanations for them in terms of different starting conditions or failed isolation, and for the catastrophic failures, definitely there are explanations in terms of different conditions.
There have even been studies of paradigmatically “chancy” phenomena such as coin-flipping, which show that if starting conditions can be precisely controlled and outside interferences excluded, identical behavior results (see Diaconis, Holmes & Montgomery 2004). Most of these bits of evidence for determinism no longer seem to cut much ice, however, because of faith in quantum mechanics and its indeterminism. Indeterminist physicists and philosophers are ready to acknowledge that macroscopic repeatability is usually obtainable, where phenomena are so large-scale that quantum stochasticity gets washed out. But they would maintain that this repeatability is not to be found in experiments at the microscopic level, and also that at least some failures of repeatability (in your hard drive, or coin-flipping experiments) are genuinely due to quantum indeterminism, not just failures to isolate properly or establish identical initial conditions.
If quantum theories were unquestionably indeterministic, and deterministic theories guaranteed repeatability of a strong form, there could conceivably be further experimental input on the question of determinism's truth or falsity. Unfortunately, the existence of Bohmian quantum theories casts strong doubt on the former point, while chaos theory casts strong doubt on the latter. More will be said about each of these complications below.
3.3 Determinism and Chaos
If the world were governed by strictly deterministic laws, might it still look as though indeterminism reigns? This is one of the difficult questions that chaos theory raises for the epistemology of determinism.
A deterministic chaotic system has, roughly speaking, two salient features: (i) the evolution of the system over a long time period effectively mimics a random or stochastic process—it lacks predictability or computability in some appropriate sense; (ii) two systems with nearly identical initial states will have radically divergent future developments, within a finite (and typically, short) timespan. We will use “randomness” to denote the first feature, and “sensitive dependence on initial conditions” (SDIC) for the latter. Definitions of chaos may focus on either or both of these properties; Batterman (1993) argues that only (ii) provides an appropriate basis for defining chaotic systems.
A simple and very important example of a chaotic system in both randomness and SDIC terms is the Newtonian dynamics of a pool table with a convex obstacle (or obstacles) (Sinai 1970 and others). See Figure 1.
Billiard table with convex obstacle
Figure 1: Billiard table with convex obstacle
The usual idealizing assumptions are made: no friction, perfectly elastic collisions, no outside influences. The ball's trajectory is determined by its initial position and direction of motion. If we imagine a slightly different initial direction, the trajectory will at first be only slightly different. And collisions with the straight walls will not tend to increase very rapidly the difference between trajectories. But collisions with the convex object will have the effect of amplifying the differences. After several collisions with the convex body or bodies, trajectories that started out very close to one another will have become wildly different—SDIC.
In the example of the billiard table, we know that we are starting out with a Newtonian deterministic system—that is how the idealized example is defined. But chaotic dynamical systems come in a great variety of types: discrete and continuous, 2-dimensional, 3-dimensional and higher, particle-based and fluid-flow-based, and so on. Mathematically, we may suppose all of these systems share SDIC. But generally they will also display properties such as unpredictability, non-computability, Kolmogorov-random behavior, and so on—at least when looked at in the right way, or at the right level of detail. This leads to the following epistemic difficulty: if, in nature, we find a type of system that displays some or all of these latter properties, how can we decide which of the following two hypotheses is true?
1. The system is governed by genuinely stochastic, indeterministic laws (or by no laws at all), i.e., its apparent randomness is in fact real randomness.
2. The system is governed by underlying deterministic laws, but is chaotic.
In other words, once one appreciates the varieties of chaotic dynamical systems that exist, mathematically speaking, it starts to look difficult—maybe impossible—for us to ever decide whether apparently random behavior in nature arises from genuine stochasticity, or rather from deterministic chaos. Patrick Suppes (1993, 1996) argues, on the basis of theorems proven by Ornstein (1974 and later) that “There are processes which can equally well be analyzed as deterministic systems of classical mechanics or as indeterministic semi-Markov processes, no matter how many observations are made.” And he concludes that “Deterministic metaphysicians can comfortably hold to their view knowing they cannot be empirically refuted, but so can indeterministic ones as well.” (Suppes 1993, p. 254) For more recent works exploring the extent to which deterministic and indeterministic model systems may be regarded as empirically indistinguishable, see Werndl (2016) and references therein.
There is certainly an interesting problem area here for the epistemology of determinism, but it must be handled with care. It may well be true that there are some deterministic dynamical systems that, when viewed properly, display behavior indistinguishable from that of a genuinely stochastic process. For example, using the billiard table above, if one divides its surface into quadrants and looks at which quadrant the ball is in at 30-second intervals, the resulting sequence is no doubt highly random. But this does not mean that the same system, when viewed in a different way (perhaps at a higher degree of precision) does not cease to look random and instead betray its deterministic nature. If we partition our billiard table into squares 2 centimeters a side and look at which quadrant the ball is in at .1 second intervals, the resulting sequence will be far from random. And finally, of course, if we simply look at the billiard table with our eyes, and see it as a billiard table, there is no obvious way at all to maintain that it may be a truly random process rather than a deterministic dynamical system. (See Winnie (1996) for a nice technical and philosophical discussion of these issues. Winnie explicates Ornstein's and others' results in some detail, and disputes Suppes' philosophical conclusions.)
The dynamical systems usually studied under the label of “chaos” are usually either purely abstract, mathematical systems, or classical Newtonian systems. It is natural to wonder whether chaotic behavior carries over into the realm of systems governed by quantum mechanics as well. Interestingly, it is much harder to find natural correlates of classical chaotic behavior in true quantum systems (see Gutzwiller 1990). Some, at least, of the interpretive difficulties of quantum mechanics would have to be resolved before a meaningful assessment of chaos in quantum mechanics could be achieved. For example, SDIC is hard to find in the Schrödinger evolution of a wavefunction for a system with finite degrees of freedom; but in Bohmian quantum mechanics it is handled quite easily on the basis of particle trajectories (see Dürr, Goldstein and Zhangì 1992).
The popularization of chaos theory in the relatively recent past perhaps made it seem self-evident that nature is full of genuinely chaotic systems. In fact, it is far from self-evident that such systems exist, other than in an approximate sense. Nevertheless, the mathematical exploration of chaos in dynamical systems helps us to understand some of the pitfalls that may attend our efforts to know whether our world is genuinely deterministic or not.
3.4 Metaphysical arguments
Let us suppose that we shall never have the Final Theory of Everything before us—at least in our lifetime—and that we also remain unclear (on physical/experimental grounds) as to whether that Final Theory will be of a type that can or cannot be deterministic. Is there nothing left that could sway our belief toward or against determinism? There is, of course: metaphysical argument. Metaphysical arguments on this issue are not currently very popular. But philosophical fashions change at least twice a century, and grand systemic metaphysics of the Leibnizian sort might one day come back into favor. Conversely, the anti-systemic, anti-fundamentalist metaphysics propounded by Cartwright (1999) might also come to predominate. As likely as not, for the foreseeable future metaphysical argument may be just as good a basis on which to discuss determinism's prospects as any arguments from mathematics or physics.
4. The Status of Determinism in Physical Theories
John Earman's Primer on Determinism (1986) remains the richest storehouse of information on the truth or falsity of determinism in various physical theories, from classical mechanics to quantum mechanics and general relativity. (See also his recent update on the subject, “Aspects of Determinism in Modern Physics” (2007)). Here I will give only a brief discussion of some key issues, referring the reader to Earman (1986) and other resources for more detail. Figuring out whether well-established theories are deterministic or not (or to what extent, if they fall only a bit short) does not do much to help us know whether our world is really governed by deterministic laws; all our current best theories, including General Relativity and the Standard Model of particle physics, are too flawed and ill-understood to be mistaken for anything close to a Final Theory. Nevertheless, as Earman stressed, the exploration is very valuable because of the way it enriches our understanding of the richness and complexity of determinism.
4.1 Classical mechanics
Despite the common belief that classical mechanics (the theory that inspired Laplace in his articulation of determinism) is perfectly deterministic, in fact the theory is rife with possibilities for determinism to break down. One class of problems arises due to the absence of an upper bound on the velocities of moving objects. Below we see the trajectory of an object that is accelerated unboundedly, its velocity becoming in effect infinite in a finite time. See Figure 2:
object accelerates to reach infinity
Figure 2: An object accelerates so as to reach spatial infinity in a finite time
By the time t = t*, the object has literally disappeared from the world—its world-line never reaches the t = t* surface. (Never mind how the object gets accelerated in this way; there are mechanisms that are perfectly consistent with classical mechanics that can do the job. In fact, Xia (1992) showed that such acceleration can be accomplished by gravitational forces from only 5 finite objects, without collisions. No mechanism is shown in these diagrams.) This “escape to infinity,” while disturbing, does not yet look like a violation of determinism. But now recall that classical mechanics is time-symmetric: any model has a time-inverse, which is also a consistent model of the theory. The time-inverse of our escaping body is playfully called a “space invader.”
space invader comes from infinity
Figure 3: A ‘space invader’ comes in from spatial infinity
Clearly, a world with a space invader does fail to be deterministic. Before t = t*, there was nothing in the state of things to enable the prediction of the appearance of the invader at t = t* +.[2] One might think that the infinity of space is to blame for this strange behavior, but this is not obviously correct. In finite, “rolled-up” or cylindrical versions of Newtonian space-time space-invader trajectories can be constructed, though whether a “reasonable” mechanism to power them exists is not clear.[3]
A second class of determinism-breaking models can be constructed on the basis of collision phenomena. The first problem is that of multiple-particle collisions for which Newtonian particle mechanics simply does not have a prescription for what happens. (Consider three identical point-particles approaching each other at 120 degree angles and colliding simultaneously. That they bounce back along their approach trajectories is possible; but it is equally possible for them to bounce in other directions (again with 120 degree angles between their paths), so long as momentum conservation is respected.)
Moreover, there is a burgeoning literature of physical or quasi-physical systems, usually set in the context of classical physics, that carry out supertasks (see Earman and Norton (1998) and the entry on supertasks for a review). Frequently, the puzzle presented is to decide, on the basis of the well-defined behavior before time t = a, what state the system will be in at t = a itself. A failure of CM to dictate a well-defined result can then be seen as a failure of determinism.
In supertasks, one frequently encounters infinite numbers of particles, infinite (or unbounded) mass densities, and other dubious infinitary phenomena. Coupled with some of the other breakdowns of determinism in CM, one begins to get a sense that most, if not all, breakdowns of determinism rely on some combination of the following set of (physically) dubious mathematical notions: {infinite space; unbounded velocity; continuity; point-particles; singular fields}. The trouble is, it is difficult to imagine any recognizable physics (much less CM) that eschews everything in the set.
Norton's dome
Figure 4: A ball may spontaneously start sliding down this dome, with no violation of Newton's laws. (Reproduced courtesy of John D. Norton and Philosopher's Imprint)
Finally, an elegant example of apparent violation of determinism in classical physics has been created by John Norton (2003). As illustrated in Figure 4, imagine a ball sitting at the apex of a frictionless dome whose equation is specified as a function of radial distance from the apex point. This rest-state is our initial condition for the system; what should its future behavior be? Clearly one solution is for the ball to remain at rest at the apex indefinitely.
But curiously, this is not the only solution under standard Newtonian laws. The ball may also start into motion sliding down the dome—at any moment in time, and in any radial direction. This example displays “uncaused motion” without, Norton argues, any violation of Newton's laws, including the First Law. And it does not, unlike some supertask examples, require an infinity of particles. Still, many philosophers are uncomfortable with the moral Norton draws from his dome example, and point out reasons for questioning the dome's status as a Newtonian system (see e.g. Malament (2007)).
4.2 Special Relativistic physics
Two features of special relativistic physics make it perhaps the most hospitable environment for determinism of any major theoretical context: the fact that no process or signal can travel faster than the speed of light, and the static, unchanging spacetime structure. The former feature, including a prohibition against tachyons (hypothetical particles travelling faster than light)[4]), rules out space invaders and other unbounded-velocity systems. The latter feature makes the space-time itself nice and stable and non-singular—unlike the dynamic space-time of General Relativity, as we shall see below. For source-free electromagnetic fields in special-relativistic space-time, a nice form of Laplacean determinism is provable. Unfortunately, interesting physics needs more than source-free electromagnetic fields. Earman (1986) ch. IV surveys in depth the pitfalls for determinism that arise once things are allowed to get more interesting (e.g. by the addition of particles interacting gravitationally).
4.3 General Relativity (GTR)
Defining an appropriate form of determinism for the context of general relativistic physics is extremely difficult, due to both foundational interpretive issues and the plethora of weirdly-shaped space-time models allowed by the theory's field equations. The simplest way of treating the issue of determinism in GTR would be to state flatly: determinism fails, frequently, and in some of the most interesting models. Here we will briefly describe some of the most important challenges that arise for determinism, directing the reader yet again to Earman (1986), and also Earman (1995) for more depth.
4.3.1 Determinism and manifold points
In GTR, we specify a model of the universe by giving a triple of three mathematical objects, <M, g,T>. M represents a continuous “manifold”: that means a sort of unstructured space (-time), made up of individual points and having smoothness or continuity, dimensionality (usually, 4-dimensional), and global topology, but no further structure. What is the further structure a space-time needs? Typically, at least, we expect the time-direction to be distinguished from space-directions; and we expect there to be well-defined distances between distinct points; and also a determinate geometry (making certain continuous paths in M be straight lines, etc.). All of this extra structure is coded into g, the metric field. So M and g together represent space-time. T represents the matter and energy content distributed around in space-time (if any, of course).
For mathematical reasons not relevant here, it turns out to be possible to take a given model spacetime and perform a mathematical operation called a “hole diffeomorphism” h* on it; the diffeomorphism's effect is to shift around the matter content T and the metric g relative to the continuous manifold M.[5] If the diffeomorphism is chosen appropriately, it can move around T and g after a certain time t = 0, but leave everything alone before that time. Thus, the new model represents the matter content (now h* T) and the metric (h*g) as differently located relative to the points of M making up space-time. Yet, the new model is also a perfectly valid model of the theory. This looks on the face of it like a form of indeterminism: GTR's equations do not specify how things will be distributed in space-time in the future, even when the past before a given time t is held fixed. See Figure 5:
Holediffeomorphismshifts contents of spacetime
Figure 5: “Hole” diffeomorphism shifts contents of spacetime
Usually the shift is confined to a finite region called the hole (for historical reasons). Then it is easy to see that the state of the world at time t = 0 (and all the history that came before) does not suffice to fix whether the future will be that of our first model, or its shifted counterpart in which events inside the hole are different.
This is a form of indeterminism first highlighted by Earman and Norton (1987) as an interpretive philosophical difficulty for realism about GTR's description of the world, especially the point manifold M. They showed that realism about the manifold as a part of the furniture of the universe (which they called “manifold substantivalism”) commits us to an automatic indeterminism in GTR (as described above), and they argued that this is unacceptable. (See the hole argument and Hoefer (1996) for one response on behalf of the space-time realist, and discussion of other responses.) For now, we will simply note that this indeterminism, unlike most others we are discussing in this section, is empirically undetectable: our two models <M, g, T> and the shifted model <M, h*g, h*T> are empirically indistinguishable.
4.3.2 Singularities
The separation of space-time structures into manifold and metric (or connection) facilitates mathematical clarity in many ways, but also opens up Pandora's box when it comes to determinism. The indeterminism of the Earman and Norton hole argument is only the tip of the iceberg; singularities make up much of the rest of the berg. In general terms, a singularity can be thought of as a “place where things go bad” in one way or another in the space-time model. For example, near the center of a Schwarzschild black hole, curvature increases without bound, and at the center itself it is undefined, which means that Einstein's equations cannot be said to hold, which means (arguably) that this point does not exist as a part of the space-time at all! Some specific examples are clear, but giving a general definition of a singularity, like defining determinism itself in GTR, is a vexed issue (see Earman (1995) for an extended treatment; Callender and Hoefer (2001) gives a brief overview). We will not attempt here to catalog the various definitions and types of singularity.
Different types of singularity bring different types of threat to determinism. In the case of ordinary black holes, mentioned above, all is well outside the so- called “event horizon”, which is the spherical surface defining the black hole: once a body or light signal passes through the event horizon to the interior region of the black hole, it can never escape again. Generally, no violation of determinism looms outside the event horizon; but what about inside? Some black hole models have so-called “Cauchy horizons” inside the event horizon, i.e., surfaces beyond which determinism breaks down.
Another way for a model spacetime to be singular is to have points or regions go missing, in some cases by simple excision. Perhaps the most dramatic form of this involves taking a nice model with a space-like surface t = E (i.e., a well-defined part of the space-time that can be considered “the state state of the world at time E”), and cutting out and throwing away this surface and all points temporally later. The resulting spacetime satisfies Einstein's equations; but, unfortunately for any inhabitants, the universe comes to a sudden and unpredictable end at time E. This is too trivial a move to be considered a real threat to determinism in GTR; we can impose a reasonable requirement that space-time not “run out” in this way without some physical reason (the spacetime should be “maximally extended”). For discussion of precise versions of such a requirement, and whether they succeed in eliminating unwanted singularities, see Earman (1995, chapter 2).
The most problematic kinds of singularities, in terms of determinism, are naked singularities (singularities not hidden behind an event horizon). When a singularity forms from gravitational collapse, the usual model of such a process involves the formation of an event horizon (i.e. a black hole). A universe with an ordinary black hole has a singularity, but as noted above, (outside the event horizon at least) nothing unpredictable happens as a result. A naked singularity, by contrast, has no such protective barrier. In much the way that anything can disappear by falling into an excised-region singularity, or appear out of a white hole (white holes themselves are, in fact, technically naked singularities), there is the worry that anything at all could pop out of a naked singularity, without warning (hence, violating determinism en passant). While most white hole models have Cauchy surfaces and are thus arguably deterministic, other naked singularity models lack this property. Physicists disturbed by the unpredictable potentialities of such singularities have worked to try to prove various cosmic censorship hypotheses that show—under (hopefully) plausible physical assumptions—that such things do not arise by stellar collapse in GTR (and hence are not liable to come into existence in our world). To date no very general and convincing forms of the hypothesis have been proven, so the prospects for determinism in GTR as a mathematical theory do not look terribly good.
4.4 Quantum mechanics
As indicated above, QM is widely thought to be a strongly non-deterministic theory. Popular belief (even among most physicists) holds that phenomena such as radioactive decay, photon emission and absorption, and many others are such that only a probabilistic description of them can be given. The theory does not say what happens in a given case, but only says what the probabilities of various results are. So, for example, according to QM the fullest description possible of a radium atom (or a chunk of radium, for that matter), does not suffice to determine when a given atom will decay, nor how many atoms in the chunk will have decayed at any given time. The theory gives only the probabilities for a decay (or a number of decays) to happen within a given span of time. Einstein and others perhaps thought that this was a defect of the theory that should eventually be removed, by a supplemental hidden variable theory[6] that restores determinism; but subsequent work showed that no such hidden variables account could exist. At the microscopic level the world is ultimately mysterious and chancy.
So goes the story; but like much popular wisdom, it is partly mistaken and/or misleading. Ironically, quantum mechanics is one of the best prospects for a genuinely deterministic theory in modern times! Everything hinges on what interpretational and philosophical decisions one adopts. The fundamental law at the heart of non-relativistic QM is the Schrödinger equation. The evolution of a wavefunction describing a physical system under this equation is normally taken to be perfectly deterministic.[7] If one adopts an interpretation of QM according to which that's it—i.e., nothing ever interrupts Schrödinger evolution, and the wavefunctions governed by the equation tell the complete physical story—then quantum mechanics is a perfectly deterministic theory. There are several interpretations that physicists and philosophers have given of QM which go this way. (See the entry on quantum mechanics.)
More commonly—and this is part of the basis for the popular wisdom—physicists have resolved the quantum measurement problem by postulating that some process of “collapse of the wavefunction” occurs during measurements or observations that interrupts Schrödinger evolution. The collapse process is usually postulated to be indeterministic, with probabilities for various outcomes, via Born's rule, calculable on the basis of a system's wavefunction. The once-standard Copenhagen interpretation of QM posits such a collapse. It has the virtue of solving certain problems such as the infamous Schrödinger's cat paradox, but few philosophers or physicists can take it very seriously unless they are instrumentalists about the theory. The reason is simple: the collapse process is not physically well-defined, is characterised in terms of an anthropomorphic notion (measurement)and feels too ad hoc to be a fundamental part of nature's laws.[8]
In 1952 David Bohm created an alternative interpretation of non relativistic QM—perhaps better thought of as an alternative theory—that realizes Einstein's dream of a hidden variable theory, restoring determinism and definiteness to micro-reality. In Bohmian quantum mechanics, unlike other interpretations, it is postulated that all particles have, at all times, a definite position and velocity. In addition to the Schrödinger equation, Bohm posited a guidance equation that determines, on the basis of the system's wavefunction and particles' initial positions and velocities, what their future positions and velocities should be. As much as any classical theory of point particles moving under force fields, then, Bohm's theory is deterministic. Amazingly, he was also able to show that, as long as the statistical distribution of initial positions and velocities of particles are chosen so as to meet a “quantum equilibrium” condition, his theory is empirically equivalent to standard Copenhagen QM. In one sense this is a philosopher's nightmare: with genuine empirical equivalence as strong as Bohm obtained, it seems experimental evidence can never tell us which description of reality is correct. (Fortunately, we can safely assume that neither is perfectly correct, and hope that our Final Theory has no such empirically equivalent rivals.) In other senses, the Bohm theory is a philosopher's dream come true, eliminating much (but not all) of the weirdness of standard QM and restoring determinism to the physics of atoms and photons. The interested reader can find out more from the link above, and references therein.
This small survey of determinism's status in some prominent physical theories, as indicated above, does not really tell us anything about whether determinism is true of our world. Instead, it raises a couple of further disturbing possibilities for the time when we do have the Final Theory before us (if such time ever comes): first, we may have difficulty establishing whether the Final Theory is deterministic or not—depending on whether the theory comes loaded with unsolved interpretational or mathematical puzzles. Second, we may have reason to worry that the Final Theory, if indeterministic, has an empirically equivalent yet deterministic rival (as illustrated by Bohmian quantum mechanics.)
5. Chance and Determinism
Some philosophers maintain that if determinism holds in our world, then there are no objective chances in our world. And often the word ‘chance’ here is taken to be synonymous with 'probability', so these philosophers maintain that there are no non-trivial objective probabilities for events in our world. (The caveat “non-trivial” is added here because on some accounts, under determinism, all future events that actually happen have probability, conditional on past history, equal to 1, and future events that do not happen have probability equal to zero. Non-trivial probabilities are probabilities strictly between zero and one.) Conversely, it is often held, if there are laws of nature that are irreducibly probabilistic, determinism must be false. (Some philosophers would go on to add that such irreducibly probabilistic laws are the basis of whatever genuine objective chances obtain in our world.)
The discussion of quantum mechanics in section 4 shows that it may be difficult to know whether a physical theory postulates genuinely irreducible probabilistic laws or not. If a Bohmian version of QM is correct, then the probabilities dictated by the Born rule are not irreducible. If that is the case, should we say that the probabilities dictated by quantum mechanics are not objective? Or should we say that we need to distinguish ‘chance’ and ‘probabillity’ after all—and hold that not all objective probabilities should be thought of as objective chances? The first option may seem hard to swallow, given the many-decimal-place accuracy with which such probability-based quantities as half-lives and cross-sections can be reliably predicted and verified experimentally with QM.
Whether objective chance and determinism are really incompatible or not may depend on what view of the nature of laws is adopted. On a “pushy explainers” view of laws such as that defended by Maudlin (2007), probabilistic laws are interpreted as irreducible dynamical transition-chances between allowed physical states, and the incompatibility of such laws with determinism is immediate. But what should a defender of a Humean view of laws, such as the BSA theory (section 2.4 above), say about probabilistic laws? The first thing that needs to be done is explain how probabilistic laws can fit into the BSA account at all, and this requires modification or expansion of the view, since as first presented the only candidates for laws of nature are true universal generalizations. If ‘probability’ were a univocal, clearly understood notion then this might be simple: We allow universal generalizations whose logical form is something like: “Whenever conditions Y obtain, Pr(A) = x”. But it is not at all clear how the meaning of ‘Pr’ should be understood in such a generalization; and it is even less clear what features the Humean pattern of actual events must have, for such a generalization to be held true. (See the entry on interpretations of probability and Lewis (1994).)
Humeans about laws believe that what laws there are is a matter of what patterns are there to be discerned in the overall mosaic of events that happen in the history of the world. It seems plausible enough that the patterns to be discerned may include not only strict associations (whenever X, Y), but also stable statistical associations. If the laws of nature can include either sort of association, a natural question to ask seems to be: why can't there be non-probabilistic laws strong enough to ensure determinism, and on top of them, probabilistic laws as well? If a Humean wanted to capture the laws not only of fundamental theories, but also non-fundamental branches of physics such as (classical) statistical mechanics, such a peaceful coexistence of deterministic laws plus further probabilistic laws would seem to be desirable. Loewer (2004) and Frigg & Hoefer (2015) offer forms of this peaceful coexistence that can be achieved within Lewis' version of the BSA account of laws.
6. Determinism and Human Action
In the introduction, we noted the threat that determinism seems to pose to human free agency. It is hard to see how, if the state of the world 1000 years ago fixes everything I do during my life, I can meaningfully say that I am a free agent, the author of my own actions, which I could have freely chosen to perform differently. After all, I have neither the power to change the laws of nature, nor to change the past! So in what sense can I attribute freedom of choice to myself?
Philosophers have not lacked ingenuity in devising answers to this question. There is a long tradition of compatibilists arguing that freedom is fully compatible with physical determinism; a prominent recent defender is John Fischer (1994, 2012). Hume went so far as to argue that determinism is a necessary condition for freedom—or at least, he argued that some causality principle along the lines of “same cause, same effect” is required. There have been equally numerous and vigorous responses by those who are not convinced. Can a clear understanding of what determinism is, and how it tends to succeed or fail in real physical theories, shed any light on the controversy?
Physics, particularly 20th century physics, does have one lesson to impart to the free will debate; a lesson about the relationship between time and determinism. Recall that we noticed that the fundamental theories we are familiar with, if they are deterministic at all, are time-symmetrically deterministic. That is, earlier states of the world can be seen as fixing all later states; but equally, later states can be seen as fixing all earlier states. We tend to focus only on the former relationship, but we are not led to do so by the theories themselves.
Nor does 20th (21st) -century physics countenance the idea that there is anything ontologically special about the past, as opposed to the present and the future. In fact, it fails to use these categories in any respect, and teaches that in some senses they are probably illusory.[9] So there is no support in physics for the idea that the past is “fixed” in some way that the present and future are not, or that it has some ontological power to constrain our actions that the present and future do not have. It is not hard to uncover the reasons why we naturally do tend to think of the past as special, and assume that both physical causation and physical explanation work only in the past present/future direction (see the entry on thermodynamic asymmetry in time). But these pragmatic matters have nothing to do with fundamental determinism. If we shake loose from the tendency to see the past as special, when it comes to the relationships of determination, it may prove possible to think of a deterministic world as one in which each part bears a determining—or partial-determining—relation to other parts, but in which no particular part (region of space-time, event or set of events, ...) has a special, privileged determining role that undercuts the others. Hoefer (2002a) and Ismael (2016) use such considerations to argue in a novel way for the compatiblity of determinism with human free agency.
• Batterman, R. B., 1993, “Defining Chaos,” Philosophy of Science, 60: 43–66.
• Bishop, R. C., 2002, “Deterministic and Indeterministic Descriptions,” in Between Chance and Choice, H. Atmanspacher and R. Bishop (eds.), Imprint Academic, 5–31.
• Butterfield, J., 1998, “Determinism and Indeterminism,” in Routledge Encyclopedia of Philosophy, E. Craig (ed.), London: Routledge.
• Callender, C., 2000, “Shedding Light on Time,” Philosophy of Science (Proceedings of PSA 1998), 67: S587–S599.
• Callender, C., and Hoefer, C., 2001, “Philosophy of Space-time Physics,” in The Blackwell Guide to the Philosophy of Science, P. Machamer and M. Silberstein (eds), Oxford: Blackwell, pp. 173–198.
• Cartwright, N., 1999, The Dappled World, Cambridge: Cambridge University Press.
• Dupré, J., 2001, Human Nature and the Limits of Science, Oxford: Oxford University Press.
• Dürr, D., Goldstein, S., and Zanghì, N., 1992, “Quantum Chaos, Classical Randomness, and Bohmian Mechanics,” Journal of Statistical Physics, 68: 259–270. [Preprint available online in gzip'ed Postscript.]
• Earman, J., 1984: “Laws of Nature: The Empiricist Challenge,” in R. J. Bogdan, ed.,'D.H.Armstrong', Dortrecht: Reidel, pp. 191–223.
• –––, 1986, A Primer on Determinism, Dordrecht: Reidel.
• –––, 1995, Bangs, Crunches, Whimpers, and Shrieks: Singularities and Acausalities in Relativistic Spacetimes, New York: Oxford University Press.
• Earman, J., and Norton, J., 1987, “What Price Spacetime Substantivalism: the Hole Story,” British Journal for the Philosophy of Science, 38: 515–525.
• –––, 1998, “Comments on Laraudogoitia's ‘Classical Particle Dynamics, Indeterminism and a Supertask’,” British Journal for the Philosophy of Science, 49: 123–133.
• Fisher, J., 1994, The Metaphysics of Free Will, Oxford: Blackwell Publishers.
• –––, 2012, Deep Control: Essays on Free Will and Value, New York: Oxford University Press.
• Ford, J., 1989, “What is chaos, the we should be mindful of it?” in The New Physics, P. Davies (ed.), Cambridge: Cambridge University Press, 348–372.
• Frigg, R., and Hoefer, C., 2015, “The Best Humean System for Statistical Mechanics,” Erkenntnis, 80 (3 Supplement): 551–574.
• Gisin, N., 1991, “Propensities in a Non-Deterministic Physics”, Synthese, 89: 287–297.
• Gutzwiller, M., 1990, Chaos in Classical and Quantum Mechanics, New York: Springer-Verlag.
• Hitchcock, C., 1999, “Contrastive Explanation and the Demons of Determinism,” British Journal of the Philosophy of Science, 50: 585–612.
• Hoefer, C., 1996, “The Metaphysics of Spacetime Substantivalism,” The Journal of Philosophy, 93: 5–27.
• –––, 2002a, “Freedom From the Inside Out,” in Time, Reality and Experience, C. Callender (ed.), Cambridge: Cambridge University Press, pp. 201–222.
• –––, 2002b, “For Fundamentalism,” Philosophy of Science v. 70, no. 5 (PSA 2002 Proceedings), pp. 1401–1412.
• Hutchison, K. 1993, “Is Classical Mechanics Really Time-reversible and Deterministic?” British Journal of the Philosophy of Science, 44: 307–323.
• Ismael, J. 2016, How Physics Makes Us Free, Oxford: Oxford University Press.
• Laplace, P., 1820, Essai Philosophique sur les Probabilités forming the introduction to his Théorie Analytique des Probabilités, Paris: V Courcier; repr. F.W. Truscott and F.L. Emory (trans.), A Philosophical Essay on Probabilities, New York: Dover, 1951 .
• Leiber, T., 1998, “On the Actual Impact of Deterministic Chaos,” Synthese, 113: 357–379.
• Lewis, D., 1973,Counterfactuals, Oxford: Blackwell.
• –––, 1994, “Humean Supervenience Debugged,” Mind, 103: 473–490.
• Loewer, B., 2004, “Determinism and Chance,” Studies in History and Philosophy of Modern Physics, 32: 609–620.
• Malament, D., 2008, “Norton's Slippery Slope,” Philosophy of Science, vol. 75, no. 4, pp. 799–816.
• Maudlin, T. 2007, The Metaphysics Within Physics, Oxford: Oxford University Press.
• Melia, J. 1999, “Holes, Haecceitism and Two Conceptions od Determinism,” British Journal of the Philosophy of Science, 50: 639–664.
• Mellor, D. H. 1995, The Facts of Causation, London: Routledge.
• Norton, J.D., 2003, “Causation as Folk Science,” Philosopher's Imprint, 3 (4): [Available online].
• Ornstein, D. S., 1974, Ergodic Theory, Randomness, and Dynamical Systems, New Haven: Yale University Press.
• Popper, K. 1982, The Open Universe: an argument for indeterminism, London: Rutledge (Taylor & Francis Group).
• Ruelle, D., 1991, Chance and Chaos, London: Penguin.
• Russell, B., 1912, “On the Notion of Cause,” Proceedings of the Aristotelian Society, 13: 1–26.
• Shanks, N., 1991, “Probabilistic physics and the metaphysics of time,” South African Journal of Philosophy, 10: 37–44.
• Sinai, Ya.G., 1970, “Dynamical systems with elastic reflections,” Russ. Math. Surveys 25: 137–189.
• Suppes, P., 1993, “The Transcendental Character of Determinism,” Midwest Studies in Philosophy, 18: 242–257.
• –––, 1999, “The Noninvariance of Deterministic Causal Models,” Synthese, 121: 181–198.
• Suppes, P. and M. Zanotti, 1996, Foundations of Probability with Applications. New York: Cambridge University Press.
• van Fraassen, B., 1989, Laws and Symmetry, Oxford: Clarendon Press.
• Van Kampen, N. G., 1991, “Determinism and Predictability,” Synthese, 89: 273–281.
• Werndl, C. 2016, The Oxford Handbook of Philosophy of Science. Oxford: Oxford University Press. Online at, December 2015.
• Winnie, J. A., 1996, “Deterministic Chaos and the Nature of Chance,” in The Cosmos of Science—Essays of Exploration, J. Earman and J. Norton (eds.), Pittsburgh: University of Pitsburgh Press, pp. 299–324.
• Xia, Z., 1992, “The existence of noncollision singularities in newtonian systems,” Annals of Mathematics, 135: 411–468.
The author would like to acknowledge the invaluable help of John Norton in the preparation of this entry. Thanks also to A. Ilhamy Amiry for bringing to my attention some errors in an earlier version of this entry.
Copyright © 2016 by
Carl Hoefer <>
The Encyclopedia Now Needs Your Support
Please Read How You Can Help Keep the Encyclopedia Free |
5a0b768cc3765d91 | The Imaginary Collapse of the Wavefunction
Posted on 22 March 2009
The so-called “collapse” of the wave function in quantum theory is often illustrated by the wave/particle duality. When a photon propagates through a double-slit apparatus, it behaves like a wave. Yet, if it is observed, the non-local wave is collapsed into a single localized particle. However, both theory and experiment show that this is not a clear-cut either/or distinction, as it is misleadingly presented in traditional discussions of the double slit experiment. The interference pattern is not simply there or not, but is gradually deteriorated as more information about which slit the particle went through can be extracted from the photon measurement. This suggests that, in general, there is never any discontinuous or sudden collapse of the wavefunction. All that is ever happening is that we’re pushing information around with measurement interactions in a completely continuous (unitary) way.
Not only is collapse of the wave function totally unverifiable and nonphysical, but another big problem with collapse is that it is in blatant violation of the Schrödinger equation! Any other scientific hypothesis that both violates known laws of physics and is not verifiable would normally be immediately rejected as pseudo-science. Why, then, has the notion of collapse stuck? Perhaps because one consequence of rejecting collapse would seem to be that it would lead us inevitably to the many worlds interpretation. Strange as the many worlds interpretation may be, however, it does have the virtue of being consistent with the laws of physics, at least as we know them so far.
The many worlds interpretation is often rejected as outrageous because it seems to imply that all the separate “worlds” have some actual existence, just like ours. But, it’s more like none of the “worlds” have actual existence, including ours. To make an analogy with the theory of relativity, it’s not like there are many actual velocities of the earth in space, each existing as its own separate actualized “world.” Rather, it’s that the earth has no actual objectively existing velocity at all. Velocity only has meaning relative to a reference frame, and reality does not have any privileged reference frame. We happen to observe things in the reference frame of the Earth where that velocity is zero. If we were on the Moon, things would be different. Is there really some mystery here? How is this so different from quantum theory? The original “relative state” formulation of quantum theory seems to be in line with this view, and calling it a “many worlds” theory is just as misleading as calling relativity theory a “many worlds” theory. It’s just “many reference frames” and one world. One might complain that the “one world” is a strange one, but that’s no less true in relativity theory where nothing has any objective mass, length, time, etc. The only objective realities are the four-dimensional invariants. These are almost as weird as coherent superpositions.
It is good to remember that physical theories in general are abstractions, describing a reality that is beyond our direct experience. We experience our immediate sensations of sight, sound, etc., and never directly experience the abstractions of “atoms” or “fields” which are only indirectly inferred from experience. (The same is actually true of a “chair” or “rock” as well.) These may be useful abstractions, but we never actually experience them directly, and can never know if they really exist the way we think. In fact, we don’t really know that they exist at all. We could be a brain in a vat or having a lucid dream right now. Science tries to balance the belief in some objective reality with the fact that we can never know the thing in itself. As Heisenberg wrote,
It is actually more radical than Heisenberg suggests. Consider again the double-slit experiment. A simple photon which “measures” which slit the particle went through does not actually collapse the wave function to be localized in just one region of space. It merely entangles itself with the system. Provided no decoherence has taken place so that the coherence of the original system is not washed out in many degrees of freedom of the measurement system, then there is no sense in which an irreversible measurement interaction has taken place. So one is still free to decide what will ultimately be measured. Because there has not been any interaction with a particular well-defined measurement apparatus (by which I mean a device that involves decoherence) the attributes of the system are likewise still undefined.
The above situation with regard to a quantum system is analogous to not having defined any particular well-defined reference frame in relativity. If I do not specify a reference frame for an observation of a monolith floating in space, then it has no definite well-defined value for various properties such as velocity, mass and length. Once the reference frame is specified, however, then one can meaningfully talk about definite values for these quantities. Similarly, once one specifies a particular measurement apparatus (that involves decoherence), then one can say there is a well-defined meaning to talking about certain properties. The coherence is lost and there is no practical possibility to erase that measurement choice after the interaction with the measurement apparatus and choose instead to measure a complementary observable. And all observers will agree on what is measured.
In connection with this, Pauli has this interesting statement:
Just as in the theory of relativity a group of mathematical transformations connects all possible coordinate systems, so in quantum mechanics a group of mathematical transformations connects the possible experimental arrangements.
And Bohr writes:
In neither case [of quantum theory or relativity theory] does the appropriate widening of our conceptual framework imply any appeal to the observing subject, which would hinder unambiguous communication of experience. In relativistic argumentation, such objectivity is secured by due regard to the dependence of the phenomena on the reference frame of the observer, while in complementary description all subjectivity is avoided by proper attention to the circumstances required for the well-defined use of elementary physical concepts.
Admittedly, the analogy with relativity only goes so far. In the case of relativity, the choice of reference frame is sufficient to provide a unique and definite value for physical attributes. In quantum systems, on the other hand, although the interaction with a particular decohering measurement apparatus gives a particular observable well-defined meaning, it still does not result in a definite value (i.e., the wavefunction is not collapsed). The analogy with relativity, it seems, is a similarity between the choice of reference frame and the choice of a particular decohering measurement apparatus. These choices are sufficient to give well-defined meaning to certain physical quantities. The difference seems to be that in quantum theory, even though the quantities may have well-defined meaning, they still have not been actualized. For example, once the atom has interacted with the Geiger counter and poison bottle, it makes sense to say that Schrödiner’s cat is either alive or dead (there is no longer any coherence that would allow one to perform a measurement of a complementary observable to the alive/dead observable).
The actualization of a particular value could be described in terms of the many worlds interpretation as the choice of which world “you” get identified with. In relativity, though, one can actually imagine something analogous, but we don’t regard it as a mystery for some reason: The description of the world according to relativity does not specify which moment in spacetime we should be experiencing as “here and now”. So, what determines which point in Minkowski space is “actualized” in our experience as here and now? Why should we experience this here and now rather than some other? This question seems quite similar to the question of why we experience ourselves in one of the many worlds as opposed to some other. What “collapses” us into a particular here and now? Clearly, there is no such collapse, just as there is no collapse in quantum theory. The theory is an abstraction from the here and now. If we get confused and think that we really live in the abstraction, then we become perplexed at how the specific here and now is mysteriously “collapsed” from all the possibilities in the general, abstract world we’ve dreamed up.
There is also an interesting similarity between the role of decoherence, which effectively cuts us off from ever detecting any of the worlds that have decohered from ours, and space-like separation in relativity. There are spacelike separated regions of spacetime that can not have any interaction or communication with us. So, what justification is there for saying that they exist at all? They can never be observed or verified to exist. Is this really any different than the other branches of the universal wave function that we can no longer detect because of decoherence?
Posted in: Philosophy, Science |
d6d44b30a6b3b60c | Copenhagen interpretation
From Wikipedia, the free encyclopedia
Jump to: navigation, search
The Copenhagen interpretation is one of the earliest and most commonly taught interpretations of quantum mechanics.[1] It holds that quantum mechanics does not yield a description of an objective reality but deals only with probabilities of observing, or measuring, various aspects of energy quanta, entities that fit neither the classical idea of particles nor the classical idea of waves. The act of measurement causes the set of probabilities to immediately and randomly assume only one of the possible values. This feature of mathematics is known as wavefunction collapse. The essential concepts of the interpretation were devised by Niels Bohr, Werner Heisenberg and others in the years 1924–27.
According to John G. Cramer, "Despite an extensive literature which refers to, discusses, and criticizes the Copenhagen interpretation of quantum mechanics, nowhere does there seem to be any concise statement which defines the full Copenhagen interpretation."[2]
Classical physics draws a distinction between particles and waves, holding that only the latter exhibit waveform characteristics, whereas quantum mechanics is based on the observation that matter has both wave and particle aspects and postulates that the state of every subatomic particle can be described by a wavefunction—a mathematical expression used to calculate the probability that the particle, if measured, will be in a given location or state.
In the early work of Max Planck, Albert Einstein, and Niels Bohr, the existence of energy in discrete quantities had been postulated in order to explain phenomena (such as the spectrum of black-body radiation, the photoelectric effect, and the stability and spectrum of atoms). These phenomena had eluded explanation by classical physics and even appeared to be in contradiction with it. While elementary particles show predictable properties in many experiments, they become highly unpredictable in others, such as when attempting to measure individual particle trajectories through a simple physical apparatus.
The Copenhagen interpretation is an attempt to explain the mathematical formulations of quantum mechanics and the corresponding experimental results. Early twentieth-century experiments on the physics of very small-scale phenomena led to the discovery of phenomena which cannot be predicted on the basis of classical physics, and to the development of new models that described them very accurately. These models could not easily be reconciled with the way objects are observed to behave on the macro scale of everyday human life. Their predictions often appeared counter-intuitive and disturbing to many physicists, including the developers of those models.
Origin of the term[edit]
Werner Heisenberg had been an assistant to Niels Bohr at his institute in Copenhagen during part of the 1920s, when they helped originate quantum mechanical theory. In 1929, Heisenberg gave a series of invited lectures at the University of Chicago explaining the new field of quantum mechanics. The lectures then served as the basis for his textbook, The Physical Principles of the Quantum Theory, published in 1930.[3] In the book's preface, Heisenberg wrote:
On the whole the book contains nothing that is not to be found in previous publications, particularly in the investigations of Bohr. The purpose of the book seems to me to be fulfilled if it contributes somewhat to the diffusion of that 'Kopenhagener Geist der Quantentheorie' [i.e., Copenhagen spirit of quantum theory] if I may so express myself, which has directed the entire development of modern atomic physics.
The term 'Copenhagen interpretation' suggests something more than just a spirit, such as some definite set of rules for interpreting the mathematical formalism of quantum mechanics, presumably dating back to the 1920s. However, no such text exists, apart from some informal popular lectures by Bohr and Heisenberg, which contradict each other on several important issues. It appears that the particular term, with its more definite sense, was coined by Heisenberg in the 1950s,[4] while criticizing alternate "interpretations" (e.g., David Bohm's[5]) that had been developed.[6] Lectures with the titles 'The Copenhagen Interpretation of Quantum Theory' and 'Criticisms and Counterproposals to the Copenhagen Interpretation', that Heisenberg delivered in 1955, are reprinted in the collection Physics and Philosophy.[7] Before the book was released for sale, Heisenberg privately expressed regret for having used the term, due to its suggestion of the existence of other interpretations, that he considered to be "nonsense".[8]
Because it consists of the views developed by a number of scientists and philosophers during the second quarter of the 20th Century, there is no definitive statement of the Copenhagen interpretation.[9] Thus, various ideas have been associated with it; Asher Peres remarked that very different, sometimes opposite, views are presented as "the Copenhagen interpretation" by different authors.[10] Nonetheless, there are several basic principles that are generally accepted as being part of the interpretation:
1. A system is completely described by a wave function \Psi, representing the state of the system, which evolves smoothly in time, except when a measurement is made, at which point it instantaneously collapses to an eigenstate of the observable that is measured.
2. The description of nature is essentially probabilistic, with the probability of a given outcome of a measurement given by the square of the modulus of the amplitude of the wave function. (The Born rule, after Max Born)
3. It is not possible to know the value of all the properties of the system at the same time; those properties that are not known exactly must be described by probabilities. (Heisenberg's uncertainty principle)
4. Matter exhibits a wave–particle duality. An experiment can show the particle-like properties of matter, or the wave-like properties; in some experiments both of these complementary viewpoints must be invoked to explain the results, according to the complementarity principle of Niels Bohr.
5. Measuring devices are essentially classical devices, and measure only classical properties such as position and momentum.
6. The quantum mechanical description of large systems will closely approximate the classical description. (This is the correspondence principle of Bohr and Heisenberg.)
Meaning of the wave function[edit]
The Copenhagen Interpretation denies that the wave function is anything more than a theoretical concept, or is at least non-committal about its being a discrete entity or a discernible component of some discrete entity.
The subjective view, that the wave function is merely a mathematical tool for calculating the probabilities in a specific experiment, has some similarities to the Ensemble interpretation in that it takes probabilities to be the essence of the quantum state, but unlike the ensemble interpretation, it takes these probabilities to be perfectly applicable to single experimental outcomes, as it interprets them in terms of subjective probability.[citation needed]
There are some[who?][citation needed] who say that there are objective variants of the Copenhagen Interpretation that allow for a "real" wave function, but it is questionable whether that view is really consistent with some of Bohr's statements. Bohr emphasized that science is concerned with predictions of the outcomes of experiments, and that any additional propositions offered are not scientific but meta-physical. Bohr was heavily influenced by positivism (or even pragmatism). On the other hand, Bohr and Heisenberg were not in complete agreement, and they held different views at different times. Heisenberg in particular was prompted to move towards realism.[11]
Even if the wave function is not regarded as real, there is still a divide between those who treat it as definitely and entirely subjective, and those who are non-committal or agnostic about the subject. An example of the agnostic view is given by Carl Friedrich von Weizsäcker, who, while participating in a colloquium at Cambridge, denied that the Copenhagen interpretation asserted "What cannot be observed does not exist." He suggested instead that the Copenhagen interpretation follows the principle "What is observed certainly exists; about what is not observed we are still free to make suitable assumptions. We use that freedom to avoid paradoxes."[2]
Nature of collapse[edit]
All versions of the Copenhagen interpretation include at least a formal or methodological version of wave function collapse,[12] in which unobserved eigenvalues are removed from further consideration. The Copenhagen interpretation has always treated wave function collapse as a fundamental, a priori principle. In 1952 David Bohm developed decoherence, an explanatory mechanism for the appearance of wave function collapse. Bohm applied decoherence to Louis DeBroglie's pilot wave theory, producing Bohmian mechanics,[13][14] the first successful hidden variables interpretation of quantum mechanics. Decoherence was then used by Hugh Everett in 1957 to form the core of his many-worlds interpretation.[15] However decoherence was largely[16] ignored until the 1980s.[17][18] Those who hold to the Copenhagen interpretation are willing to say that a wave function involves the various probabilities that a given event will proceed to certain different outcomes. But when an observer obtains one of those outcomes, no probabilities or superposition of the others linger.
Some argue that the concept of the collapse of a "real" wave function was introduced by Heisenberg and later developed by John von Neumann in 1932.[19] However, Heisenberg spoke of the wavefunction as representing our knowledge of a system, and did not use the term "collapse" per se, but instead termed it "reduction" of the wavefunction to a new state representing the change in our knowledge which occurs once a particular phenomenon is registered by the experimenter (i.e. when a measurement takes place).[20]
Acceptance among physicists[edit]
Throughout much of the twentieth century the Copenhagen interpretation had overwhelming acceptance among physicists. Although astrophysicist and science writer John Gribbin described it as having fallen from primacy after the 1980s,[21] according to a poll conducted at a quantum mechanics conference in 1997,[22] the Copenhagen interpretation remained the most widely accepted specific interpretation of quantum mechanics among physicists. In more recent polls conducted at various quantum mechanics conferences, varying results have been found.[23][24][25]
The nature of the Copenhagen Interpretation is exposed by considering a number of experiments and paradoxes.
1. Schrödinger's Cat
The Copenhagen Interpretation: The wave function reflects our knowledge of the system. The wave function (|\text{dead}\rangle + |\text{alive}\rangle)/\sqrt 2 means that, once the cat is observed, there is a 50% chance it will be dead, and 50% chance it will be alive.
2. Wigner's Friend
Wigner puts his friend in with the cat. The external observer believes the system is in the state (|\text{dead}\rangle + |\text{alive}\rangle)/\sqrt 2. His friend, however, is convinced that the cat is alive, i.e. for him, the cat is in the state |\text{alive}\rangle. How can Wigner and his friend see different wave functions?
The Copenhagen Interpretation: The answer depends on the positioning of Heisenberg cut, which can be placed arbitrarily. If Wigner's friend is positioned on the same side of the cut as the external observer, his measurements collapse the wave function for both observers. If he is positioned on the cat's side, his interaction with the cat is not considered a measurement.
3. Double-Slit Diffraction
Light passes through double slits and onto a screen resulting in a diffraction pattern. Is light a particle or a wave?
The Copenhagen Interpretation: Light is neither. A particular experiment can demonstrate particle (photon) or wave properties, but not both at the same time (Bohr's Complementarity Principle).
The same experiment can in theory be performed with any physical system: electrons, protons, atoms, molecules, viruses, bacteria, cats, humans, elephants, planets, etc. In practice it has been performed for light, electrons, buckminsterfullerene,[27][28] and some atoms. Due to the smallness of Planck's constant it is practically impossible to realize experiments that directly reveal the wave nature of any system bigger than a few atoms but, in general, quantum mechanics considers all matter as possessing both particle and wave behaviors. The greater systems (like viruses, bacteria, cats, etc.) are considered as "classical" ones but only as an approximation, not exact.
4. EPR (Einstein–Podolsky–Rosen) paradox
Entangled "particles" are emitted in a single event. Conservation laws ensure that the measured spin of one particle must be the opposite of the measured spin of the other, so that if the spin of one particle is measured, the spin of the other particle is now instantaneously known. The most discomforting aspect of this paradox is that the effect is instantaneous so that something that happens in one galaxy could cause an instantaneous change in another galaxy. But, according to Einstein's theory of special relativity, no information-bearing signal or entity can travel at or faster than the speed of light, which is finite. Thus, it seems as if the Copenhagen interpretation is inconsistent with special relativity.
The Copenhagen Interpretation: Assuming wave functions are not real, wave-function collapse is interpreted subjectively. The moment one observer measures the spin of one particle, he knows the spin of the other. However, another observer cannot benefit until the results of that measurement have been relayed to him, at less than or equal to the speed of light.
Copenhagenists claim that interpretations of quantum mechanics where the wave function is regarded as real have problems with EPR-type effects, since they imply that the laws of physics allow for influences to propagate at speeds greater than the speed of light. However, proponents of many worlds[29] and the transactional interpretation[30][31] (TI) maintain that Copenhagen interpretation is fatally non-local.
The claim that EPR effects violate the principle that information cannot travel faster than the speed of light have been countered by noting that they cannot be used for signaling because neither observer can control, or predetermine, what he observes, and therefore cannot manipulate what the other observer measures. However, this is a somewhat spurious argument, in that the speed of light limitation applies to all information, not to what can or cannot be subsequently done with the information. On the other hand, the special theory of relativity contains no notion of information at all. The fact that no classical body can exceed the speed of light (no matter how much acceleration is applied) is a consequence of classical relativistic mechanics. As the correlation between the two particles in an EPR experiment is most probably not established by classical bodies or light signals, the displayed non-locality is not at odds with special relativity.[citation needed]
A further argument against Copenhagen interpretation is that relativistic difficulties about establishing which measurement occurred first or last, or whether they occured quite at the same time, also undermine the idea that in "different" instants and measurements different outcomes can occur. The spin would be kept as a "constant" for a continuous interval of time, i.e. as a real variable, and thus it would seem to violate the general rule (of the classic Copenhagen interpretation) that every measurement gives nothing else than a random outcome subject to certain probabilities.[citation needed]
The completeness of quantum mechanics (thesis 1) was attacked by the Einstein-Podolsky-Rosen thought experiment which was intended to show that quantum physics could not be a complete theory.
Experimental tests of Bell's inequality using particles have supported the quantum mechanical prediction of entanglement.
The Copenhagen Interpretation gives special status to measurement processes without clearly defining them or explaining their peculiar effects. In his article entitled "Criticism and Counterproposals to the Copenhagen Interpretation of Quantum Theory," countering the view of Alexandrov that (in Heisenberg's paraphrase) "the wave function in configuration space characterizes the objective state of the electron." Heisenberg says,
Of course the introduction of the observer must not be misunderstood to imply that some kind of subjective features are to be brought into the description of nature. The observer has, rather, only the function of registering decisions, i.e., processes in space and time, and it does not matter whether the observer is an apparatus or a human being; but the registration, i.e., the transition from the "possible" to the "actual," is absolutely necessary here and cannot be omitted from the interpretation of quantum theory.[32]
Many physicists and philosophers have objected to the Copenhagen interpretation, both on the grounds that it is non-deterministic and that it includes an undefined measurement process that converts probability functions into non-probabilistic measurements. Einstein's comments "I, at any rate, am convinced that He (God) does not throw dice."[33] and "Do you really think the moon isn't there if you aren't looking at it?"[34] exemplify this. Bohr, in response, said, "Einstein, don't tell God what to do."[35]
Steven Weinberg in "Einstein's Mistakes", Physics Today, November 2005, page 31, said:
All this familiar story is true, but it leaves out an irony. Bohr's version of quantum mechanics was deeply flawed, but not for the reason Einstein thought. The Copenhagen interpretation describes what happens when an observer makes a measurement, but the observer and the act of measurement are themselves treated classically. This is surely wrong: Physicists and their apparatus must be governed by the same quantum mechanical rules that govern everything else in the universe. But these rules are expressed in terms of a wave function (or, more precisely, a state vector) that evolves in a perfectly deterministic way. So where do the probabilistic rules of the Copenhagen interpretation come from?
Considerable progress has been made in recent years toward the resolution of the problem, which I cannot go into here. It is enough to say that neither Bohr nor Einstein had focused on the real problem with quantum mechanics. The Copenhagen rules clearly work, so they have to be accepted. But this leaves the task of explaining them by applying the deterministic equation for the evolution of the wave function, the Schrödinger equation, to observers and their apparatus.
The problem of thinking in terms of classical measurements of a quantum system becomes particularly acute in the field of quantum cosmology, where the quantum system is the universe.[36]
E. T. Jaynes,[37] from a Bayesian point of view, argued that probability is a measure of a state of information about the physical world. Quantum mechanics under the Copenhagen Interpretation interpreted probability as a physical phenomenon, which is what Jaynes called a Mind Projection Fallacy.
The Ensemble interpretation is similar; it offers an interpretation of the wave function, but not for single particles. The consistent histories interpretation advertises itself as "Copenhagen done right". Although the Copenhagen interpretation is often confused with the idea that consciousness causes collapse, it defines an "observer" merely as that which collapses the wave function.[32] Quantum information theories are more recent, and have attracted growing support.[38][39]
If the wave function is regarded as ontologically real, and collapse is entirely rejected, a many worlds theory results. If wave function collapse is regarded as ontologically real as well, an objective collapse theory is obtained. For an atemporal interpretation that “makes no attempt to give a ‘local’ account on the level of determinate particles”,[40] the conjugate wavefunction, ("advanced" or time-reversed) of the relativistic version of the wavefunction, and the so-called "retarded" or time-forward version[41] are both regarded as real and the transactional interpretation results.[40] Dropping the principle that the wave function is a complete description results in a hidden variable theory.
Many physicists have subscribed to the instrumentalist interpretation of quantum mechanics, a position often equated with eschewing all interpretation. It is summarized by the sentence "Shut up and calculate!". While this slogan is sometimes attributed to Paul Dirac[42] or Richard Feynman, it seems to be due to David Mermin.[43]
See also[edit]
Notes and references[edit]
1. ^ Hermann Wimmel (1992). Quantum physics & observed reality: a critical interpretation of quantum mechanics. World Scientific. p. 2. ISBN 978-981-02-1010-6. Retrieved 9 May 2011.
2. ^ a b Cramer, John G. (July 1986). "The Transactional Interpretation of Quantum Mechanics". Reviews of Modern Physics 58 (3): 649. Bibcode:1986RvMP...58..647C. doi:10.1103/revmodphys.58.647.
3. ^ J. Mehra and H. Rechenberg, The historical development of quantum theory, Springer-Verlag, 2001, p. 271.
4. ^ Howard, Don (2004). "Who invented the Copenhagen Interpretation? A study in mythology". Philosophy of Science: 669–682. JSTOR 10.1086/425941.
5. ^ Bohm, David (1952). "A Suggested Interpretation of the Quantum Theory in Terms of "Hidden" Variables. I & II". Physical Review 85 (2): 166–193. Bibcode:1952PhRv...85..166B. doi:10.1103/PhysRev.85.166.
6. ^ H. Kragh, Quantum generations: A History of Physics in the Twentieth Century, Princeton University Press, 1999, p. 210. ("the term 'Copenhagen interpretation' was not used in the 1930s but first entered the physicist’s vocabulary in 1955 when Heisenberg used it in criticizing certain unorthodox interpretations of quantum mechanics.")
7. ^ Werner Heisenberg, Physics and Philosophy, Harper, 1958
8. ^ Olival Freire Jr., "Science and exile: David Bohm, the hot times of the Cold War, and his struggle for a new interpretation of quantum mechanics", Historical Studies on the Physical and Biological Sciences, Volume 36, Number 1, 2005, pp. 31–35. ("I avow that the term ‘Copenhagen interpretation’ is not happy since it could suggest that there are other interpretations, like Bohm assumes. We agree, of course, that the other interpretations are nonsense, and I believe that this is clear in my book, and in previous papers. Anyway, I cannot now, unfortunately, change the book since the printing began enough time ago.")
9. ^ In fact Bohr and Heisenberg never totally agreed on how to understand the mathematical formalism of quantum mechanics. Bohr once distanced himself from what he considered to be Heisenberg's more subjective interpretation Stanford Encyclopedia of Philosophy
10. ^ "There seems to be at least as many different Copenhagen interpretations as people who use that term, probably there are more. For example, in two classic articles on the foundations of quantum mechanics, Ballentine (1970) and Stapp(1972) give diametrically opposite definitions of 'Copenhagen.'", Asher Peres (2002). "Popper's experiment and the Copenhagen interpretation". Stud. History Philos. Modern Physics 33 (23): 10078. arXiv:quant-ph/9910078.
11. ^ "Historically, Heisenberg wanted to base quantum theory solely on observable quantities such as the intensity of spectral lines, getting rid of all intuitive (anschauliche) concepts such as particle trajectories in space-time. This attitude changed drastically with his paper in which he introduced the uncertainty relations – there he put forward the point of view that it is the theory which decides what can be observed. His move from positivism to operationalism can be clearly understood as a reaction on the advent of Schrödinger’s wave mechanics which, in particular due to its intuitiveness, became soon very popular among physicists. In fact, the word anschaulich (intuitive) is contained in the title of Heisenberg’s paper.", from Claus Kiefer (2002). "On the interpretation of quantum theory - from Copenhagen to the present day". arXiv:quant-ph/0210152 [quant-ph].
12. ^ "To summarize, one can identify the following ingredients as being characteristic for the Copenhagen interpretation(s)[...]Reduction of the wave packet as a formal rule without dynamical significance", Claus Kiefer (2002). "On the interpretation of quantum theory - from Copenhagen to the present day". arXiv:quant-ph/0210152 [quant-ph].
13. ^ David Bohm, A Suggested Interpretation of the Quantum Theory in Terms of "Hidden Variables", I, Physical Review, (1952), 85, pp 166–179
14. ^ David Bohm, A Suggested Interpretation of the Quantum Theory in Terms of "Hidden Variables", II, Physical Review, (1952), 85, pp 180–193
15. ^ Hugh Everett, Relative State Formulation of Quantum Mechanics, Reviews of Modern Physics vol 29, (1957) pp 454–462.
16. ^ H. Dieter Zeh, On the Interpretation of Measurement in Quantum Theory, Foundation of Physics, vol. 1, pp. 69-76, (1970).
17. ^ Wojciech H. Zurek, Pointer Basis of Quantum Apparatus: Into what Mixture does the Wave Packet Collapse?, Physical Review D, 24, pp. 1516–1525 (1981)
18. ^ Wojciech H. Zurek, Environment-Induced Superselection Rules, Physical Review D, 26, pp.1862–1880, (1982)
19. ^ "the “collapse” or “reduction” of the wave function. This was introduced by Heisenberg in his uncertainty paper [3] and later postulated by von Neumann as a dynamical process independent of the Schrodinger equation", Claus Kiefer (2002). "On the interpretation of quantum theory - from Copenhagen to the present day". arXiv:quant-ph/0210152 [quant-ph].
20. ^ W. Heisenberg "Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik," Zeitschrift für Physik, Volume 43, 172-98 (1927), as translated by John Wheeler and Wojciech Zurek, in Quantum Theory and Measurement (1983), p. 74. ("[The] determination of the position selects a definite "q" from the totality of possibilities and limits the options for all subsequent measurements. ... [T]he results of later measurements can only be calculated when one again ascribes to the electron a "smaller" wavepacket of extension λ (wavelength of the light used in the observation). Thus, every position determination reduces the wavepacket back to its original extension λ.")
21. ^ Gribbin, J. Q for Quantum
22. ^ Max Tegmark (1998). "The Interpretation of Quantum Mechanics: Many Worlds or Many Words?". Fortsch.Phys. 46 (6–8): 855–862. arXiv:quant-ph/9709032. Bibcode:1998ForPh..46..855T. doi:10.1002/(SICI)1521-3978(199811)46:6/8<855::AID-PROP855>3.0.CO;2-Q.
23. ^ M. Schlosshauer; J. Koer; A. Zeilinger (2013). "A Snapshot of Foundational Attitudes Toward Quantum Mechanics". Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics 44 (3): 222–230. arXiv:1301.1069. doi:10.1016/j.shpsb.2013.04.004.
24. ^ C. Sommer, "Another Survey of Foundational Attitudes Towards Quantum Mechanics", arXiv:1303.2719
25. ^ T. Norsen, S. Nelson, "Yet Another Snapshot of Foundational Attitudes Toward Quantum Mechanics", arXiv:1306.4646
26. ^ Erwin Schrödinger, in an article in the Proceedings of the American Philosophical Society, 124, 323-38.
27. ^ Nairz, Olaf; Brezger, Björn; Arndt, Markus; Zeilinger, Anton (2001). "Diffraction of Complex Molecules by Structures Made of Light". Physical Review Letters 87 (16). arXiv:quant-ph/0110012. Bibcode:2001PhRvL..87p0401N. doi:10.1103/PhysRevLett.87.160401.
28. ^ Brezger, Björn; Hackermüller, Lucia; Uttenthaler, Stefan; Petschinka, Julia; Arndt, Markus; Zeilinger, Anton (2002). "Matter-Wave Interferometer for Large Molecules". Physical Review Letters 88 (10): 100404. arXiv:quant-ph/0202158. Bibcode:2002PhRvL..88j0404B. doi:10.1103/PhysRevLett.88.100404. PMID 11909334.
29. ^ Michael price on nonlocality in Many Worlds
30. ^ Relativity and Causality in the Transactional Interpretation
31. ^ Collapse and Nonlocality in the Transactional Interpretation
32. ^ a b Werner Heisenberg, Physics and Philosophy, Harper, 1958, p. 137.
33. ^ "God does not throw dice" quote
34. ^ A. Pais, Einstein and the quantum theory, Reviews of Modern Physics 51, 863-914 (1979), p. 907.
35. ^ Bohr recollected his reply to Einstein at the 1927 Solvay Congress in his essay "Discussion with Einstein on Epistemological Problems in Atomic Physics", in Albert Einstein, Philosopher-Scientist, ed. Paul Arthur Shilpp, Harper, 1949, p. 211: " spite of all divergencies of approach and opinion, a most humorous spirit animated the discussions. On his side, Einstein mockingly asked us whether we could really believe that the providential authorities took recourse to dice-playing ("ob der liebe Gott würfelt"), to which I replied by pointing at the great caution, already called for by ancient thinkers, in ascribing attributes to Providence in everyday language." Werner Heisenberg, who also attended the congress, recalled the exchange in Encounters with Einstein, Princeton University Press, 1983, p. 117,: "But he [Einstein] still stood by his watchword, which he clothed in the words: 'God does not play at dice.' To which Bohr could only answer: 'But still, it cannot be for us to tell God, how he is to run the world.'"
36. ^ 'Since the Universe naturally contains all of its observers, the problem arises to come up with an interpretation of quantum theory that contains no classical realms on the fundamental level.', Claus Kiefer (2002). "On the interpretation of quantum theory - from Copenhagen to the present day". arXiv:quant-ph/0210152 [quant-ph].
37. ^ Jaynes, E. T. (1989). "Clearing up Mysteries--The Original Goal". Maximum Entropy and Bayesian Methods: 7.
38. ^ Kate Becker (2013-01-25). "Quantum physics has been rankling scientists for decades". Boulder Daily Camera. Retrieved 2013-01-25.
39. ^ "A Snapshot of Foundational Attitudes Toward Quantum Mechanics". 2013-01-06. Retrieved 2013-01-25.
40. ^ a b The Quantum Liar Experiment, RE Kastner, Studies in History and Philosophy of Modern Physics, Vol41, Iss.2,May2010
41. ^ The non-relativistic Schrödinger equation does not admit advanced solutions.
42. ^
43. ^ N. David Mermin. "Could Feynman Have Said This?". Physics Today 57 (5).
Further reading[edit]
• G. Weihs et al., Phys. Rev. Lett. 81 (1998) 5039
• M. Rowe et al., Nature 409 (2001) 791.
• J.A. Wheeler & W.H. Zurek (eds), Quantum Theory and Measurement, Princeton University Press 1983
• A. Petersen, Quantum Physics and the Philosophical Tradition, MIT Press 1968
• H. Margeneau, The Nature of Physical Reality, McGraw-Hill 1950
• M. Chown, Forever Quantum, New Scientist No. 2595 (2007) 37.
• T. Schürmann, A Single Particle Uncertainty Relation, Acta Physica Polonica B39 (2008) 587. [1]
External links[edit] |
c8cc918d8d9914d8 | Korteweg–de Vries equation
From Wikipedia, the free encyclopedia
Jump to: navigation, search
Cnoidal wave solution to the Korteweg–de Vries equation, in terms of the square of the Jacobi elliptic function cn (and with value of the parameter m = 0.9).
Numerical solution of the KdV equation ut + uux + δ2uxxx = 0 (δ = 0.022) with an initial condition u(x, 0) = cos(πx). Its calculation was done by the Zabusky-Kruskal scheme.[1] The initial cosine wave evolves into a train of solitary-type waves.
In mathematics, the Korteweg–de Vries equation (KdV equation for short) is a mathematical model of waves on shallow water surfaces. It is particularly notable as the prototypical example of an exactly solvable model, that is, a non-linear partial differential equation whose solutions can be exactly and precisely specified. KdV can be solved by means of the inverse scattering transform. The mathematical theory behind the KdV equation is a topic of active research. The KdV equation was first introduced by Boussinesq (1877, footnote on page 360) and rediscovered by Diederik Korteweg and Gustav de Vries (1895).[2]
The KdV equation is a nonlinear, dispersive partial differential equation for a function \phi of two real variables, space x and time t :[3]
\partial_t \phi + \partial^3_x \phi + 6\, \phi\, \partial_x \phi =0\,
with ∂x and ∂t denoting partial derivatives with respect to x and t.
The constant 6 in front of the last term is conventional but of no great significance: multiplying t, x, and \phi by constants can be used to make the coefficients of any of the three terms equal to any given non-zero constants.
Soliton solutions[edit]
Consider solutions in which a fixed wave form (given by f(X)) maintains its shape as it travels to the right at phase speed c. Such a solution is given by \phi(x,t) = f(x − ct − a) = f(X). Substituting it into the KdV equation gives the ordinary differential equation
-c\frac{df}{dX}+\frac{d^3f}{dX^3}+6f\frac{df}{dX} = 0,
or, integrating with respect to X,
-cf+\frac{d^2 f}{dX^2}+3f^2=A
where A is a constant of integration. Interpreting the independent variable X above as a virtual time variable, this means f satisfies Newton's equation of motion in a cubic potential. If parameters are adjusted so that the potential function V(X) has local maximum at X = 0, there is a solution in which f(X) starts at this point at 'virtual time' −∞, eventually slides down to the local minimum, then back up the other side, reaching an equal height, then reverses direction, ending up at the local maximum again at time ∞. In other words, f(X) approaches 0 as X → ±∞. This is the characteristic shape of the solitary wave solution.
More precisely, the solution is
\phi(x,t)=\frac12\, c\, \mathrm{sech}^2\left[{\sqrt{c}\over 2}(x-c\,t-a)\right]
where sech stands for the hyperbolic secant and a is an arbitrary constant.[4] This describes a right-moving soliton.
Integrals of motion[edit]
The KdV equation has infinitely many integrals of motion (Miura, Gardner & Kruskal 1968), which do not change with time. They can be given explicitly as
\int_{-\infty}^{+\infty} P_{2n-1}(\phi,\, \partial_x \phi,\, \partial_x^2 \phi,\, \ldots)\, \text{d}x\,
where the polynomials Pn are defined recursively by
P_n &= -\frac{dP_{n-1}}{dx} + \sum_{i=1}^{n-2}\, P_i\, P_{n-1-i}
\quad \text{ for } n \ge 2.
The first few integrals of motion are:
• the mass \int \phi\, \text{d}x,
• the momentum \int \phi^2\, \text{d}x,
• the energy \int \frac{1}{3} \phi^3 - \left( \partial_x \phi \right)^2\, \text{d}x.
Only the odd-numbered terms P(2n+1) result in non-trivial (meaning non-zero) integrals of motion (Dingemans 1997, p. 733).
Lax pairs[edit]
The KdV equation
\partial_t\phi = 6\, \phi\, \partial_x \phi - \partial_x^3 \phi
can be reformulated as the Lax equation
L_t = [L,A] \equiv LA - AL \,
with L a Sturm–Liouville operator:
L &= -\partial_x^2 + \phi,
A &= 4 \partial_x^3 - 3 \left[ 2\phi\, \partial_x + (\partial_x \phi) \right]
and this accounts for the infinite number of first integrals of the KdV equation (Lax 1968).
Least action principle[edit]
The Korteweg–de Vries equation
\partial_t \phi - 6\phi\, \partial_x \phi + \partial_x^3 \phi = 0, \,
is the Euler–Lagrange equation of motion derived from the Lagrangian density, \mathcal{L}\,
\mathcal{L} = \frac{1}{2} \partial_x \psi\, \partial_t \psi
+ \left( \partial_x \psi \right)^3
- \frac{1}{2} \left( \partial_x^2 \psi \right)^2 \quad \quad \quad \quad (1) \,
with \phi defined by
\phi = \frac{\partial \psi}{\partial x} = \partial_x \psi. \,
Long-time asymptotics[edit]
It can be shown that any sufficiently fast decaying smooth solution will eventually split into a finite superposition of solitons travelling to the right plus a decaying dispersive part travelling to the left. This was first observed by Zabusky & Kruskal (1965) and can be rigorously proven using the nonlinear steepest descent analysis for oscillatory Riemann–Hilbert problems.[5]
The history of the KdV equation started with experiments by John Scott Russell in 1834, followed by theoretical investigations by Lord Rayleigh and Joseph Boussinesq around 1870 and, finally, Korteweg and De Vries in 1895.
The KdV equation was not studied much after this until Zabusky & Kruskal (1965), discovered numerically that its solutions seemed to decompose at large times into a collection of "solitons": well separated solitary waves. Moreover the solitons seems to be almost unaffected in shape by passing through each other (though this could cause a change in their position). They also made the connection to earlier numerical experiments by Fermi, Pasta, Ulam, and Tsingou by showing that the KdV equation was the continuum limit of the FPU system. Development of the analytic solution by means of the inverse scattering transform was done in 1967 by Gardner, Greene, Kruskal and Miura.[6][7]
Applications and connections[edit]
The KdV equation has several connections to physical problems. In addition to being the governing equation of the string in the Fermi–Pasta–Ulam problem in the continuum limit, it approximately describes the evolution of long, one-dimensional waves in many physical settings, including:
The KdV equation can also be solved using the inverse scattering transform such as those applied to the non-linear Schrödinger equation.
Many different variations of the KdV equations have been studied. Some are listed in the following table.
Name Equation
Korteweg–de Vries (KdV) \displaystyle \partial_t\phi + \partial^3_x \phi + 6\, \phi\, \partial_x\phi=0
KdV (cylindrical) \displaystyle \partial_t u + \partial_x^3 u - 6\, u\, \partial_x u + u/2t = 0
KdV (deformed) \displaystyle \partial_t u + \partial_x (\partial_x^2 u - 2\, \eta\, u^3 - 3\, u\, (\partial_x u)^2/2(\eta+u^2)) = 0
KdV (generalized) \displaystyle \partial_t u + \partial_x^3 u = \partial_x^5 u
KdV (generalized) \displaystyle \partial_t u + \partial_x^3 u + \partial_x f(u) = 0
KdV (Lax 7th) Darvishi, Kheybari & Khani (2007)
+\partial_{x} & \left\{
\right. \\ & \left. \quad
KdV (modified) \displaystyle \partial_t u + \partial_x^3 u \pm 6\, u^2\, \partial_x u = 0
KdV (modified modified) \displaystyle \partial_t u + \partial_x^3 u - (\partial_x u)^3/8 + (\partial_x u)(Ae^{au}+B+Ce^{-au}) = 0
KdV (spherical) \displaystyle \partial_t u + \partial_x^3 u - 6\, u\, \partial_x u + u/t = 0
KdV (super) \displaystyle \partial_t u = 6\, u\, \partial_x u - \partial_x^3 u + 3\, w\, \partial_x^2 w,
\displaystyle \partial_t w = 3\, (\partial_x u)\, w + 6\, u\, \partial_x w - 4\, \partial_x^3 w
KdV (transitional) \displaystyle \partial_t u + \partial_x^3 u - 6\, f(t)\, u\, \partial_x u = 0
KdV (variable coefficients) \displaystyle \partial_t u + \beta\, t^n\, \partial_x^3 u + \alpha\, t^nu\, \partial_x u= 0
Korteweg–de Vries–Burgers equation \displaystyle \partial_t u + \mu\, \partial_x^3 u + 2\, u\, \partial_x u -\nu\, \partial_x^2 u = 0
See also[edit]
1. ^ N.J. Zabusky and M. D. Kruskal, Phy. Rev. Lett., 15, 240 (1965)
2. ^ Darrigol, O. (2005), Worlds of Flow: A History of Hydrodynamics from the Bernoullis to Prandtl, Oxford University Press, p. 84, ISBN 9780198568438
3. ^ See e.g. Newell, Alan C. (1985), Solitons in mathematics and physics, SIAM, ISBN 0-89871-196-7 , p. 6. Or Lax (1968), without the factor 6.
4. ^ Alexander F. Vakakis (31 January 2002). Normal Modes and Localization in Nonlinear Systems. Springer. pp. 105–108. ISBN 978-0-7923-7010-9. Retrieved 27 October 2012.
5. ^ See e.g. Grunert & Teschl (2009)
6. ^ Gardner, C.S.; Greene, J.M.; Kruskal, M.D.; Miura, R.M (1967), Method for solving the Korteweg–de Vries equation, Physical Review Letters 19 (19): 1095–1097, Bibcode:1967PhRvL..19.1095G, doi:10.1103/PhysRevLett.19.1095.
7. ^ Dauxois, Thierry; Peyrard, Michel (2006), Physics of Solitons, Cambridge University Press, ISBN 0-521-85421-0
External links[edit] |
61f43556671587f6 | Citation for this page in APA citation style. Close
Mortimer Adler
Rogers Albritton
Alexander of Aphrodisias
Samuel Alexander
William Alston
Louise Antony
Thomas Aquinas
David Armstrong
Harald Atmanspacher
Robert Audi
Alexander Bain
Mark Balaguer
Jeffrey Barrett
William Belsham
Henri Bergson
George Berkeley
Isaiah Berlin
Richard J. Bernstein
Bernard Berofsky
Robert Bishop
Max Black
Susanne Bobzien
Emil du Bois-Reymond
Hilary Bok
Laurence BonJour
George Boole
Émile Boutroux
Michael Burke
Joseph Keim Campbell
Rudolf Carnap
Ernst Cassirer
David Chalmers
Roderick Chisholm
Randolph Clarke
Samuel Clarke
Anthony Collins
Antonella Corradini
Diodorus Cronus
Jonathan Dancy
Donald Davidson
Mario De Caro
Daniel Dennett
Jacques Derrida
René Descartes
Richard Double
Fred Dretske
John Dupré
John Earman
Laura Waddell Ekstrom
Herbert Feigl
John Martin Fischer
Owen Flanagan
Luciano Floridi
Philippa Foot
Alfred Fouilleé
Harry Frankfurt
Richard L. Franklin
Michael Frede
Gottlob Frege
Peter Geach
Edmund Gettier
Carl Ginet
Alvin Goldman
Nicholas St. John Green
H.Paul Grice
Ian Hacking
Ishtiyaque Haji
Stuart Hampshire
Sam Harris
William Hasker
Georg W.F. Hegel
Martin Heidegger
Thomas Hobbes
David Hodgson
Shadsworth Hodgson
Baron d'Holbach
Ted Honderich
Pamela Huby
David Hume
Ferenc Huoranszki
William James
Lord Kames
Robert Kane
Immanuel Kant
Tomis Kapitan
Jaegwon Kim
William King
Hilary Kornblith
Christine Korsgaard
Saul Kripke
Andrea Lavazza
Keith Lehrer
Gottfried Leibniz
Michael Levin
George Henry Lewes
David Lewis
Peter Lipton
C. Lloyd Morgan
John Locke
Michael Lockwood
E. Jonathan Lowe
John R. Lucas
Alasdair MacIntyre
Ruth Barcan Marcus
James Martineau
Storrs McCall
Hugh McCann
Colin McGinn
Michael McKenna
Brian McLaughlin
John McTaggart
Paul E. Meehl
Uwe Meixner
Alfred Mele
Trenton Merricks
John Stuart Mill
Dickinson Miller
Thomas Nagel
Friedrich Nietzsche
John Norton
Robert Nozick
William of Ockham
Timothy O'Connor
David F. Pears
Charles Sanders Peirce
Derk Pereboom
Steven Pinker
Karl Popper
Huw Price
Hilary Putnam
Willard van Orman Quine
Frank Ramsey
Ayn Rand
Michael Rea
Thomas Reid
Charles Renouvier
Nicholas Rescher
Richard Rorty
Josiah Royce
Bertrand Russell
Paul Russell
Gilbert Ryle
Jean-Paul Sartre
Kenneth Sayre
Moritz Schlick
Arthur Schopenhauer
John Searle
Wilfrid Sellars
Alan Sidelle
Ted Sider
Henry Sidgwick
Walter Sinnott-Armstrong
Saul Smilansky
Michael Smith
Baruch Spinoza
L. Susan Stebbing
Isabelle Stengers
George F. Stout
Galen Strawson
Peter Strawson
Eleonore Stump
Francisco Suárez
Richard Taylor
Kevin Timpe
Mark Twain
Peter Unger
Peter van Inwagen
Manuel Vargas
John Venn
Kadri Vihvelin
G.H. von Wright
David Foster Wallace
R. Jay Wallace
Ted Warfield
Roy Weatherford
William Whewell
Alfred North Whitehead
David Widerker
David Wiggins
Bernard Williams
Timothy Williamson
Ludwig Wittgenstein
Susan Wolf
Michael Arbib
Bernard Baars
Gregory Bateson
John S. Bell
Charles Bennett
Ludwig von Bertalanffy
Susan Blackmore
Margaret Boden
David Bohm
Niels Bohr
Ludwig Boltzmann
Emile Borel
Max Born
Satyendra Nath Bose
Walther Bothe
Hans Briegel
Leon Brillouin
Stephen Brush
Henry Thomas Buckle
S. H. Burbury
Donald Campbell
Anthony Cashmore
Eric Chaisson
Jean-Pierre Changeux
Arthur Holly Compton
John Conway
John Cramer
E. P. Culverwell
Charles Darwin
Terrence Deacon
Louis de Broglie
Max Delbrück
Abraham de Moivre
Paul Dirac
Hans Driesch
John Eccles
Arthur Stanley Eddington
Paul Ehrenfest
Albert Einstein
Hugh Everett, III
Franz Exner
Richard Feynman
R. A. Fisher
Joseph Fourier
Lila Gatlin
Michael Gazzaniga
GianCarlo Ghirardi
J. Willard Gibbs
Nicolas Gisin
Paul Glimcher
Thomas Gold
Brian Goodwin
Joshua Greene
Jacques Hadamard
Patrick Haggard
Stuart Hameroff
Augustin Hamon
Sam Harris
Hyman Hartman
John-Dylan Haynes
Martin Heisenberg
Werner Heisenberg
John Herschel
Jesper Hoffmeyer
E. T. Jaynes
William Stanley Jevons
Roman Jakobson
Pascual Jordan
Ruth E. Kastner
Stuart Kauffman
Martin J. Klein
Simon Kochen
Stephen Kosslyn
Ladislav Kovàč
Rolf Landauer
Alfred Landé
Pierre-Simon Laplace
David Layzer
Benjamin Libet
Seth Lloyd
Hendrik Lorentz
Josef Loschmidt
Ernst Mach
Donald MacKay
Henry Margenau
James Clerk Maxwell
Ernst Mayr
Ulrich Mohrhoff
Jacques Monod
Emmy Noether
Abraham Pais
Howard Pattee
Wolfgang Pauli
Massimo Pauri
Roger Penrose
Steven Pinker
Colin Pittendrigh
Max Planck
Susan Pockett
Henri Poincaré
Daniel Pollen
Ilya Prigogine
Hans Primas
Adolphe Quételet
Juan Roederer
Jerome Rothstein
David Ruelle
Erwin Schrödinger
Aaron Schurger
Claude Shannon
David Shiang
Herbert Simon
Dean Keith Simonton
B. F. Skinner
Roger Sperry
John Stachel
Henry Stapp
Tom Stonier
Antoine Suarez
Leo Szilard
William Thomson (Kelvin)
Peter Tse
Heinz von Foerster
John von Neumann
John B. Watson
Daniel Wegner
Steven Weinberg
Paul A. Weiss
John Wheeler
Wilhelm Wien
Norbert Wiener
Eugene Wigner
E. O. Wilson
H. Dieter Zeh
Ernst Zermelo
Wojciech Zurek
Free Will
Mental Causation
James Symposium
Stuart Hameroff
Stuart Hameroff, a medical doctor specializing in anesthesiology, knew that Van der Waals- London forces in hydrophobic pockets of various neuronal proteins had been proposed as the mechanisms by which anesthetic gases selectively erase consciousness. Anesthetics bind by their own London force attractions with electron clouds of the hydrophobic pocket, presumably impairing normally-occurring London forces governing the protein switching required for consciousness.
Biologist Charles Sherrington had speculated in the 1950's that information might be stored in the brain in microtubules, lattices of tubulin dimers. Hameroff decided that the bits of information might be stored in discrete states of tubulin, interacting by dipole-dipole interactions with neighboring tubulin states. These structures are orders of magnitude smaller than the biological cells, providing vast amounts of potential information storage.
A hydrophobic pocket in tubulin develops electron resonance rings in the pocket. Single electrons in each ring repel each other, as the net dipole moment of their electron cloud flips under external London force oscillations.
Although Hameroff did not provide a specific read-write mechanism, he modeled the tubulin states as cellular automata (these "cells" are the fundamental units of John Conway's "Game of Life") that would need to change states at synchronized time steps, governed by the coherent voltage oscillations. (Although brain wave oscillations are well-known, those observed are at very low frequencies compared to the proposed oscillation in the microtubules - 109/sec.) Each automaton cell interacts with its neighbor cells at discrete, synchronized time steps, the state of each cell at any particular time step determined by its state and its neighbor cell states at the previous time step, and rules governing the interactions. In such ways, using simple neighbor interactions in simple lattice grids, cellular automata might perform complex computations and generate complex patterns.
The estimated total information processing in the tubulin of a single neuron is of the same order of magnitude as that for the entire brain, if storage is at the synapses of the neural networks. This surprised (and annoyed) some cognitive scientists, but again, no plausible read/write mechanism was proposed for either computational model.
In 1989, Roger Penrose published The Emperor's New Mind, which was followed in 1994 by Shadows of the Mind. There he proposed a solution to the measurement problem in quantum mechanics by extending the standard framework's idea of a random collapse (or reduction) of the wave function with a more "objective" collapse he called "objective reduction" (OR).
Objective reduction would terminate the deterministic evolution of the wave function predicted by the Schrödinger equation. (Another scheme to force the collapse was proposed by Ghirardi, Rimini, and Weber.) Penrose initially looked to quantum gravity as the driving force behind OR.
Note that the traditional connection between consciousness and the collapse of the wave-function was the result of early work by John von Neumann and Eugene Wigner. They assumed that a conscious observer was needed to make a measurement (producing at least one bit of information). Without an observer, goes their argument, the wave-function would not collapse, leading to paradoxes like Schrödinger's Cat. Many other physicists deny that a conscious observer is necessary for a physical measurement. [See our solution to the measurement problem.]
Hameroff and Penrose began working together in the 1990's to develop an "orchestrated" version of objective reduction.
The Orch OR Scheme
According to Orch OR, the (objective) reduction is not the entirely random process of standard theory, but acts according to some non-computational new physics (see Penrose 1989, 1994). The idea is that consciousness is associated with this (gravitational) OR process, but occurs significantly only when the alternatives are part of some highly organized structure, so that such occurrences of OR occur in an extremely orchestrated form. Only then does a recognizably conscious event take place. On the other hand, we may consider that any individual occurrence of OR would be an element of proto-consciousness.
The OR process is considered to occur when quantum superpositions between slightly differing space-times take place, differing from one another by an integrated space-time measure which compares with the fundamental and extremely tiny Planck (4-volume) scale of space-time geometry. Since this is a 4-volume Planck measure, involving both time and space, we find that the time measure would be particularly tiny when the space-difference measure is relatively large (as with Schrödinger's cat), but for extremely tiny space-difference measures, the time measure might be fairly long, such as some significant fraction of a second. We shall be seeing this in more detail shortly, together with its particular relevance to microtubules. In any case, we recognize that the elements of proto-consciousness would be intimately tied in with the most primitive Planck-level ingredients of space-time geometry, these presumed 'ingredients' being taken to be at the absurdly tiny level of 10−35m and 10−43s, a distance and a time some 20 orders of magnitude smaller than those of normal particle-physics scales and their most rapid processes. These scales refer only to the normally extremely tiny differences in space-time geometry between different states in superposition, and OR is deemed to take place when such space-time differences reach the Planck level. Owing to the extreme weakness of gravitational forces as compared with those of the chemical and electric forces of biology, the energy EG is liable to be far smaller than any energy that arises directly from biological processes. However, EG is not to be thought of as being in direct competition with any of the usual biological energies, as it plays a completely different role, supplying a needed energy uncertainty that then allows a choice to be made between the separated space-time geometries. It is the key ingredient of the computation of the reduction time τ. Nevertheless, the extreme weakness of gravity tells us there must be a considerable amount of material involved in the coherent mass displacement between superposed structures in order that τ can be small enough to be playing its necessary role in the relevant OR processes in the brain. These superposed structures should also process information and regulate neuronal physiology. According to Orch OR, microtubules are central to these structures, and some form of biological quantum computation in microtubules (most probably primarily in the more symmetrical A-lattice microtubules) would have to have evolved to provide a subtle yet direct connection to Planck-scale geometry, leading eventually to discrete moments of actual conscious experience.
Hameroff and colleagues Travis Craddock and Jack Tuszynski have made a strong case for memory storage in microtubules, quite apart from the claims of the Penrose-Hameroff Orch-OR scheme. Microtubules are tiny, but highly ordered structures that could encode vast amounts of information per neuron. In a 2012 article, Hameroff suggests the Ca2+ - Calmodulin complex CaMKII may encode information in the microtubules. CaMKII is a serine-threonine protein kinase that has been known for years to play a major role in cell signaling and can also function as a molecular switch, staying in an active state long after the bursts of post-synaptic Ca2+ have returned to base levels. CaMKII is implicated in the standard theory of long-term potentiation by the generation of new synapses. It accounts for more than one percent of all the proteins in the brain.
Hameroff notes that the geometry of CaMKII - a snow-flake shaped double hexagon of twin hexameric rings - and the diameter - 20nm - make the CaMKII a nice fit with microtubules - 15nm internal diameter and 25nm external (and up to 25 microns in length!).
Each monomer is an EF hand motif consisting of two alpha-helices linked by a short "loop region." The helices can each bind two Ca2+ ions, and change their configuration like an index finger and thumb to become an active Ca2+ - Calmodulin complex. Each of the kinase monomers can activate separately, phosphorylating (or not) a substrate protein. So Hameroff points out that the twelve units in the holoenzyme can encode 12 bits of digital information.
He says:
In this paper we evaluated possible information inputs to microtubules in the context of brain neuronal memory encoding and long-term potentiation (LTP). A key intermediary in LTP involves the hexagonal holoenzyme calcium-calmodulin kinase II. When activated by synaptic calcium influx, the snowflake-shaped CaMKII extends sets of 6 foot-like kinase domains outward, each domain able to phosphorylate a substrate or not (thus convey 1 bit of information). As CaMKII activation represents synaptic information, subsequent phosphorylation by CaMKII of a particular substrate may encode memory, e.g. as ordered arrays of 6 bits (one ‘byte’). We used molecular modeling to examine feasibility of collective phosphorylation (and thus memory encoding) by CaMKII kinase domains of tubulins in a microtubule lattice.
We show, first, complementary electrostatics and mutual attraction between individual CaMKII kinase domains and tubulin surfaces. We also demonstrate two plausible sites for direct phosphorylation of tubulin by a CaMKII kinase domain, and calculate binding energies in the range of 6 to 36 kcal/mol per CaMKII-tubulin phosphorylation event. This indicates encoding which is robust against degradation, yet inexpensive, requiring on the order of 2% of overall brain metabolism for maximal encoding in all 1011 neurons.
We then compare size and hexagonal configuration of the six extended foot-like kinase domains of activated CaMKII with hexagonal lattices of tubulin proteins in MTs. We find that CaMKII size and geometry of 6 extended kinase domains precisely match hexagonal arrays of tubulin in both A-lattice and B-lattices.
Conclusion. We demonstrate a feasible and robust mechanism for encoding synaptic information into structural and energetic changes of microtubule (MT) lattices by calcium-activated CaMKII phosphorylation. We suggest such encoded information engages in ongoing MT information processes supporting cognition and behavior, possibly by generating scale-free interference patterns via reaction-diffusion or other mechanisms. As MTs and CaMKII are widely distributed in eukaryotic cells, the hexagonal bytes and trytes suggested here may reflect a real-time biomolecular information code akin to the genetic code.
Rescuing Free Will (from the Libet Experiments)
Hameroff describes the free will problem in his 2012 article "How quantum brain biology can rescue conscious free will,"
Hameroff argues that his Orch-OR theory provides the model for consciousness and causal agency needed for "conscious free will."
Orch-OR also addresses the problem of classical determinism (the major impediment to belief in free will is that every action is pre-determined). Hameroff says:
But the major problem with free will that Hameroff hopes to solve is the objection raised by the Libet experiments that consciousness comes "too late". He says that Orch-OR can send quantum information backward in time to resolve this problem:
Does consciousness come too late?
Brain electrical activity appearing to correlate with conscious perception of a stimulus can occur after we respond to that stimulus, seemingly consciously. Accordingly, consciousness is deemed epiphenomenal and illusory (Dennett, Wegner,). However evidence for backward time effects in the brain (Libet et al., Bem, Ma et al.), and in quantum physics (e.g., to explain entanglement, Penrose, Aharonov and Vaidman, Bennett and Wiesner) suggest that quantum state reductions in Orch-OR can send quantum information backward in (what we perceive as) time, on the order of hundreds of milliseconds. This enables consciousness to regulate axonal firings and behavioral actions in real-time, when conscious choice is felt to occur (and actually does occur), thus rescuing consciousness from necessarily being an epiphenomenal illusion.
Exactly how the science-fiction-like idea of sending information back in time works, that something comes back from the future without creating infinitely recursive time loops, is not made clear. Hameroff discusses three cases, of which two at least are unlikely to involve information going backward in time, the Einstein-Podolsky-Rosen experiment and the Libet experiments.
Information Sent Backward in Time in EPR?
In the time evolution of an entangled two-particle state according to the Schrödinger equation, we can visualize it - as we visualize the single-particle wave function - as collapsing when a measurement is made. The discontinuous "jump" is also described as the "reduction of the wave packet." This is apt in the two-particle case, where the superposition of | + - > and | - + > states is "projected" or "reduced: to one of these two-particle states, and then further reduced to the product of independent one-particle states.
In the two-particle case (instead of just one particle making an appearance), when either particle is measured we know instantly those properties of the other particle that satisfy the conservation laws, including its location equidistant from, but on the opposite side of, the source, and its other conserved properties such as spin. No information need be "transmitted" for the experimenter to know this information.
And it is incorrect to say that one particle - A - is sent one way and another particle - B - is sent the other way, so that when A is measured, information must be sent to B. Nothing is known until one is measured - and that either one - since they are indistinguishable particles.
Finally, since the two particles, once measured, are in a spacelike separation,there is no clear "backward-in-time" or "forward-in-time" relation between them. As first pointed out by C.W.Rietdijk in 1966, then by Hilary Putnam a year later, and Roger Penrose in his 1989 The Emperor's New Mind, in different reference frames A can occur before B or vice versa.
John Bell suggested there might be a preferred frame to analyze the problem of entanglement and nonlocality. The preferred frame is the one in which particles A and B are measured simultaneously, which is what happens from the particles' viewpoint.
Conscious Experience Sent Backward in Time?
Hameroff cites Benjamin Libet's belief that something about conscious experience must refer backwards in time.
To account for his results, [Libet] further concluded that subjective information is referred backwards in time from the time of neuronal adequacy to the time of the EP (Figure (Figure9B).9B). Libet's backward time assertion was disbelieved and ridiculed (e.g., Churchland (1981), Pockett (2002), but never refuted (Libet, 2002, 2003).
Pockett's 2002 criticism is not the important one. In 2004 she cited Daniel Pollen's 1975 research that showed direct cortical surface stimulation inhibits neuronal activity for several hundred milliseconds. THis was the cause of Libet's observed latency. There is no need for anything to go backward in time.
Moreover, the more familiar Libet data on the readiness potential, may only show the mind developing alternative possibilities for action just before actually and consciously deciding. The idea that the early RP is already a decision, rather than the forming of an intention, is simply misinterpretation.
The abrupt and rapid decisions to flex a finger measured by Libet bear little resemblance to the kinds of two-stage deliberate decisions for which we can first freely generate alternative possibilities for action, then evaluate which is the best of these possibilities in the light of our reasons, motives, values, and desires - first "free," then "will."
For Teachers
For Scholars
Chapter 1.5 - The Philosophers Chapter 2.1 - The Problem of Knowledge
Home Part Two - Knowledge
Normal | Teacher | Scholar |
26fc674e8906622b | Tuesday, September 18, 2007
Not quite infinite
Lubos has a memo where he discusses how physicists make (finite) sense of divergent sums like 1+10+100+1000+... or 1+2+3+4+5+... . The last is, as string theorists know, of course -1/12 as for example explained in GSW. Their trick is to read that sum as the value at s=-1 of and define that value via the analytic continuation of the given expression which is well defined only for real part of s>1.
Alternatively, he regularises as . Then, in an obscure analogy with minimal subtraction throws away the divergent term and takes the finite remainder as the physical value.
He justifies this by claiming agreement with experiment (here in the case of a Casimir force). This, I think, however, is a bit too weak. If you rely on arguments like this it is unclear how far they take you when you want to apply them to new problems where you do not yet know the answer. Of course, it is good practice for physicists to take calculational short-cuts. But you should always be aware that you are doing this and it feels much better if you can say "This is a bit dodgy, I know, and if you really insist we could actually come up with a rigorous argument that gives the same result.", i.e. if you have a justification in your sleeve for what you are doing.
Most of the time, when in a physics calculation you encounter an infinity that should not be there (of course, often "infinity" is just the correct result, questions like how much energy I have to put into the acceleration of an electron to bring it up to the speed of light? come to my mind), you are actually asking the wrong question. This could for example be because you made an idealisation that is not physically justified.
Some examples come to my mind: The 1+2+3+... sum arises when you try to naively compute the commutator of two Virasoro generators L_n for the free boson (the X fields on the string world sheet). There, L_n is given as an infinite sum over bilinears in a_k's, the modes of X. In the commutator, each summand gives a constant from operator ordering and when you sum up these constants you face the sum 1+2+3+...
Once you have such an expression, you can of course regularise it. But you should be suspicious that it is actually meaningful what you do. For example, it could be that you can come up with two regularisations that give different finite results. In that case you should better have an argument to decide which is the better one.
Such an argument could be a way to realise that the infinity is unphysical in the first place: In the Virasoro example, one should remember that the L_n stand for transformations of the states rather than observables themselves (outer vs. inner transformations of the observable algebra). Thus you should always apply them to states. But for a state that is a finite linear combination of excitations of the Fock vacuum there are always only a finite number of terms in the sum for the L_n that do not annihilate the state. Thus, for each such state the sum is actually finite. Thus the infinite sum is an illusion and if you take a bit more care about which terms actually contribute you find a result equivalent to the -1/12 value. This calculation is the one you should have actually done but the zeta function version is of course much faster.
My problem with the zeta function version is that to me (and to all people I have asked so far) it looks accidental: I have no expansion of the argument that connects it to the rigorous calculation. From the Virasoro algebra perspective it is very unnatural to introduce s as at least I know of no way to do the calculation with L_n and a_k with a free parameter s.
Another example are the infinities that arise in Feynman diagrams. Those arise when you do integrals over all momenta p. There are of course the usual tricks to avoid these infinities. But the reason they work is that the integral over all p is unphysical: For very large p, your quantum field theory is no longer the correct description and you should include quantum gravity effects or similar things. You should only integrate p up the scale where these other effects kick in and then do a proper computation that includes those effects. Again, the infinity disappears.
If you have a renormalisable theory you are especially lucky: There you don't really have to know the details of that high energy theory, you can subsume them into a proper redefinition of your coupling constants.
A similar thing can be seen in fluid dynamics: The Navier-Stokes equation has singular solutions much like Einstein's equations lead to singularities. So what shall we do with for example infinite pressure? Well, the answer is simple: The Navier-Stokes equation applies to a fluid. But the fluid equations are only an approximation valid at macroscopic scales. If you look at small scales you find individual water molecules and this discreteness is what saves you actually encountering infinite values.
There is an approach to perturbative QFT developed by Epstein and Glaser and explained for example in this book that demonstrates that the usual infinities arise only because you have not been careful enough earlier in your calculation.
There, the idea is that your field operators are actually operator valued distributions and that you cannot always multiply distributions. Sometimes you can, if their singularities (the places where they are not a function but really a distribution) are in different places or in different directions (in a precise sense) but in general you cannot.
The typical situation is that what you want to define (for example delta(x)^2) is still defined for a subset of your test functions. For example delta(x)^2 is well defined for test functions that vanish in a neighbourhood of 0. So you start with a distribution defined only for those test functions. Then, you want to extend that definition to all test-functions, even those that are finite around 0. It turns out that if you restrict the degree of divergence (the maximum number of derivatives acting on delta, this will later turn out to be related to the superficial scaling dimension) to be below some value, there is a finite dimensional solution space to this extension problem. In the case of phi^4 theory for example the two point distribution is fixed up to a multiple of delta(x) and a multiple of the d'Alambertian of delta(x), the solution space is two dimensional (if Lorentz invariance is taken into account). The two coefficients have to be fixed experimentally and of course are nothing but mass and wave function renormalisation. In this approach the counter terms are nothing but ambiguities of an extension problem of distributions.
I has been shown in highly technical papers, that this procedure is equivalent to BPHZ regularization and dimensional regularisation and thus it's save to use the physicist's short-cuts. But it's good to know that the infinities that one cures could have been avoided in the first place.
My last example is of slightly different flavour: Recently, I have met a number of mathematical physicists (i.e. mathematicians) that work on very complicated theorems about what they call stability of matter. What they are looking at is the quantum mechanics of molecules in terms of a Hamiltonian that includes a kinetic term for electrons and Coulomb potentials for electron-electron and electron-nucleus interactions. The position of the nuclei are external (classical) parameters and usually you minimise them with respect to the energy. What you want to show is that the spectrum of this Hamiltonian is bounded from below. This is highly non-trivial as the Coulomb potential itself alone is not bounded from below (-1/r becomes arbitrarily negative) and you have to balance it with the kinetic term. Physically, you want to show that you cannot gain an infinite amount of energy by throwing an electron into the nucleus.
Mathematically, this is a problem about complicated PDE's and people have made progress using very sophisticated tools. What is not clear to me is if this question is really physical: It could well be that it arises from an over-simplification: The nuclei are not point-like and thus the true charge distribution is not singular. Thus the physical potential is not unbounded from below. In addition, if you are worried about high energies (as would be around if the electron fell into a nucleus) the Schrödinger equation would no longer be valid and would have to be replaced with a Dirac equation and then of course the electro-magnetic interaction should no longer be treated classically and a proper QED calculation should be done. Thus if you are worried about what happens to the electron close to the nucleus in Schrödinger theory, you are asking an unphysical question. What still could be a valid result is that you show (and that might look very similar to a stability result) is that you don't really get out of the area of applicability of your theory as the kinetic term prevents the electrons from spending too much time very close to the nucleus (classically speaking).
What is shared by all these examples, is that some calculation of a physically finite property encounters infinities that have to be treated and I tried to show that those typically arise because earlier in your calculation you have not been careful and stretched an approximation beyond its validity. If you would have taken that into account there wouldn't have been an infinity but possible a much more complicated calculation. And in lucky cases (similar to the renormalisable situation) you can get away with ignoring these complications. However you can sleep much better if you know that there would have been another calculation without infinities.
Update: I have just found a very nice text by Terry Tao on a similar subject to "knowing there is a rigorous version somewhere".
Joe Polchinski said...
In chapter 1 of my book, eq. 1.3.34, I derive the `correct' value of this infinite sum by the requirement that one cancel the Weyl anomaly introduced by the regulator by a local counterterm; this fixes the finite value completely.
At various points later in the book (see index item `normal ordering constants') I derive the constant by a fully finite calculation that respects the Weyl symmetry throughout.
Robert said...
For those readers who don't have Joe's book at hand let me reproduce his argument: In the cut-off version, epsilon is if fact dimension-full and a constant, n independent term would as well be the consequence of a world sheet cosmological constant. Thus the 1/epsilon^2 is in fact a renormalisation of the world-sheet cosmological constant. This would be in conflict with Weyl invariance and thus one has to add a counter term which makes it vanish.
This is what I should have written instead of calling the argument "obscure".
This leaves me still looking for a physical justification for the introduction of s in the zeta regularisation and the hope that physics is actually analytic in s. Maybe this could be related to dimensional regularisation on the world sheet?
Lumo said...
Dear robert, I am somewhat confused by your skepticism. A similar comment to yours by ori - I suppose it could even be Ori Ganor - appeared on my blog.
Why I am confused? Because I think that Joe's argument is, at the level of physics, a rigorous argument. Let me start with the vacuum energy subtraction.
We require Weyl invariance of the physical quantities. So the total zero-point function must vanish. It is clearly the case because such a result is dimensionful and any dimensionful quantity has a scale and breaks scale invariance.
So one exactly needs to add a counterterms to have the total vacuum energy vanish and this counterterm thus exactly has the role of killing the 1/epsilon^2 term. Joe has a lot of detailed extra factors of length etc. in his formulae to make it really transparent how the terms depend on the length. This makes the mathematical essence of the regularization more convoluted than it is but it should make the physical interpretation much more unambiguous.
Now the zeta function.
You ask about the "hope" that physics is analytical in complex "s". I don't know why you call it a hope. It is a easily demonstrable fact that is, as you correctly hint, analogous to the case of dim reg. Just substitute a complex "s" and calculate what the result is. You only get nice functions so of course the result is locally holomorphic in "s".
Just like in the case of dimreg, one doesn't have to have an interpretation of complex values of "s". The only thing we call "physics for complex s" are the actual formulae and their results and they are clearly holomorphic.
Beisert and Tseytlin have checked a highly nontrivial zeta-function regularization of some AdS/CFT spinning calculation up to four loops. That's where they argued to understand the three-loop discrepancy as an order of limits issue.
See also a 600+ citation paper by Hawking who checks curved spaces in all dimensions etc. These regularizations work and it's no coincidence.
Robert said...
you misunderstand me. I have no doubt that in field theory calculations where for example you want to compute tr(log(O)) for some operator O as this gives you the 1 loop effective action zeta function regularisation of log(0) works as well as any other regularisation (and often nicer as it preserves more symmetries than more ad hoc versions).
What I am looking for is a version where you not only reinterpret n as 1/n^s for s=-1 once you encounter an obviously divergent expression but start out with something that includes s from the beginning such that for say Re(s)>1 everything is finite at all stages and in the end you can take s->-1 analytically. Can you come up with (s dependent) definitions of a_n and their commutation relations or L_n such that the commutator of L_n's (which is something you calculate rather than define) gives the expression including s?
BTW, in the LQG version of the string, the correct constant appears as Tr([A_2,B_2]) where A and B are generators of diffeomorphisms and the subscript 2 refers to
A_2 = (A + JAJ)/2
where J multiplies positive modes by i and negative modes by -i. Thus it's the 'beta'-part in the language of Boguliubov transformations. Needless to mention this expression is in fact finite even though there is a trace in an infinite dimensional Hilbert space as can be shown that A_2 is a Hilbert-Schmidt operator (that is the product of two such operators has a finite trace). Of course you need an infinite dimensional space for a commutator to have a non-vanishing trace.
Lumo said...
More generally about your comments, Robert.
I think that it is entirely wrong to say "this argument is dodgy blah blah blah" (in the context of the vacuum energy subtraction) because the argument is transparent and rigorous when looked at properly. Both of them in fact.
Also, I disagree with your general statement that an infinity means that we have asked a wrong question. Only IR divergences are about wrong questions. UV divergences are about a theory being effective. But even QCD that is UV finite gives UV divergences - they're responsible e.g. for the running. There's no way to ask a better question about the exact QCD theory that we know and love that would remove the infinity.
QCD also falsifies your statement that "the integral over all p is unphysical". It's not unphysical. QCD is well-defined at arbitrarily high values of "p" but it still requires one to deal with and subtract the infinities properly.
Sorry to say but the comments that physicists are always expected to say "we're dodgy, everything is unreliable, we need experiments" just mean that you don't quite understand the technology. Your comments are Woit-Lite comments. In each case, there is a completely well-defined answer to the questions whether a particular symmetry constrains the terms or not, whether a given regularization preserves the symmetry or not, and consequently, whether a given regularization gives a correct result or not. There is no ambiguity here whatsoever and the examples listed are guaranteed to give the right results.
Lumo said...
Dear Robert, concerning your comment, I understood pretty well that you wanted to define the whole theory for complex unphysical values of "s".
That's exactly why I pre-emptively wrote that it is wrong to try to define the whole theory for wrong values of "s" just like it is wrong to define a theory in a complex dimension "d" in dimreg. Such a theory probably doesn't exist, especially not in the dimreg case.
But you don't need the full theory in 3.98+0.2i spacetime dimensions in order to prove that dimreg preserves gauge invariance, do you? In the same way, you don't need to define the operator algebra in a CFT for complex values of "s" or something like that.
I don't understand how to combine this discussion with the "LQG version of a string". The texts I wrote above were trying to help to clarify how the quantities actually behave in correct physics while LQG is a supreme example how the divergences and other things are treated physically incorrectly.
Of course that things I write are incompatible with the LQG quantization. But the reason is that the LQG quantization is wrong while e.g. Joe's arguments are correct. Your conclusion that physics is ambiguous is not a correct conclusion.
Robert said...
All I am saying is you should have a way (fine if done retroactively) to treat infinities without them actually occurring. And if you do that by adding an epsilon dependent counter term (that diverges by itself when you take epsilon to 0) that's fine with me. As long as you can physically justify it.
Otherwise you are prone to arguments like
And sorry, "an argument is correct if it gives the correct result" is not good enough. I would like to have a way to decide if an argument is valid before I know the answer from somewhere else.
Robert said...
By "LQG string" I meant our version where we (in a slightly mathematically more careful language) re-derive the usual central charge (same content, different formalism) rather than the polymer version (different content of which you know I do not approve).
Lumo said...
Dear Robert, I disagree that one can only trust a theory if infinities never occur. A particular regularization that replaces infinities by finite numbers as the intermediate results is just a mathematical trick but the actual physical result is independent of all details of the regularization which really means that it directly follows from a correct calculation inside the theory that contains these infinities.
In other words, you only need the Lagrangian of standard QCD (one that leads to divergent Feynman diagrams) plus correct physical rules that constrain/dictate how to deal with infinities to get the right QCD predictions. You don't need any theory that is free of infinities. Such a theory is just a psychological help if one feels uncertain.
I agree with you that one should be able to decide whether an argument is correct before the result is compared with another one. And indeed, it is possible. This is what this discussion is about. You argue that it is impossible to decide whether an argument or calculation is correct as long as it started with an infinite expression, and others are telling you that it is possible.
If you rederive the same physics in what you call "LQG string", why do you talk about "LQG string" as opposed to just a "string"? Cannot you reformulate your argument in normal physics as opposed to one of kinds of LQG physics?
Sabine's calculation you linked to is manifestly wrong because she doubles one of the infinities in order to subtract them and get a wrong finite part. There was no symmetry principle that would constrain the right result in her calculation. The original integral was perfectly convergent and she just added (2-1) times infinity (by rescaling the cutoff by a factor of 2 in one term), pretending that 2-1=0. I don't quite know why you think that I am prone to such arguments. ;-) Maybe Sabine is but I am not.
She didn't make any proper analysis of counterterms, any proper analysis of any symmetries, and she didn't make any analytical continuation of anything to a convergent region either. Why do you think it's analogous to a valid calculation?
If you mentioned it because of the relationship between 1+2+3+... and 1-2+3-4+..., the derived relationship between them may remind you of Sabine's wrong calculation. But it is not analogous. These rescalings and alternating sums can be calculated by the zeta function regularization that allows me to make these arguments adding subseries and rescaling them.
For example, you get the correct sum for antiperiodic fields, 1/2 + 3/2 + 5/2 + ... can also be calculated by taking the normal sum 1+2+3 and subtracting a multiple of it from itself.
So if the zeta-function reg gives a Weyl-invariant value of the alternating sum, it also gives the right value of the normal sum as well as the shifted Neveu-Schwarz sum and others.
Lumo said...
Let me say more physically what she actually did. In order to calculate a convergent integral in the momentum space (x), she wrote it as a difference of two divergent ones. That would be perfectly compatible with physics and nothing wrong could follow from it. The error only occurs when she rescales the "x" by a factor of 1/2 or 2 in the two terms. This is equivalent to confusing what is her cutoff - by a factor of two up or down. Because her integral is logarithmically divergent, it is a standard example of a running coupling. So she has effectively added "g(2.lambda)-g(lambda/2)" - the difference of gauge couplings at two different scales, pretending that it is zero. Of course, it is not zero: this is exactly the way how running couplings arise.
An experienced physicist would never make this error - using inconsistent cutoffs for different contributions in the same expression. Hers is just a physics error, if we interpret it as a physics calculation. One can't say that her calculation is analogous to the correct calculations such as Joe's subtractions of the vacuum energy even though it seems that this is precisely what you're saying.
There is a very clear a priori difference between correct and wrong calculations: correct ones have no physical errors of this or other kinds.
Robert said...
My final comment for tonight: For those readers who did not get this from my comments above: I completely agree with Joe's derivation of including a regularisation and imposing Weyl invariance. Do not try to convince me it is correct. It is.
My point about Sabine's calculation was that you can of course (and nobody I believe doubts this) produce non-sense if you are not careful about infinite quantities. Once you regulate, the error is obvious.
My final remark (and this is not serious, thus I will delete any comments referring to it) is that there is a shorter version of Sabine's argument which goes: "int dx/x is always zero in dimensional regularisation" (this is how I learned to actually apply dim reg from a particle phenomenologist: Bring your integrals to form finite + int dx/x and set the second term to zero).
Anonymous said...
When physicists proceed 'formally', its usually explicitly stated as such.
There are many examples throughout history where this actually turns out to be wrong when done rigorously.
The interesting thing (for mathematicians) is when it turns out to be correct, as it usually means theres some hidden principle in there somewhere and often can lead to new and nontrivial mathematics (eg distribution theory).
Lumo said...
Dear Robert, if you exactly agree with Joe's derivation, why do you exactly write that this derivation is based on an "obscure analogy with minimal subtraction"?
There is nothing obscure about it and, if looked at properly, there is nothing obscure about the minimal subtraction either. One can easily prove why it works whenever it works.
I agree that one must be careful about infinite quantities but we seem to disagree what it means to be careful. In my picture, it means that you must carefully include them whenever they are nonzero. In the polymer LQG string that you researched, for example, they are very careful to throw all these important terms arising as infinities away which is wrong, and your work is an interpolation between the correct result and the wrong result which is thus also wrong, at least partially. ;-)
I disagree that your "nonserious" comment is not serious. It is absolutely serious. Don't try to erase this comment because of it. The comment that you call "nonserious" is the standard insight - certainly taught in QFT courses at most good graduate schools - that power law divergences are zero in dim reg. In the case of the log divergence it is still true as long as you consistently extract the finite part by taking correct limits of the integral.
Thomas Larsson said...
Why are zeta-function techniques better than simply calculating the action of the Virasoro generators on some state? It is very easy to compute [L_m, L_-m] |0>, and you can read off the central charge from this, without ever having to introduce any infinities.
What is less trivial is how to generalize this to d dimensions, where the diffeomorphism generators are labelled by vectors m = (m_0, m_1, ...) in Z^d rather than an scalar integer m in Z. In fact, I was stuck on this problem for many years (and ran out of funding in the meantime), before it was solved in a seminal paper by Rao and Moody.
amused said...
Hi Robert, Lubos, and anyone else,
I have a question/doubt about something Lubos wrote in his post on this topic and would appreciate your views or clarifications. (Normally I would post this on the blog of the person who wrote it,
but seeing as in this case it's Lubos...hope you don't mind me posting it here instead)
LM wrote:
"The fact that different regularizations lead to the same final results is a priori non-trivial but can be mathematically demonstrated to be inevitably true by the tools of the renormalization group."
Is this really true? E.g., I don't recall any mention of this in Peskin & Schroeder's book, even though they discuss RG group in detail.. To explain my doubts, consider the case of perturbative QCD: two different regularizations which preserve gauge invariance are dimensional reg. and lattice formulation. In fact there are a whole lot of different possible lattice discretizations, and not all of them can be expected to produce results which agree with the physical ones obtained using dimensional regularization. E.g., there must at least be some kind of locality condition on the lattice QCD formulation that one uses, and I don't think anyone knows at present what the mildest possible locality requirement is that guarantees that the lattice formulation will produce correct results. In light of this, I don't see how it can be asserted that different regularizations (which preserve the appropriate symmetries) are always guaranteed to give the same final results...
Robert said...
I know there is some literature about different regularisation/renormalisation schemes giving identical results but trying to locate some using google scholar was unsuccessful. I know for sure that BPHZ and Epstein-Glaser have been shown to be equivalent and would be surprised if the ones more often used in practical calculations (i.e. dim reg) would not have been connected as well. Step zero for such a proof (which in character is mathematical and not very physics oriented) is to define what exactly you mean by scheme X. That would have to be a prescription that works at all loop order for all graphs and not like in QFT textbooks where a few simple graphs are calculated (most often only one loop so they do not encounter overlapping divergencies) and then a "you proceed along the same lines for other graphs" instruction is given.
Lattice regularisation, however, is very different in spirit as it is not perturbative (it does not expand in the coupling constant) so it is not supposed to match a perturbative calculation up to some fixed loop order. Thus it does not compare directly with Feynman graph calculations. Only the continuum limit of the lattice theory is supposed to match with an all loop calculation that also takes into account non-perturbative effects.
In fact, the lattice version of gauge theories is probably the best definition of what you mean by "the full quantum theory including non-perturbative effects" as those are not computed directly in perturbation theory and there are only indirect hints from asymptotic expansions and of course S-duality.
OTOH, starting from the lattice theory, you have to show that the continuum limit in fact has Lorentz symmetry and is causal, two properties that this regularisation destroys. Once you managed this, it's likely you are not too far from claiming the 1 million dollars:
amused said...
Thanks Robert. You seem to have in mind the nonperturbative lattice formulation used in computer simulations, but there is also a perturbative version which does expand in the coupling constant - see, e.g., T.Reisz, NPB 318 (1989) 417 where perturbative renormalizability of lattice QCD was proved to all orders in the loop expansion. However, it is not clear to me that this will always give the correct physical results for any choice of lattice QCD formulation. There must surely be some conditions on the formulation; in particular some minimal locality condition. That's why I was surprised by the claim that any regularization (preserving the symmetries) will must lead to the same end results
(Btw, extraction of physics results from the lattice involves perturbative calculations as well as the computer simulations. I recall some nice posts about this on the "life on the lattice" blog at some point..)
cecil kirksey said...
Interesting subject. I think I can accept the "mathematical" definition of summing divert series because the "sum" can be defined in a potentially consistent manner.
However, in any real world situation the question does it EVER make sense using such a divergent series? Would it ever make sense add (sum?)an infinite number of measurable quantites? If not exactly what is being added in ST? Thanks. |
94c064bcbe7c36a1 | Take the 2-minute tour ×
The Wikipedia article on Moseley's law seems to show that the screening of heavy atoms is by 1 electron charge exactly (in the limit of large Z, experimental precision, within nonrelativistic limits, and).
But why is this exactly one unit? The other K-shell electron is not screening exactly one unit, and this seems to be a conspiracy of other electrons. I suspect it is because of an unappreciated hole-picture of deep holes in heavy atoms (electrons missing in deep shells), and I will describe this theory briefly.
If you remove an electron from close to the nucleus, the electron-hole behaves as an object with positive charge and negative mass (this is why it orbits the nucleus that it is repelled by). The state is not a vacuum quite, because of the presence of other electrons, but the rigidity of the Fermi liquid near the nucleus of a heavy atom means that the hole behaves as a single particle. This single-particle behavior is in the potential background of the nucleus and the other electrons, and it is possible that the result can give an exact 1 unit screening. I developed the formalism a little bit to see what the form should be, but I did not see any reason for 1 unit screening. Perhaps there is none, but it looks to be more than a coincidence.
share|improve this question
I am confident that your explanation is at least on the right track. At any rate, even the Wikipedia article does say that $(Z-1)$ arises because of differences in the electron-electron interactions between the initial and final states and holes could be very useful to quantify this difference. It must be possible to justify the number 1 in some way. – Luboš Motl Apr 13 '12 at 10:06
Ron, what about charge conservation? In truth there are Z-1 electrons going around and the hole must take its apparent charge from the Z of the nucleus to have charge conservation of the system from afar. Charge number is quantized after all in units of e.This might leave the nucleus at Z-1. – anna v Apr 13 '12 at 10:38
@annav: This doesn't work--- the Z-1 is only for K-shell, and I don't see a reason this is linked to the number of electrons. I also am not sure this is nonrelativistically exact--- I just saw a convergence in the values at large Z by eye. – Ron Maimon Apr 13 '12 at 19:32
Are you saying that by "remove" you mean the electron is on a higher energy shell but still attached to the atom? If it is off the atom then the atom is ionized and will have a charge +1 . – anna v Apr 14 '12 at 3:10
look at "binding energy" here : en.wikipedia.org/wiki/Ionization_energy . – anna v Apr 14 '12 at 3:14
1 Answer 1
I doubt that it is exactly 1. See Effective_nuclear_charge for references therein. The Clementi tables goes up to Radon and uses Slater Type orbitals. You can calculate it on your own with a quantum chemistry programm, defining the correct symmetry group for your exited wavefunction with an empty k-shell. The 1s electron coefficients are not integer but close to (Z-1).
Another important point is the inclusion of relativistic effects, most important for s-shell electrons in heavy elements. E.g. the hyperfine interaction (the Fermi contact term) is increased by about 20% for heavy elements
share|improve this answer
I know it isn't exactly 1 for finite Z, the question is whether it asymtptotes to 1 for large Z. I see it pass 1, but I think for large atoms it is even closer to 1. It might be coincidental, but I don't think so. – Ron Maimon Apr 13 '12 at 16:19
I don't want relativistic--- the question is whether a nonrelativistic enormous Z potential with Z electrons and one K-shell hole has energy given by Bohr model with (Z-1). It's a well defined mathematical question, and I don't see a clear "no". If you find the answer for Z=400 and it is significanty different from 1, it is good evidence that the asymptotic value is not exactly 1, but maybe 1.08 or something. I was trying to see if there is a reason for a near-1 value, and this limit of Z going to infinity is the only thing I could think of. But +1 for the link – Ron Maimon Apr 13 '12 at 19:31
Also, you might be right, it might not be 1 exactly, and if you give a little evidence, I will accept. The problem is that I don't know how far you can get a good solution for the K-shell screening--- I never looked at the numerical methods. Experimentally, extremely heavy atoms could screw up the convergence due to relativistic corrections. – Ron Maimon Apr 13 '12 at 19:46
Please wait for an anwer from another user, that is a specialist in x-ray analysis or a particle physicist. The Schrödinger equation is just analytically solvable for the hydrogen atom, for other you have to rely on numerical methods. I stumbled over this problem from the reverse - the effective potential of the valence electron for alkalis. There is always this assumption of an "light electron", the electron cloud screening Z-1 charge. But this picture is not right at all. – Alex1167623 Apr 13 '12 at 19:52
Yes--- the valence picture is only a crude rough approximation. But the inner picture should be mathematically correct because of the large separation between inner shells and outer shells in energy and distance both. This means that in the limit of large Z, the inner shell transitions and orbits are exactly (nonrelativistically) described by a one-hole dynamical picture, where the x-ray transitions are the negative mass positive charge hole going up in "n" (down in energy--- it has negative mass) with matrix elements given by the hole dipole moment. This isn't in the literature, should be. – Ron Maimon Apr 13 '12 at 20:14
Your Answer
|
75d247e9d6d39b21 | Viewpoint: Light Bends Itself into an Arc
Zhigang Chen, Department of Physics and Astronomy, San Francisco State University, San Francisco, CA 94132, USA
Published April 16, 2012 | Physics 5, 44 (2012) | DOI: 10.1103/Physics.5.44
Nondiffracting Accelerating Wave Packets of Maxwell’s Equations
Ido Kaminer, Rivka Bekenstein, Jonathan Nemirovsky, and Mordechai Segev
Published April 16, 2012 | PDF (free)
+Enlarge image Figure 1
(Left) Ref. [1]. (Right) Courtesy D. N. Christodoulides
Figure 1 Kaminer et al. showed that shape-preserving beams of light that travel along a circular trajectory emerge as solutions to Maxwell’s equations. (Left) Calculated propagation of a self-bending beam. This solution assumes the wave’s electric field is polarized in the transverse direction (TE polarization). (Right) Illustration of a nondiffracting beam bending around an obstacle.
Apart from the broadening effects of diffraction, light beams tend to propagate along a straight path. Mirrors, lenses, and light guides are all ways to force light to take a more circuitous path, but an alternative that many researchers are exploring is to prepare light beams that can bend themselves along a curved path, even in vacuum. In a paper in Physical Review Letters, Ido Kaminer and colleagues at Technion, Israel, report on wave solutions to Maxwell’s equations that are both nondiffracting and capable of following a much tighter circular trajectory than was previously thought possible [1]. Apart from fundamental scientific interest, such wave solutions may lead to the possibility of synthesizing shape-preserving optical beams that make curved left- or right-handed turns on their own. The equations describing these light waves could also be generalized to describe similar behavior in sound and water waves.
The idea for making specially shaped light waves that could bend without dispersion actually emerged from quantum physics. In 1979, Berry and Balazs realized that the force-free Schrödinger equation could give rise to solutions in the form of nonspreading “Airy” wave packets [2] that freely accelerate even in the absence of any external potential. This early work remained dormant in the literature for decades, until Christodoulides and co-workers demonstrated the optical analog of Airy wave packets: specially shaped beams of light that did not diffract over long distances but could bend (or, self-accelerate) sideways [3] (see 28 November 2007 Focus story ). Such self-accelerating Airy beams have since attracted a great deal of interest due to their unique properties, and they provide the basis for a number of proposed applications including optical micromanipulation [4], plasma guidance and light bullet generation [5], and routing surface plasmon polaritons [6] (see 6 September 2011 Viewpoint ).
A typical simplification when solving Maxwell’s wave equations is to assume the light waves are paraxial, meaning the angle between the wave vectors that constitute a wavepacket and the optical axis is small enough that the wave does not deviate too much from its propagation direction. Under this paraxial approximation, the resulting time-independent scalar Helmholtz equation takes the same form of the Schrödinger equation, and it was this relationship that led to the proposal that finite-power optical Airy beams could be attainable in experiments [3]. (Unlike other nondiffracting beams such as the well-known Bessel beams [7], Airy beams have a unique spatial phase structure, do not rely on simple conical superposition of plane waves, and can self-accelerate.) At small angles, Airy beams follow parabolic trajectories similar to ballistic projectiles moving under the force of gravity [8], but at large angles, beyond the paraxial approximation, they cannot maintain their shape-preserving property as they propagate. Therefore, it is important to identify mechanisms that could allow self-accelerating beams to propagate in a true diffraction-free manner even for large trajectory bending.
Several studies have searched for accelerating beams beyond the paraxial regime. In one study [9], nonparaxial Airy beams were sought as exact solutions of Maxwell’s wave equation, but these beams tend to break up and decay as they propagate because parts of them consist of evanescent waves. In yet another study [10], the so-called caustic method was used to effectively stretch the paraxial Airy beams to the nonparaxial regime, but these “caustic-designed” accelerating beams don’t preserve their shape like nondiffracting paraxial Airy beams. Thus, a natural question arose: Could a beam accelerate at large nonparaxial angles but still hold its shape? Since beam propagation is governed by Maxwell’s equations, it was equivalent to asking: Were there any solutions to Maxwell’s equations that allowed nondiffracting, self-accelerating beams?
In their new work, Kaminer et al. report they have found shape-preserving nonparaxial accelerating beams (NABs) as a complete set of general solutions to the full Maxwell’s equations. Differently from the paraxial Airy beams that accelerate along a parabolic trajectory, these nonparaxial beams accelerate in a circular trajectory.
To find the solutions for the NABs, Kaminer et al. started with the scalar Maxwell’s wave equation for a given polarization, such as TE (transverse electric) polarization, where the electric field is perpendicular to the direction of the wave. Since the equation exhibits full symmetry between the x and z coordinates, the solutions of shape-preserving beams must have circular symmetry. The authors therefore transformed the equation into polar coordinates and looked for shape-preserving solutions where the field amplitude didn’t vary with angle. In polar coordinates, the solution to the wave equation is a Bessel function; transforming back to Cartesian coordinates, the solution must be separated into forward and backward propagating waves in Fourier space. However, only the forward propagating part forms the desired accelerating beam, so the Kaminer et al. solutions are properly called “half Bessel wave packets.”
The authors found a solution for TM (transverse magnetic) polarization through a similar procedure. For both TE and TM polarizations, the beams preserve their shape while the quarter-circle bending could occur after a propagation distance of just 35μm. In addition, the authors studied the properties of these Bessel-like accelerating beams and found that the Poynting vector of the main lobe can turn by more than 90 [1].
The left part of Fig. 1 shows a typical solution of the shape-preserving beam, which can sweep out a quarter circle. It is important to note that this one-dimensional beam propagates initially along the longitudinal z direction while its curved trajectory in the x-z plane is determined by a Bessel function. This is quite different from the traditional nondiffracting Bessel beam, which propagates in a straight line while its two-dimensional transverse pattern follows a Bessel function [7].
As the authors point out, the nonparaxial shape-preserving accelerating beams found in their work originates from the full vector solutions of Maxwell’s equation. Moreover, in their scalar form, these beams are the exact solutions for nondispersive accelerating wave packets of the most common wave equation describing time-harmonic waves. As such, this work has profound implications for other linear wave systems in nature, ranging from sound waves and surface waves in fluids to many kinds of classical waves. Furthermore, based on previous successful demonstrations of self-accelerating Airy beams [3–6], one would expect that the nonparaxial Bessel-like accelerating beams proposed in this study could be readily realized in experiment. Apart from many exciting opportunities for these beams in various applications, such as beams that self-bend around an obstacle (Fig. 1, right), one might expect one day light could really travel around a circle by itself, bringing the search for an “optical boomerang” into reality.
1. I. Kaminer, R. Bekenstein, J. Nemirovsky, and M. Segev, Phys. Rev. Lett. 108, 163901 (2012).
3. G. A. Siviloglou and D. N. Christodoulides, Opt. Lett. 32, 979 (2007); G. A. Siviloglou, J. Broky, A. Dogariu, and D. N. Christodoulides, Phys. Rev. Lett. 99, 213901 (2007).
4. J. Baumgartl, M. Mazilu, and K. Dholakia, Nature Photon. 2, 675 (2008).
5. P. Polynkin, M. Kolesik, J. V. Moloney, G. A. Siviloglou, and D. N. Christodoulides, Science 324,229 (2009); A. Chong, W. H. Renninger, D. N. Christodoulides, and F. W. Wise, Nature Photon. 4,103 (2010).
6. P. Zhang, S. Wang, Y. Liu, X. Yin, C. Lu, Z. Chen, and X. Zhang, Opt. Lett. 36,3191 (2011); A. Minovich, A. E. Klein, N. Janunts, T. Pertsch, D. N. Neshev, and Y. S. Kivshar, Phys. Rev. Lett. 107,116802 (2011); L. Li, T. Li, S. M. Wang, C. Zhang, and S. N. Zhu, 107,126804 (2011).
7. J. Durnin, J. J. Miceli, Jr., and J. H. Eberly, Phys. Rev. Lett. 58,1499 (1987).
8. G. A. Siviloglou J. Broky, A. Dogariu, and D. N. Christodoulides, Opt. Lett. 33,207 (2008); Y. Hu, P. Zhang, C. Lou, S. Huang, J. Xu, and Z. Chen, 35,2260 (2010).
9. A. V. Novitsky and D. V. Novitsky, Opt. Lett. 34, 3430 (2009).
10. L. Froehly, F. Courvoisier, A. Mathis, M. Jacquot, L. Furfaro, R. Giust, P. A. Lacourt, and J. M. Dudley, Opt. Express 19,16455 (2011).
About the Author: Zhigang Chen
Zhigang Chen
Zhigang Chen received his Ph.D. in physics from Bryn Mawr College in 1995. He was a postdoctoral research associate and then a senior research staff member at Princeton University. He has been on the faculty in the Department of Physics and Astronomy at San Francisco State University since 1998.
Subject Areas
New in Physics |
9fc518fc12082c74 | Email updates
Open Access Research article
Valence atom with bohmian quantum potential: the golden ratio approach
Mihai V Putz
Author Affiliations
Laboratory of Computational and Structural Physical Chemistry, Biology-Chemistry Department, West University of Timişoara, Pestalozzi Street No.16, Timişoara, RO-300115, Romania
Chemistry Central Journal 2012, 6:135 doi:10.1186/1752-153X-6-135
Received:6 September 2012
Accepted:29 October 2012
Published:12 November 2012
© 2012 Putz; licensee Chemistry Central Ltd.
The alternative quantum mechanical description of total energy given by Bohmian theory was merged with the concept of the golden ratio and its appearance as the Heisenberg imbalance to provide a new density-based description of the valence atomic state and reactivity charge with the aim of clarifying their features with respect to the so-called DFT ground state and critical charge, respectively.
The results, based on the so-called double variational algorithm for chemical spaces of reactivity, are fundamental and, among other issues regarding chemical bonding, solve the existing paradox of using a cubic parabola to describe a quadratic charge dependency.
Overall, the paper provides a qualitative-quantitative explanation of chemical reactivity based on more than half of an electronic pair in bonding, and provide new, more realistic values for the so-called “universal” electronegativity and chemical hardness of atomic systems engaged in reactivity (analogous to the atoms-in-molecules framework).
Electronegativity; Chemical hardness; Bohmian mechanics; Heisenberg imbalance equation; Slater electronic density
Graphical abstract
Recently, the crucial problem regarding whether chemical phenomena are reducible to physical ones has had an increasingly strong impact on the current course of conceptual and theoretical chemistry. For instance, the fact that elements arrange themselves in atomic number (Z) triads in approximately 50% of the periodic system seems to escape custom ordering quantifications [1,2]. The same applies to the following: the fascinating golden ratio (τ) limit for the periodicity of nuclei beyond any physical first-principle constants, which provides specific periodic laws for the chemical realm [3-6]; the fact that atoms have no definite atomic radii in the sense of a quantum operator, and even the Aufbau principle, which, although chemically workable, seems to violate the Pauli Exclusion Principle [7]; at the molecular level, the well-celebrated reaction coordinate, which, although formally defined in the projective energy space, does not constitute a variable to drive optimization in the course of chemical reactions, appearing merely as a consequence of such reactions [8]; the problem of atoms in molecules [9], i.e., how much of the free atoms enter molecules and how much independency the atoms preserve in bonding; and chemical bonding itself, which ultimately appears to be reinterpreted as a special case of bosonic condensation with the aid of bondons – the quantum bosons of chemical bonding, which, without being elementary, imbue chemical compounds with a specific reality [10,11].
In the same context, the specific measure of chemical reactivity, electronegativity (χ), which lacks a definite quantum operator but retains an observable character through its formal identity with the macroscopic chemical potential χ=-μ[12,13], was tasked with carrying quantum information within the entanglement environment of Bohmian mechanics [14-17] and has thus far been identified with the square root of the so-called quantum potential χ = VQ1/2[6].
However, the striking difference between an atom as a physical entity, with an equal number of electrons and protons (thus in equilibrium), and the same atom as a chemical object, with incomplete occupancy in its periphery quantum shells (thus attaining equilibrium by changing accepting or releasing electrons), is closely related to the electronegativity phenomenology in modeling chemical reactivity. Moreover, this difference triggers perhaps the most important debate in conceptual chemistry: the ground vs. valence state definition of an atom.
The difficulty may be immediately revealed by considering the variation in the total energy (of the ground and/or valence state – see below for an explanation of their difference) around the physical equilibrium (neutral atom) attained between the release (by ionization, I) and receipt (through affinity, A) of electrons toward chemical equilibrium (in molecules, chemical bonding). Accordingly, the curve passing through these points apparently only behaves as shown in Figure 1(a), while in all systems (with numerical I and A), the obtained interpolating curve presents a minimum toward accepting electrons (see Figure 1(b)), thus confirming the electronegativity concept as a chemical reality, although with a predicted fractional charge (for example, the critical charge N*) on an atom at chemical equilibrium (i.e., not reducible/comprehensible to/by an ordinary physical description of atoms).
thumbnailFigure 1. The two energy curves (thick lines) for the quantum atom in (a) the apparent or reactive ground state and (b) the shifted or critical ground state.
However, the physical-to-chemical paradox continues in an even more exciting fashion as follows. When, in light of the above discussion, electronegativity is recognized with the two-point limits shown in Figure 1(b), namely [13,18]
the limits represent tangents to a curve that does not describe chemical equilibrium but an excited state driven by the parabolic form
which happens to correspond to the celebrated density functional theory (DFT) working energy expression [13,19-21] written in terms of electronegativity and chemical hardness, respectively defined as follows [13,22-25]:
The point is that curve (2) is not chemically minimized, although it is very often assumed to be in the DFT invoked by the chemical reactivity literature [13,26-29]; however, the curve cannot be considered indicative of a sort of ground state (neither reactive nor critical states of Figure 1). Additionally, by comparing the curves of Figure 1 (a) and (b), the curve of eq. (2) occurs above both the reactive and critical curves of Figure 1; it thus should represent the chemical valence state with which to operate. Therefore, much caution should be taken when working with eq. (2) in assessing the properties of atoms, molecules, atoms in molecules, etc. Nevertheless, this is another case of chemistry not being reducible to physics and should be treated accordingly. It is worth noting that Parr, the “father” of eq. (2) and a true pioneer of conceptual density functional theory [30,31], had tried to solve this dichotomy by taking the “valence as the ground state of an atom in a perturbed environment”. This statement is not entirely valid because perturbation is not variation such that it may be corrected by applying the variational principle to eq. (2), for example. In fact, using such variation should be considered a double variational technique that is necessary to arrive at the celebrated chemical reactivity principles of electronegativity and chemical hardness, as recently shown [32].
The current line of work takes a step forward by employing the double variation of the parabolic energy curve of type (2) to provide the quantum (DFT) valence charge of an atom (say, N**) and to compare it either quantitatively and qualitatively with the chemical critical charge N*. The goal of these efforts is to gain new insight into the valence state and chemical reactivity at the quantum level. To this end, the relation of Bohmian mechanics to the concept of the golden ratio will be essential and will be introduced in the following.
The consequences of the joint consideration of Bohmian mechanics and the golden ratio for the main atomic systems will be explored, and the quantum chemical valence state will be accordingly described alongside the so-called universal electronegativity and chemical hardness, refining the work of Parr and Bartolotti [33] as well as generalizing the previous Bohmian-Boeyens approach [3,4].
Background methods
Two apparently disjoint theories of matter will be employed to characterize the quantum valence of an atom: Bohmian mechanics – furnishing the main equation for total energy – and the fundamental quantum mechanics through the Heisenberg combined with de Broglie principles providing the wave-particle indeterminacy framework in which the golden ration dependency of Z/N naturally appears as quantifying the valence states of atoms considered the “ground state” of the atomic chemical reactivity.
Bohmian mechanics
Because of the need to reduce Copenhagen’s indeterminacy for quantum phenomena, i.e., by associating it the quantum description of “Newtonian” forms of motion, though by preserving probability densities, quantum averages, etc., the so-called “minimalist” quantum theory may be formulated following the Bohm quantum mechanical program as follows.
One begins with the general eikonal wave-function form [14]
which represents the mid-way between wave and particle mechanics because it contains both information regarding Hamilton-Jacobi theory and the Wentzel-Kramers-Brillouin (WKB) approximation [34] through the principal phase function S(r,t) while preserving the amplitude relationship with the systems’ quantum density:
In this framework, the Schrödinger equation,
decomposes into real and imaginary parts. The real part can be expressed as follows:
representing a continuous “fluid” of particles driven by the “guidance” momentum:
moving under a joint external potential V(r) as well as under the so-called quantum potential influence:
The consequences are nevertheless huge. For example, this methodology allows for the interpretation of the trajectories orthogonal to constant surfaces, by cancelling the Laplacian of the wave fronts ∇ r2S(r, t) = 0, which are obtained from eqs. (8) and (9) as the quantum equation of motion:
Equation (11) resembles the classical Newtonian acceleration-force relationship only in a formal way; in fact, it generalizes it: it prescribes acceleration motion even in the absence of an external classical potential. This is essential in explaining why the inter-quark forces increase with the increase in inter-quark distances, no matter how great a separation is considered (a specific quantum effect), due to the presence of a quantum potential that does not fall off with distance as V does. It also nicely explains the observed interference patterns in double-slit experiments in the absence of classical forces. Alike, eq. (11) also appears suited for modeling chemical reactivity for the valence atoms as free particles in a virtually infinite potential environment to characterize their reactive behavior. In this regard, it is worth considering for such atoms the uniform motion by having ∂ p/∂ t = 0 through the time-constant associated wavefront condition and action S(r=cnst.,t)=cnst. (equivalent with Lagrangean constancy), in all given chemical space-points (atomic basins within molecule complex) [35]. This picture is also equivalently to have
applied to eq. (8). By doing so, one obtains
which can be rearranged as follows:
such that the total energy of a the valence system is now entirely driven by the quantum potential:
At this point, one can see that when turning to electronegativity and combining eq. (15) with DFT definition (3), one obtains a generalization of the previous Boeyens formulation [6]:
which is the variation in the quantum potential with electron exchange under a constant classical or external potential.
However, for a quantum characterization of the valence state, we are interested in how the energy described by eq. (15) varies under a quantum potential (10)
when the above relations (6) and (10) are substituted into eq. (15).
It is worth noting that although we obtained the total energy (17) in the Bohmian mechanics context, it showcases a clear electronic density dependency, not under a density functional (as DFT would require) but merely as a spatial function, which is a direct reflection of the entanglement behavior of Bohmian theory through the involvement of a quantum potential. However, in most cases, and especially for atomic systems, eq. (17) will yield numerical values under custom density function realizations.
Golden ratio imbalance for valence states of atoms
Atomic stability and periodicity remain major issues in the structural theories of matter; fortunately, they both have been largely solved by wave-particle (W/P) complementarily quantum behavior; phenomenologically, such relationship can be expressed as “WAVEPARTICLE = constant”, while it may be quantized (by Planck’s constant h) in the light of Heisenberg principle as [36]
Remarkably, when fixing the particle’s observable property, say O, while letting wave information to vary, say ΔO, equation (18a) takes the workable form
having as the preeminent realization the Bohr-de Broglie formulation a, leading with the first rationalization of the atomic periodicity [37]. However, when about the atomic chemical reactivity a similar analysis may be provided in terms of the number of electrons to atomic number ratio (N/Z): one may fix the observable (“particle”) character of the reactive atomic system by the ratio itself
while modeling its evolving (“wave”) character by the natural variation of the previous ratio in terms of exchanged electrons respecting the neutral state:
When combining eqs. (19a) and (19b) into eq. (18b) on the lowest quantized state (nO=1), the “ground state” of atomic reactivity that is the atom in its valence state so to speak, and within atomic units’ formulation (i.e. by putting h=1, since the actual reactivity quantification involves only numbers with no dimension), one has the so called Heisenberg imbalance equation for valence atoms
that can be rewritten as
Eq. (20b) has the elementary acceptable solution
which establishes, the direct “chemical” connection between the number of electrons and the atomic charge by means of the golden ratio
generalizing the “physical” connection between nuclear (cosmic) synthesis at high pressure and atomic stability in the gas phase (Z=N); one has therefore the actual physical-to-chemical electronic charge – atomic number relationships
Worth remarking the results of type (20) and (22), here based on chemical reactivity specialization of Heisenberg type equations (18a) and/or (18b), were previously obtained at the level of neutron-protonic imbalance, inside the atomic nuclei, based on well-founded empirical observations [6]. The present golden ratio appearance is ultimately sustained also by the deviation from the N=Z condition for so-called “quark atoms” (as another way in considering the atoms in a quantum valence state), earlier identified as true matter’s entities responsible for matter’s reactivity at the atomic level [38].
Therefore the atomic structure branching (22) can be regarded as the present golden ratio extension to valence atom and as such employed; actually, its consequences regarding the characterization of the quantum valence states of atoms within the Bohmian quantum potential are the main aims of the present endeavor and will be discussed next.
Atomic implementation and discussion
On Slater density for valence atoms
Density is considered a “goldmine” in current computational and conceptual quantum chemistry due to its link with observable quantities, energy density functionals in particular, as celebrated by DFT [13,20,39,40]. However, to quantitatively approach the chemical phenomenology presented in Figure 1, involving the ionization-to-affinity atomic description, the general Slater [33] density (involving the orbital parameter ξ dependency) will be here employed for the first trial on modeling the combined Bohmian and gold-ratio features of valence atom; it assumes the general (trough still crude) working form:
For the reactivity at the valence atomic level, or for some outer shell (n) considered at the atomic frontier, one may assume almost electronic free motion or at least electronic motion under almost vanishing nuclear potential V(r); this way the density (23a), while entering the quantum potential (10) recovers the negative kinetic energy by the virial identity (14). Analytically, since eqs. (6), (10) and (23a), one has ∇ r2ρ1/2 = ξ2ρ1/2 and the actual valence atomic virial realization looks like
Equation (24) leaves with the identity:
that may be further rewritten with the help of the atomic Bohr-de Broglie relationship (see the note a) to provide the atomic frontier radii shell-dependency
Remarkably, the same result is obtained when employing a far more reach atomic shell structure description, namely when starting with the full atomic radial Schrödinger density [25]
and imposing the null-gradient condition [41], ∇rρn(r, ξ) = 0, in accordance with the celebrated Bader condition of electronic flux of atoms-in-molecules [9,42], to yield:
The identity between eqs. (25b) and (25c) gives sufficient support to the present Slater density approach eq. (23a) in modeling the valence atoms or the atoms at their frontiers approaching reactivity (i.e. atoms-in-molecules complexes by chemical reactions).
Quantum chemical bonding and reactivity indices
Once convinced by the usefulness of the Slater density form (23a) for the present valence atomic analysis, one will next employ it under the so called Parr-Bartolotti form [33]
such that to obey the N-normalization condition, as required by DFT [43-47],
by applying the Slater integral recipe
It nevertheless showcases the parametric ξ dependency that can be smeared out by considering the variational procedure
upon applying the total atomic energy
where the components are individually evaluated within a radial atomic framework with the respective results for [21,48]
● kinetic energy
● nucleus-electronic interaction
● inter-electronic interaction (see also Appendix)
With these results, the optimum atomic parameter is quantified by the electronic number as follows:
which immediately releases the working electronic density
Having the completely analytical density in terms of number of reactive electrons as in eq. (33), worth pointing here on the so called sign problem relating with its variation, e.g., its gradient, the gradient of its square root, etc. Although this problem usually arises in density functional theory when specific energy functionals are considered in gradient forms, see for instance ref. [49], there is quite instructive discussing the present behavior and its consequences.
For instance, one can adapt either eqs. (25b) or (25c) through considering the present form (32) for the orbital exponent to be
Here, one combines the frontier and maximum atomic radii with atoms-in-molecules phenomenology, as above indicated, to arrive to the present identification for the number of valence electrons possible to be involved in the same chemical bonding state as being Nbonding in (25d). Accordingly, the Figure 2 reveals interesting features of the present Slater-Parr-Bartolotti atomic density with quantum potential:
● the fact that the (covalent) bond length is proportional to the atomic radii and in inverse correlation with bonding order is well known [50], and this it is also nicely reflected in eq. (25d); however, changing the sign to negative radii as surpassing the threshold 21/5 and fixing in fact the limit Nbonding=4, is consistent with maximum bond order met in Nature; it is also not surprising this self-released limit connects with golden ratio by the golden-spiral optimization of bond-order [51]; more subtle, it connects also with the 4π symmetry of two spherical valence atoms making a chemical bond (Figure 2, inset): such “spinning” reminds of the graviton symmetry [52] (the highest spherical symmetry in Nature, with spin equal 2) and justifies the recent treatments of chemical bonding by means of the quasi-particles known as bondons [10,53], as well as the use of the 4D complex projective geometry in modeling the chemical space as a non-Euclidian one, eventually with a time-space metrics including specific “gravitational effects” describing the bonding [51];
● the “gap” between the atomic systems contributing 2 to 3 electrons to produce chemical bond is about double of the golden ratio, <a onClick="popup('','MathML',630,470);return false;" target="_blank" href="">View MathML</a> ; therefore, this gap marks the passage from the space occupied by a pair of electrons and that required when the third electron is added on the same bonding state: it means that the third electron practically needs one golden measure (τ) to (covalently) share with each of the existing pairing electrons, while increasing the bond order to the level of three; it is therefore a space representation of the Pauli exclusion principle itself, an idea also earlier found in relation with dimensionless representation of a diatomic bonding energy (2τ) at its equilibrium bonding distance τ[54]; when the fourth electron is coming into the previous system, in order the maximum fourth order of bonding to be reach the chemical bonding space is inflating about five times more, yet forbidding further forced incoming electrons into the same space of bonding state as the bonding radius becomes negative in sign.
thumbnailFigure 2. Representation of the bonding length as a function of bonding electrons from valence atoms in molecule(s), based on eq. (25d), while marking the double golden ratio 2τ gap between the bonding lengths of the second and third bonding order, as well as the forbidden chemical bonding region for Nbonding ≥21/5 for the electrons participating in the same bonding. Further connection of chemical bonding and the 4D space to model it is suggested by the inset picture illustrating the 2-fold (4π) spinning symmetry of the adduct atom respecting the bonding direction, after [51].
Having revealed the chemical bonding information carried by the density (33) when considered for combined valence atoms-in-molecules, it is next employed on energetically describing the atomic reactivity as a propensity for allowing electronic exchanging and bonding. As such, it leaves the total quantum (Bohmian) energy in (17) with the compact form
Note that the actual working total energy is not that obtained by replacing the density (33) in eqs. (31a)-(31c) and then in total energy (30) because here the double-variational procedure was considered; that is, the first optimization condition was considered as in eq. (29), and the resulting (optimum) density (33) was then employed in the quantum energy (17), which in turn was obtained by applying the variational eq. (12) to the perceived phase transition in the Bohm eikonal wave-function (5). To emphasize the accuracy of eq. (17) over that of (30) with density (33), when one considers the last case, eq. (30) yields the following non-quadratic form for energy:
which is not appropriate for describing the valence state of an atom, as eq. (2) prescribes, despite being similar in form to the Bohmian-based result of eq. (34a). Thus, the previous limitation of the Parr-Bartolotti conclusion [33] and the paradox raised in describing the valence (parabolically) state with the optimized atomic density (33) are here solved by the double (or the orthogonal) variational implementation, as recently proved to be customary for chemical spaces [32]. In the light of this remark one may explain also the sign difference between the “physical” energy (34b) and that obtained for the “chemical” situation (34a): through simple variational procedure for “physical” energy (30) the result (34b) is inherently negative – modeling systems stability in agreement with the upper branch of eq. (22), whereas the double variational algorithm employing optimized density (33) into the Bohmian shaped energy (17) it produces the positive output (34a) associated with activation energy characteristic for chemical reactivity corresponding to the lower branch of eq. (22).
Therefore, to be accurate, one should consider the quantum potential related optimized energy (34a) instead of simply the orbital optimized one of eq. (34b). Therefore, assuming that eq. (34a) appropriately describes the atomic valence state in DFT (see the upper/reactive curve in Figure 1b), the next task is to search for the quantum valence charge for which the valence energy approaches its optimum value (or the “ground state” of the atomic chemical-reactivity, i.e. the previously golden-ratio quantification of the valence atomic state); to this aim, at this point, one can employ the golden ratio relationship (21a) and first rewrite eq. (34a) as
which is minimized at the value
However, one must again apply the double-variational procedure, now in terms of number of electrons, i.e., reconsidering eq. (36) with the golden ratio at the reactive (chemical) electronic level of eq. (22) such that a second equation is formed
with the positive solution
This expression avails of the significance of the maximum number of electrons, for a given atom, possibly engaged in a reactive environment by either (or both) accepting or (and) ceding electrons to or from its valence state, see Table 1.
Table 1. Synopsis of the critical charges in the physical ground state (N*) as well as for chemical reactive (valence) state (N**) for atoms of the first four periods of the periodic table of elements, as computed from the minimum point of associated interpolations of ionization and electronic affinities[33]and of eq. (38), respectively
The result of this process is different from the expected physical result (NSTABIL=Z) according to the upper branch of eq. (22), which is higher than the physical one until reaching the carbon system (ZINTRCHANGE=6.8), while continuing below it thereafter (see Figure 3).
thumbnailFigure 3. The comparative shapes of the valence electrons to be engaged in chemical reactivity (continuous curve) computed using eq. (38) based on the combined optimal Bohm total energy (35) with the golden ratio imbalance of eq. (22), respecting the stable physical case (dot-dashed curve), and of their differences (dashed curve); all originate at the 0thatom (the neutron, Z=0).
The above interchange (effective) atomic number through which the chemical (reactive) state is associated with lower charge respecting the physical state may be also be found at the energetic level based on quantum equation (34a), as specialized for the two branches of Figure 3 for the N(Z) dependence. Thus, the chemical (reactive) state takes the analytic form
and interchanges with the ground state EQ1(NSTABLE → Z) at the points {3.5,6.8}, as observed also from Figure 4; however, the interchanging point beyond which all chemical atomic systems are more stable in the chemical or reactive state than in the physical ground state is consistently recovered.
thumbnailFigure 4. The same comparative shapes shown in Figure3, here at the level of energy (34a) specialized for the reactive and the stable N(Z) dependencies of Figure3; the various plots successively display increasingly large atomic Z-ranges to better emphasize the chemical vs. physical behavior (see text).
Nevertheless, the energetic analysis also reveals the atomic systems Be, B and C to be situated over the corresponding physical stable states; this may explain why boron and carbon present special chemical phenomenology (e.g., triple electronic bonds and nanosystems with long C-bindings, respectively), which is not entirely explained by ordinary physical atomic paradigms [55-60].
The energetic discourse may be complete with the electronegativity and chemical hardness evaluations by applying the DFT definitions (3) and (4) to physical and chemical energies, respectively. In the first case, expression (34b) is applied to provide the following so-called “universal” forms of Parr and Bartolotti [33]:
The result, nevertheless, appears to be an unusually higher increase in chemical hardness than in electronegativity, which certainly cannot be used to model a reactive-engaged tendency because it is more stable (by chemical hardness) than reactive (by electronegativity); it is, however, consistent with the physical stability of the system, provided by the single variational procedure through which eq. (34b) was produced.
Instead, to chemically model reactivity, the double variation procedure is applied and eq. (34a) is substituted into eqs. (3) and (4), though by considering also the double reactive procedure for charge as well, i.e., by considering eq. (38) with the golden ratio information of (22) to respectively yield the results
Remarkably, the actual electronegativity of (42) obtained by the quantum Bohm and golden ratio double procedure yields sensible results similar to those of the single variational approach (40); however, the chemical hardness of (43) is approximately 5-fold lower than its “stable” counterpart (41), affirming therefore the manifestly reactive framework it produces – one described by a quadratic equation (34a) instead of a cubic one (34b).
Charge waves in gauge chemical reactivity
Finally, one considers the chemical reactivity discussion as based on the gauge reaction that equilibrates the chemical bond by symmetrical bond polarities [25]
such that the reactive electrons are varied on the reunited intervals of eq. (1); such analysis was previously employed to fundament systematic electronegativity and chemical hardness definitions by the averaging (through the integration) factor
along the reaction path accounting for the acidic (electron accepting, 0 ≤ N ≤ +1) and basic (electron donating, –1 ≤ N ≤ 0) chemical behaviors.
In this scaled (gauge) context of reactivity, the foregoing discussion is dedicated to investigating the link between the critical ground state charge (N*) and the valence or reactive state (N**). While the first appears as a consequence of naturally fitting the three points in Figure 1 (the ionization, neutral and affinity states), with the effect of biasing the minimum of the energetic curve in Figure 1b with respect to the apparent Parr-DFT curve in Figure 1a, and is thus derived graphically (see Figure 5), the valence charge is based on the combined quantum energy and golden ratio information in eq. (38). Both are reported for the indicated number of atomic systems of the periodic table of elements in Table 1. One notes, for instance, that while the critical ground state charge N* always lies in the range [0.5,1], the valence charge N** may span the interval [0,1]; one may interpret such behavior as being associated with the difference between the fraction ½ and integer “1” in driving the principles of chemical reactivity and the electrophilicity equalization principle in particular, when the “quantum transition” 1/2→1 is required in the energy exchange of chemical systems for it to be valid for both electronegativity and chemical reactivity principles [61]; nevertheless such scaling it is equivalent with above acidic-basic gauge averaging of eq. (44b). This way, the valence charge problem may be extended to the interval [0,2], at its turn seen as a gauge transformation of the chemical reactivity charge domain [−1, +1], where one reencounters the challenging problem of whether the “One electron is less than half what an electron pair is” [62], the response to which is generally complex but may here be approached through the following steps.
thumbnailFigure 5. A graphical interpolation for selected elements of Table1in terms of their ionization, neutral and affinity states, aiming to determine the critical (displaced) charge of the DFT ground state, as prescribed by Figure1b.
First, by employing the data presented in Table 1, one constructs the so-called “continuous” ground and valence charge states by appropriately fitting over the first four periods of elements, here restrained to 10th-order polynomials. This is performed by interpolating every three points of the 32 elements presented in Table 1, although by spanning the atomic number range Z ∈ [1, 53], thus yielding (see also the allied representations of Figure 6):
thumbnailFigure 6. The critical ground state and valence charge points for the elements of Table1and their 10th-order continuous interpolations according to eqs. (45a) and (45b).
Equations (45a) and (45b) are then combined into a sort of special charge wave function based on their difference on the golden ratio scale (see Figure 6 for graphical representation)
with the peculiar property that its square-integrated form over the Z-range of interpolation gives
The result (47) has the following conceptual fundamental quantitative interpretation: the difference between the ground and valence optimum charges is regulated by the golden ratio scale, or in other terms,
such that it provides a sort of normalization corrected by the golden ratio value; it also fulfills the interesting relationship:
In any case, the present analysis provides the qualitative result that the difference between the critical ground state and optimal valence charges is more than half of an electronic pair, giving rise to the significant notion that chemical reactivity is not necessarily governed by a pair of electrons but governed by no less than half of a pair and is related to the golden ratio (τ > 0.5).
However, fractional values in general and those related to the golden ratio particular, may be interpreted as a consistent manifestation of the quantum mechanical (i.e., wave functional) approach of chemical phenomena, here at the reactivity level. Moreover, the quadratic critical charge function (46), as shown in Figure 7, clearly reveals that a higher contribution to electronic pair chemistry is given by the third period of elements and by the third and fourth transitional elements in particular, a result that nicely agrees with the geometrical interpretation of the chemical bond, particularly the crystal ligand field paradigm of inorganic chemistry [9].
thumbnailFigure 7. The linear and quadratic charge “wave function” of eq. (46).
Also a local analysis of the type of charge that is dominant in atomic stability, i.e., the critical physical ground state or the chemical valence reactive state based on eqs. (45a) and (45b), respectively, may be of considerable utility in refined inorganic chemistry structure-reactivity analysis. To the same extent, it depends on the degree of the polynomials used to interpolate the critical and valence charges over the concerned systems; however, through the present endeavor, we may assert that the analysis should be of the type (48), which in turn remains a sort of integral version of the imbalance equation (20a), in this case for the ground-valence charge gap states of a chemical system.
Aiming to hint at the solution to the current debate regarding the physical vs. chemical definition of an atom and as a special stage of a larger project regarding quantum chemical orthogonal spaces, the present work addresses the challenging problem of defining and characterizing valence states with respect to the ground state within conceptual density functional theory. We are aware of the earlier warnings raised by Parr and Bartolotti and others [18,33,63] regarding the limits of density functional theory and of the total energy of atomic systems combined with a Slater-based working density to provide a quadratic form in terms of system charge, as required by the general theory of chemical reactivity of atoms and molecules in terms of electronegativity and chemical hardness. Fortunately, we discovered that the Bohmian form of the total energy of such atomic systems provides, instead, the correct behavior, although it is only density-function-dependent and not a functional expression. Moreover, this finding was reached through the so-called double variational procedure, which, as emphasized earlier, was likely to reproduce the chemical reactivity principles of electronegativity and chemical hardness in an analytical manner; however, such a double analytical variational approach is consistent with the recent advanced chemical orthogonal spaces approaches of chemical phenomenology [64] as being at least complementary to the physical description of many-electronic systems when they are engaging in reactivity or equilibrium as the atoms-in-molecules Bader theory prescribes [9,42]. With the present Bohmian approach, the total energy is in fact identified with the quantum potential, thus inherently possessing non-locality and appropriate reactivity features, which are manifested even over long distances [10,11,53]; this also generalizes the previous Boeyens electronegativity formulation of electronegativity [5,6] from the direct relationship between a quantum potential and its charge derivative. The double algorithm was also implemented to discriminate the valence from the ground state charges, this time by using the golden ratio imbalance equation as provided by adaption of the Heisenberg type relationship to chemical reactivity for atoms. This corresponds to an analytical unfolding of the physical and chemical imbalance of the electronic charge stability of atomic systems, paralleling the deviation from the equal electron-to-proton occupancy in physical systems toward electron deficiency in the valence states of chemical systems. This dichotomy was implemented by the golden ratio presented in eq. (22). As a consequence, the difference between valence and ground state charge systems is naturally revealed and allows for the explanation of chemical reactivity and bonding in terms of fractional electron pairs, althrough driven by the golden ratio under the so-called physical-to-chemical charge difference wave function and associated normalizations, all of which represent elaborated or integral forms of the basic imbalance atomic equation. The present results are based on 10th-order polynomial fitted over 32 elements from the first 54 elements of the first four periods of periodic table of elements and can be further pursued by performing such systematic interpolations that preserve the golden ratio relationships, as advanced herein; they may also provide a comprehensive picture of how valence electrons may always be projected/equalized/transposed into ground state electrons within the perspective of further modeling chemical reactions when chemical reactivity negotiates the physical molecular stabilization of atoms in molecules.
a For circular orbits, the lowest ones in each atomic shells – including the valence ones, one has ΔOr=2πr, with r the orbital radii thereof, while O=p is the fixed particle’s momentum on that orbit; therefore, when combined into eq. (18b) they provide the celebrated Bohr-de Broglie relationship rp=nħ solving the atomic spectra of Hydrogen atom in principal quantum numbers (n).
Appendix: Semi-classical inter-electronic energy
For the inter-electronic interaction, see Figure 8; in evaluating Vee[ξ] of eq. (31c), the two-electronic density is approximated by the Coulombic two mono-electronic density product, thus neglecting the second-order density matrix effects associated with the exchange-correlation density.
However, for the analytical evaluation of the electron–electron repulsion energy using the density (23b), much care must be taken. For instance, one has to use the electrostatic Gauss theorem, which states that the classical electrostatic potential outside a uniform spherical shell of charge is just what it would be if that charge were localized at the center of the shell and that the potential everywhere inside such a shell is that at the surface, [21,48] see Figure 8. Therefore, the electronic repulsion energy becomes
which recovers the expression presented by eq. (31c), when the Slater integral type of Eq. (28) is also employed. Note that the electron–electron repulsion term was written by also considering the Fermi-Amaldi (N-1)/N factor [13], which ensures the correct self-interaction behavior: when only one electron is considered, the self-interaction energy must be zero, Vee (N→1)→0.
thumbnailFigure 8. Representation of the space regions of the 1st and 2nd electrons, their potential influences and reciprocal interaction [21,48]
Competing interests
The author declares he has no competing interests.
This work was supported by CNCS-UEFISCDI agency through the research project TE16/2010-2013 within the PN II-RU-TE-2009-1 framework. Inspiring discussions with Profs. Boeyens (University of Pretoria, South Africa) and von Szentpaly (Stuttgart University, Germany) are kindly acknowledged. Constructive referees’ comments and stimulus is also sincerely thanked. This paper is dedicated to Prof. Robert G. Parr for his pioneering quantum work on atomic valence states.
1. Putz MV: Big chemical ideas in the context: the periodic law and the Scerri’s Periodic Table.
Int J Chem Model 2011, 3:15-22. OpenURL
2. Scerri ER: The Periodic Table – Its Story and Its Significance. Oxford-New York: Oxford University Press; 2007. PubMed Abstract OpenURL
3. Boeyens JCA: Emergent Properties in Bohmian Chemistry. In Quantum Frontiers of Atoms and Molecules. Edited by Putz MV. New York: Nova Publishers Inc; 2011:191. OpenURL
4. Boeyens JCA: New Theories for Chemistry. Amsterdam: Elsevier; 2005. OpenURL
5. Boeyens JCA: Chemistry from First Principles. Heidelberg-Berlin: Springer; 2008. OpenURL
6. Boeyens JCA, Levendis DC: Number Theory and the Periodicity of Matter. Heidelberg-Berlin: Springer; 2008. OpenURL
7. Kaplan IG: Is the Pauli exclusive principle an independent quantum mechanical postulate?
Int J Quantum Chem 2002, 89:268-276. Publisher Full Text OpenURL
8. Scerri ER: Just how ab initio is ab initio quantum chemistry?
Found Chem 2004, 6:93-116. OpenURL
9. Bader RFW: Atoms in Molecules - A Quantum Theory. Oxford: Oxford University Press; 1990. OpenURL
10. Putz MV: The bondons: The quantum particles of the chemical bond.
Int J Mol Sci 2010, 11:4227-4256. PubMed Abstract | Publisher Full Text | PubMed Central Full Text OpenURL
11. Putz MV: Quantum Theory: Density, Condensation, and Bonding. Toronto: Apple Academics & CRC Press; 2012. OpenURL
12. Parr RG, Donnelly RA, Levy M, Palke WE: Electronegativity: the density functional viewpoint.
J Chem Phys 1978, 68:3801-3808. Publisher Full Text OpenURL
13. Parr RG, Yang W: Density Functional Theory of Atoms and Molecules. New York: Oxford University Press; 1989. OpenURL
14. Bohm D: A suggested interpretation of the quantum theory in terms of “hidden” variables. I.
Phys Rev 1952, 85:166-179. Publisher Full Text OpenURL
15. Bohm D: A suggested interpretation of the quantum theory in terms of “hidden” variables. II.
Phys Rev 1952, 85:180-193. Publisher Full Text OpenURL
16. Bohm D, Vigier JP: Model of the causal interpretation of quantum theory in terms of a fluid with irregular fluctuations.
Phys Rev 1954, 96:208-216. Publisher Full Text OpenURL
17. Cushing JT: Quantum Mechanics – Historical Contingency and the Copenhagen Hegemony. Chicago & London: The University of Chicago Press; 1994. PubMed Abstract | Publisher Full Text OpenURL
18. Von Szentpály L: Modeling the charge dependence of total energy and its relevance to electrophilicity.
Int J Quant Chem 2000, 76:222-234. Publisher Full Text OpenURL
19. Ayers PW, Parr RG: Variational principles for describing chemical reactions: the Fukui function and chemical hardness revisited.
J Am Chem Soc 2000, 122:2010-2018. Publisher Full Text OpenURL
20. Geerlings P, De Proft F, Langenaeker W: Conceptual density functional theory.
Chem Rev 2003, 103:1793-1874. PubMed Abstract | Publisher Full Text OpenURL
21. Putz MV: Contributions within Density Functional Theory with Applications in Chemical Reactivity Theory and Electronegativity. Parkland:; 2003. OpenURL
22. Parr RG: Density functional theory.
Annu Rev Phys Chem 1983, 34:631-656. Publisher Full Text OpenURL
23. Parr RG, Pearson RG: Absolute hardness: companion parameter to absolute electronegativity.
J Am Chem Soc 1983, 105:7512-7516. Publisher Full Text OpenURL
24. Putz MV: Absolute and Chemical Electronegativity and Hardness. New York: Nova Publishers Inc.; 2008. OpenURL
25. Putz MV: Systematic formulation for electronegativity and hardness and their atomic scales within density functional softness theory.
Int J Quantum Chem 2006, 106:361-389. Publisher Full Text OpenURL
26. Chattaraj PK, Parr RG: Density functional theory of chemical hardness.
Struct Bond 1993, 80:11-25. Publisher Full Text OpenURL
27. Chattaraj PK, Sengupta S: Popular electronic structure principles in a dynamical context.
J Phys Chem 1996, 100:16129-16130. OpenURL
28. Chattaraj PK, Maiti B: HSAB principle applied to the time evolution of chemical reactions.
J Am Chem Soc 2003, 125:2705-2710. PubMed Abstract | Publisher Full Text OpenURL
29. Chattaraj PK, Duley S: Electron affinity, electronegativity, and electrophilicity of atoms and ions.
J Chem Eng Data 2010, 55:1882-1886. Publisher Full Text OpenURL
30. Ayers PW, Parr RG: Variational principles for describing chemical reactions: reactivity indices based on the external potential.
J Am Chem Soc 2001, 123:2007-2017. PubMed Abstract | Publisher Full Text OpenURL
31. Kohn W, Becke AD, Parr RG: Density functional theory of electronic structure.
J Phys Chem 1996, 100:12974-12980. Publisher Full Text OpenURL
32. Putz MV: Chemical action concept and principle.
MATCH Commun Math Comput Chem 2011, 66:35-63. OpenURL
33. Parr RG, Bartolotti LJ: On the geometric mean principle of electronegativity equalization.
J Am Chem Soc 1982, 104:3801-3803. Publisher Full Text OpenURL
34. Kleinert H: Path Integrals in Quantum Mechanics, Statistics, Polymer Physics, and Financial Markets. 3rd edition. Singapore: World Scientific; 2004. OpenURL
35. Guantes R, Sanz AS, Margalef-Roig J, Miret-Artés S: Atom–surface diffraction: a trajectory description.
Surf Sci Rep 2004, 53:199-330. Publisher Full Text OpenURL
36. Putz MV: On Heisenberg uncertainty relationship, its extension, and the quantum issue of wave-particle duality.
Int J Mol Sci 2010, 11:4124-4139. PubMed Abstract | Publisher Full Text | PubMed Central Full Text OpenURL
37. Pauling L, Wilson EB: Introduction to Quantum Mechanics with Applications to Chemistry. New York: Dover Publications; 1985. OpenURL
38. Lackner KS, Zweig G: Introduction to the chemistry of fractionally charged atoms: electronegativity.
Phys Rev D 1983, 28:1671-1691. Publisher Full Text OpenURL
39. Hohenberg P, Kohn W: Inhomogeneous electron gas.
Phys Rev 1964, 136:B864-B871. Publisher Full Text OpenURL
40. Putz MV: Density functionals of chemical bonding.
Int J Mol Sci 2008, 9:1050-1095. PubMed Abstract | Publisher Full Text | PubMed Central Full Text OpenURL
41. Ghosh DC, Biswas R: Theoretical calculation of absolute radii of atoms and ions. Part 1. The atomic radii.
Int J Mol Sci 2002, 3:87-113. Publisher Full Text OpenURL
42. Bader RFW: Definition of molecular structure: by choice or by appeal to observation?
J Phys Chem A 2010, 114:7431-7444. PubMed Abstract | Publisher Full Text OpenURL
43. Dreizler RM, Gross EKU: Density Functional Theory. Heidelberg: Springer Verlag; 1990. OpenURL
44. Kryachko ES, Ludena EV: Energy Density Functional Theory of Many Electron Systems. Dordrecht: Kluwer Academic Publishers; 1990. OpenURL
45. Cramer CJ: Essentials of Computational Chemistry. Chichester: Wiley; 2002. OpenURL
46. Capelle K: A bird's-eye view of density-functional theory.
Braz J Phys 2006, 36:1318-1343. Publisher Full Text OpenURL
47. Jensen F: Introduction to Computational Chemistry. Chichester: John Wiley & Sons; 2007. OpenURL
48. Parr RG: The Quantum Theory of Molecular Electronic Structure. Reading-Massachusetts: WA Benjamin, Inc.; 1972. OpenURL
49. Cohen AJ, Mori-Sánchez P, Yang W: Challenges for density functional theory.
Chem Rev 2012, 112:289-320. PubMed Abstract | Publisher Full Text OpenURL
50. Petrucci RH, Harwood WS, Herring FG, Madura JD: General Chemistry: Principles & Modern Applications. 9th edition. New Jersey: Pearson Education, Inc.; 2007. PubMed Abstract | Publisher Full Text OpenURL
51. Boeyens JC, Levendis DC: The structure lacuna.
Int J Mol Sci 2012, 13:9081-9096. PubMed Abstract | Publisher Full Text | PubMed Central Full Text OpenURL
52. Hawking S: The Universe in a Nutshell. New York: Bantam Books; 2001. OpenURL
53. Putz MV, Ori O: Bondonic characterization of extended nanosystems: application to graphene's nanoribbons.
Chem Phys Lett 2012, 548:95-100. OpenURL
54. Boeyens JCA: A molecular–structure hypothesis.
Int J Mol Sci 2010, 11:4267-4284. PubMed Abstract | Publisher Full Text | PubMed Central Full Text OpenURL
55. March NH: Electron Density Theory of Many-Electron Systems. New York: Academic; 1991. OpenURL
56. Wentorf RH Jr: Boron: another form.
Science 1965, 147:49-50. PubMed Abstract | Publisher Full Text OpenURL
57. Eremets MI, Struzhkin VV, Mao H, Hemley RJ: Superconductivity in Boron.
Science 2001, 293:272-274. PubMed Abstract | Publisher Full Text OpenURL
58. van Setten MJ, Uijttewaal MA, de Wijs GA, de Groot RA: Thermodynamic stability of boron: the role of defects and zero point motion.
J Am Chem Soc 2007, 129:2458-2465. PubMed Abstract | Publisher Full Text OpenURL
59. Widom M, Mihalkovic M: Symmetry-broken crystal structure of elemental boron at low temperature.
Phys Rev B 2008, 77:064113. OpenURL
60. Putz MV (Ed): Carbon Bonding and Structures: Advances in Physics and Chemistry. Dordrecht-London: Springer Verlag; 2011.
Cataldo F, Milani P (Series Editors): Carbon Materials: Chemistry and Physics, Vol. 5.
61. Putz MV, Mingos DMP (Eds): Applications of Density Functional Theory to Chemical Reactivity. Berlin-Heidelberg: Springer Verlag; 2012. [Struct Bond] OpenURL
62. Ferreira R: Is one Electron less than half what an electron pair is?
J Chem Phys 1968, 49:2456-2457. Publisher Full Text OpenURL
63. Bergmann D, Hinze J: Electronegativity and charge distribution.
Struct Bond 1987, 66:145-190. Publisher Full Text OpenURL
64. Putz MV: Chemical Orthogonal Spaces. Kragujevac: Kragujevac University Press; 2012.
[Gutman I (Series Editor): Mathematical Chemistry Monographs, Vol. 14]. |
24e533cca420fa76 | Theoretical Concepts and Reaction Mechanisms
Yuri V. Il'ichev
Cordis Corporation, a Johnson and Johnson Company
P.O. Box 776, Welsh and McKean Roads, Spring House, PA 19477-0776
1. Chemistry of Electronically Excited States
Aren't you excited already? Not yet? Let us then adopt a step-by-step approach in order to introduce you to a fascinating world of excited-state reactions. The term photochemistry generally applies to chemical modifications induced by interaction of light (electromagnetic radiation) with matter. Therefore, light is always one of the reactants in a photochemical system. Electromagnetic radiation with the wavelength ranging from ~800 nm (near-IR) to ~150 nm (far UV) is of primary importance for photochemistry and photobiology, but the wavelength regions adjacent to this range are also of interest for certain applications. With the advent of lasers, multiphoton photochemistry, i.e. chemistry initiated by simultaneous absorption of two or more photons, came into wide use. This made IR radiation of particular interests for photochemists. The wavelength range of 150-800 nm corresponds to the photon energy ranging from 800 to 150 kJ mol-1 (Figure 1).
Figure 1
Figure 1. The electromagnetic spectrum.
These energies are much higher than those associated with thermal motion at ambient temperature and are comparable to the energies of chemical bonds. That is why photochemistry is often referred to as high-energy chemistry. The fact that the spectral region mentioned above contains electromagnetic radiation detectable by the human eye (visible light) suggests an interrelation of photochemistry and vision mechanisms. Humans can see radiation in this part of the spectrum because visual receptors are organic compounds that absorb light with these wavelengths. Notice that the spectral maximum of the solar radiation reaching the earth surface is located within the visible light range (~500 nm).
The basis for understanding light-matter interaction and chemical reactivity is quantum mechanics. According to this theory, a complete description of any molecular system can be provided by a function that is obtained upon solving the Schrödinger equation (a rather complex differential equation first introduced by Erwin Schrödinger). This function of multiple variables is called wavefunction. Generally, an infinite number of solutions (wavefunctions) with the corresponding values of the system energy are obtained from the Schrödinger equation. However, only some solutions with their characteristic values of the energy are physically acceptable. Thus, only certain values of the energy are allowed, although the number of these acceptable values can still be infinite. In other words, the energy of the molecular system is quantized.
Energy quantization is of primary importance for photochemistry, because it implies that only photons that bring a definite quantity of energy corresponding to a difference between two allowed energy values can be absorbed. The quantum mechanical results are conveniently illustrated by denoting discrete energy values according to a certain principle and plotting them on a graph with a vertical energy scale (compare to the Jablonski diagram in the module on Basic Photophysics). When a particular energy value, E, was designated with a certain symbol, e.g., S0, the system having the energy E is referred to as being in the state S0. Notice that this description may be incomplete because some systems may have the same energy, but be described by two different wavefunctions and, therefore, be in two different quantum states (this phenomenon is called degeneracy).
The application of quantum mechanics to molecular systems requires approximate methods because the Schrödinger equation cannot be solved exactly for many-body systems. Fortunately, nuclei are much heavier than the electrons, and consequently, their motion is much slower than that of the electrons. To a good approximation, the nuclei can be considered as fixed centers of potential and a description of the electronic motion can be obtained by solving the Schrödinger equation for a large number of different, but fixed nuclear positions. This way of separating electronic and nuclear motion is known as the Born-Oppenheimer approximation, or adiabatic approximation.
Generally, an infinite number of solutions (wavefunctions) with the corresponding electronic energies will be obtained for the electronic Schrödinger equation at each fixed nuclear configuration. The lowest electronic energy plotted against all internal variables (in general, 3N-6 for the system of N nuclei or 3N-5 for a linear N-atomic molecule) forms a multidimensional hypersurface corresponding to the ground state. This surface together with those corresponding to higher energies is referred to as the adiabatic potential energy surface. The electronic states are often designated according to the total spin (S or T for the singlet and triplet state with the spin 0 and 1, respectively, see also module on Basic Photophysics) and the relative energy (index "0" for the lowest energy, etc.). The majority of organic molecules are singlets in the lowest energy state, which is therefore referred to as the singlet ground state, or S0 state. Diatomic molecules have a single geometric parameter, internuclear distance, and therefore their potential energy surfaces reduce to curves. In this case a 2D-plot is sufficient for the presentation. Typical energy curves for the ground and first excited state are depicted in Figure 2.
Quantum mechanical treatment of nuclear motion within the Born-Oppenheimer approximation requires a solution of the nuclear Schrödinger equation with the electronic energy as the potential. Separation of different types of nuclear motion may often be achieved as a first approximation to complex molecular dynamics. This separation leads to several equations that are simpler than the original Schrödinger equation for the nuclei. There are three basic types of motion: translation, rotation, and vibration. Translation is the motion of the system as a whole, rotation is a motion in which the spatial orientation of the body changes, and vibration describe relative motion of the nuclei. Molecules moving freely in a macroscopic vessel may be treated as though their translational energy is not quantized. Another way of putting this is that the translational energy levels are so closely spaced that this type of motion may be well described with classical mechanics. Rotational and vibrational motion requires quantum mechanical treatment, which typically produces discrete energy levels such as shown in Figure 2.
Figure 2
Figure 2. Potential energy curves corresponding to the ground state (black, S0) and first excited state (blue, S1) of a diatomic molecule. The states were assumed to be of singlet multiplicity. The energy levels for the vibrational motion are shown as black and blue lines inside the curves. Red lines in the insert show rotational levels for the the zero vibrational level of S0. Notice that characteristic energy for the rotational motion is much smaller than that for the vibrational motion, and the latter is much smaller than the energy associated with electronic motion.
The potential energy surfaces obtained by solving the electronic Schrödinger equation provide the basis on which chemical reactivity can be analyzed. It is a standard practice in photochemistry to define a common ground-state surface for all molecular species of the same stoichiometry. Minima on this surface can be identified with the equilibrium structures (all isomers and/or intermolecular complexes with the same formula). Statistical mechanics provides an answer to the question how properties of a macroscopic system are related to those of molecules constituting the system. The answer is given in terms of probabilities to find a molecule in a particular microscopic state or, in other words, in terms of the population of molecular energy states (see Boltzmann distribution in Basic Photophysics).
For the vast majority of molecular systems at any reasonable temperature only the ground electronic state is populated. Therefore, thermal chemistry is almost exclusively governed by the properties of the ground state. Notice that some excited vibrational states have typically non-zero population and most molecules are in the excited rotational levels at ambient temperature.
In contrast, photochemistry can only be understood if one considers properties of excited electronic states, which are typically populated by light absorption. These facts together with a fundamental understanding of the quantum nature of light provide the basis for interpreting photochemistry, not so much as high-energy chemistry, which utilizes light merely as an energy source, but more as reactivity of electronically excited species. These species can and often do exhibit chemical properties that are largely different from those of the ground-state species. Properties of S0 surface and the two lowest excited-state surfaces of different multiplicity, S1 and T1, are of primary importance for photochemistry.
2. Photochemistry Laws
The first law of photochemistry states that only the light absorbed by a molecule can produce photochemical modification in the molecule. Here and below, the term "molecule" is broadly defined and includes also atoms, radicals, etc. The law emphasizes the importance of light absorption by the molecule involved in the primary photoprocess, which is a chemical reaction or a physical process involving directly excited species. All aspects and consequences of this law must be considered for quantitative analysis of a photoreaction. This is generally taken for granted, but the frequent practice of comparing photochemical kinetic traces for different molecules without referring to their absorbance suggest that it is ignored more often than one may assume.
The second law of photochemistry was formulated at the beginning of 20th century when the quantum theory was just emerging. It states that one molecule is excited for each quantum of radiation absorbed. In other words, the absorption of light by a molecule is a one-photon process (see Figure 3). Therefore for a primary photoprocess only one molecule reacts for each photon absorbed. Typically several competing processes occur in the excited state. In this case, the second law can be reformulated as: the sum of the quantum yields (defined in Section 3, below) for the primary processes must be unity.
It has taken about 20 years and the development of quantum mechanics to predict two-photon absorption (Figure 3). The first experimental observation of the two-photon absorption was made when lasers were developed. Further development in laser technology made almost routine the generation of ultrashort light pulses (10-12 - 10-15 s). Such ultrafast lasers made possible not only experimental study, but also the broad application of multiphoton processes. Multiphoton fluorescence is widely used in imaging of cells and biological tissues. Multiphoton photochemistry recently received attention as a tool for time-resolved studies of important biological processes.
Figure 3a Figure 3b
a b
Figure 3. (a) Schematic illustration of the light absorption by a rectangular sample. To a first approximation, molecules can be considered as opaque disks whose average cross-sectional area,Sigma, in cm2 molecule-1 represents the effective area that is impermeable for photons of a certain wavelength. We may consider an infinitesimal slab, dx, of a rectangular sample with a cross-section, S, which is equal to that of the light beam. The average intensity of light entering the slab is denoted I, and expressed in photon s-1. The intensity absorbed in the slab can be written as: Formula 1 , where N is the concentration in molecule cm-3. Integrating this equation from 0 to l (sample length in cm) we obtain the Beer-Lambert law for one-photon absorption: Formula 2 . If the concentration C is expressed in mol L-1 then the natural logarithm is usually substituted with the decimal one and the cross-section is replaced with the decimal molar absorptivity Formula 3.
Thus, we obtain: Formula 4.
(b) Energy diagrams for one- and two-photon absorption. The average rate of n-photon absorption per molecule in photons s1 molecule-1 can be approximated as: Formula 5, where Sigman is the cross-section of n-photon absorption, I is the average intensity in photon s-1, and S is the cross-section in cm2 of the laser beam entering the sample. For two-photon absorption the cross section, Sigma2, has dimension of cm4 s photon-1 molecule -1 and it is often expressed in GM, where
1 GM = 10-50 cm4 s photon-1 molecule-1. The unit was selected to honor Maria Göppert-Mayer who first predicted multiphoton absorption. The measured absorption rate Wn is the number of photons absorbed per s: Formula 6, where Vex is the excitation volume. For one-photon absorption we obtain: Formula 7. This expression corresponds to the Beer-Lambert law limit for low absorption: Formula 8.
3. Photochemical Kinetics
Quantum yield is the major characteristics of a photochemical reaction. The quantum yield, also called the quantum efficiency, is defined as the number of events occurring per photon absorbed. These events might be related to physical processes responsible for energy dissipation (such processes are discussed in Basic Photophysics), but they also might be related to molecules of a chemical product formed upon photoirradiation. Generally, the (total) quantum yield of a photoreaction, Phi, is:
Formula 9
Eq. (1) would define the quantum yield of product formation, Phip, if the number of product molecules would be determined. If the two numbers in Eq.(1) are measured per time and volume unit then the quantum yield is expressed in terms of rates (For more information on rate of reaction, see IUPAC Gold Book):
Formula 10
The latter quantity is also referred to as the differential quantum yield. Notice that these two definitions of the quantum yield agree only if the yield is constant during the course of the reaction. Eqs.(1) and (2) indicate that two separate measurements may be required to determine a quantum yield. In the simplest set-up, a reaction cell is mounted in a fixed position relative to the light source. The cell is charged with the sample of interest and irradiated. Photochemical conversion is determined with a suitable experimental technique (spectroscopy, chemical analysis, etc.). Afterwards, the cell is replaced with an actinometer, which is also irradiated. Before describing how actinometers work, it is important to say again that the amount of the radiation absorbed by the sample, rather than the total amount of light, has to be quantified.
An actinometer is a physical device or chemical system, which is used to determine the number of photons in a light beam. Physical devices convert the energy of absorbed photons into another energy form, which may be easily quantified. The devices that operate by converting photon energy into heat represent 'primary' standards of actinometry. Other physical devices and chemical systems must be calibrated. Chemical actinometers are photoreactive mixtures with well-established photochemistry and known quantum yields. Two representative systems for liquid-phase actinometry are potassium ferrioxalate system and azobenzene system. In both cases, the photoconversion is monitored spectrophotometrically. It is interesting that the most frequently used ferrioxalate system has relatively complex chemistry. Its description in the textbooks hardly goes beyond the statement that Fe(III) is reduced and oxalate is simultaneously oxidized upon photoirradiation. In contrast, the photochemistry of azobenzene is extremely simple (Scheme 1). The isomerization reaction proceeds cleanly in both directions and the solution may be regenerated and reused many times.
Scheme 1
Scheme 1
A chemical reaction is just one of multiple routes to the loss of excitation. The light absorption produces an excited-state species that inevitably loses its energy through various deactivation mechanisms. To highlight essential features of photochemical kinetics, we will analyze the simplest system with multiple pathways of deactivation that are characterized by the rate constants corresponding to unimolecular irreversible processes. In the present context, a clear-cut distinction between photophysical processes and a single photochemical reaction will be made. It is assumed that one-photon absorption leads to the direct population of the reactive singlet excited state. The mechanism described corresponds to a scheme shown in Scheme 2.
Scheme 2
Scheme 2. Kinetic scheme for a simple system with a photoreactive singlet state.
The rate constants kf , kic , and kisc refer to spontaneous emission, internal conversion and intersystem crossing, respectively. The rate constant kr corresponds to a chemical reaction. Assuming that population of S* by light absorption is characterized by the constant rate, W, in mol s-1, and the steady-state approximation (See IUPAC Gold Book) can be applied to the excited species we obtain the expression for the quantum yield of the photochemical reaction Phi0:
Formula 11
Formula 12
[Note: Eq.(3) was solved to obtain an expression for W, which was then used in Eq.(4).]
Quantum yields for the three other processes shown in Scheme 2 can be defined in the same way. Notice that the sum of all the quantum yields is equal to unity, as stated by the second law of photochemistry. The quantum yield for the photoreaction can be interpreted as the fraction of singlet excited molecules that undergo chemical transformation, i.e., the ratio of the number of molecules that react to the total number of S*. Because there always exists several routes to the loss of excitation, the quantum yield rather than the absolute rate constant must be used to compare the efficiencies of photochemical conversion for different reactive systems.
The steady-state approximation is inapplicable under conditions of time-dependent excitation. If a very short laser pulse is used to produce the excited species (so-called Delta-pulse excitation) the light absorption rate W can be neglected and Eq.(3) is easily integrated:
Formula 13
where Tau0 is the observed lifetime of the singlet excited state.
Formula 14
The observed lifetime is an average quantity defined for a large ensemble of the excited molecules. It can be measured with any experimental technique that is capable of detecting the excited species. To take an example, time-resolved fluorescence gives a convenient way of measuring Tau0 provided that certain experimental conditions are fulfilled. Fluorescence detection relies on the photocurrent signal, which is linearly proportional, within certain limits, to the total number of photons emitted. The number of photons, in its turn, is proportional to the number of molecules in the singlet excited state, because individual molecules have time-independent probability to emit light. The fluorescence intensity measured as a function of time depends therefore on the concentration of S*, which is given by Eq.(5). In contrast to the observed lifetime, the radiative lifetime Tauf = 1/kf corresponds to the fluorescence decay rate in the absence of any other deactivation processes.
As mentioned above, light should be considered as one of the reactants in photochemical reactions. Therefore, 'effective concentration' of photons, which is given as the number of light quanta absorbed by the photoreactant, needs to be specified when one compares concentration-time profiles for photochemical conversion. In contrast, the efficiency of a thermal reaction can be visualized by plotting the normalized concentration of the reactant or product against time. To clarify this point we need to analyze the photochemical kinetics in more detail. According to the reaction scheme shown in Scheme 2, the rate of the product formation is:
Formula 15
Inasmuch as concentrations are determined spechtrophotometrically, it is useful to rewrite Eq.(7) in terms of absorbances:
Formula 16
where Formula 17 and Formula 18 refer to absorbance at the irradiation and observation wavelength, respectively. The absorbance, Formula 19 , is measured at the observation wavelength and infinite time, i.e., after complete conversion to the product. In the case of very weak absorption (A<<1), Eq.(8) is easily integrated and experimental data are linearized in the coordinates corresponding to the following equation:
Formula 20
If we assume that the wavelength where only the reactant absorbs was selected for observation then absorbance in Eq.(9) can be replaced with the reactant concentration. Now a simpler equation, which looks very similar to the rate equation for the first-order thermal reaction can be obtained:
Formula 21
In contrast to thermal reactions, the proportionality coefficient Formula 22 in Eq.(10) is not just a rate constant independent of the initial concentration, but a complex quantity depending on three parameters. Therefore, the time dependence of the reactant concentration cannot be directly used to compare photoreactivity of different molecules, or even the same molecule if it was measured under different irradiation conditions. We could say that we need to know not only the reactant concentration but also effective 'light concentration',Formula 23 , in order to analyze photochemical systems. Even if we use the same light source for two systems we cannot directly compare results unless we know how much light was absorbed by each system. Figure 4 shows simulated concentration profiles for two systems that realize the same photochemical reaction S --> P, but differ in spectral parameters and quantum yields. This plot shows how misleading could be a comparison of relative concentrations plotted against time for photochemical reactions if the system is not completely specified.
Figure 4
Figure 4. Time profiles for the normalized concentrations of two compounds undergoing an irreversible first-order photoreaction with a quantum yield of 0.1 and 1.0. Solutions containing these compounds at the same initial concentrations were irradiated with the same mercury lamp equipped with a 365 nm narrow-band filter. Which line, blue or red, corresponds to the molecule with the higher quantum yield (more photoreactive)? This question can only be answered when the absorbances at the irradiation wavelength are compared (see the insert for the absorption spectra). The substance corresponding to the blue curve has 60 times larger absorbance at 365 nm, which is responsible for faster conversion despite the 10 times lower quantum yield for its photoreaction. As to the question, the correct answer is that the red line corresponds to the molecule with the photoreaction quantum yield of 1.0. However, an extremely weak absorption at 365 nm results in a relatively slow phototransformation of this compound.
In a typical photochemical experiment the concentration of the excited species and transients (S* and T* in Scheme 2) is negligible in comparison to that of the ground-state species (S and P). Assuming that T* is not reactive and initially we have only the reactant at the concentration [S]0 we can write:
[S]0 ≈ [P]+[S].
Assuming that light absorption by all transient can also be neglected we can rewrite Eq.(7) as follows:
Formula 24
where W([S],[P],t) is the rate of light absorption by the reactant which is a function of time t and concentrations both the reactant and product. To obtain the expression for W we will use the Beer-Lambert law (Figure 3), and the fact the absorbances of components in a mixture add up together:
Formula 25
Here I0 is the intensity of monochromatic light entering the sample expressed in mol L-1s-1, Formula 26 and Formula 27 are molar absorptivities of the reactant and product (M-1 cm-1), and l is the optical path (cm). The course of a photochemical reaction is often monitored spectrophotometrically at wavelength(s) different from the irradiation wavelength. By using the Beer-Lambert law we may write for the absorbance measured at the irradiation and observation wavelength at time t: Formula 28 and Formula 29. By using the absorbance, Formula 30, measured at the observation wavelength and infinite time and the three equations shown above we obtain Eq.(8). In the general case, Eq.(8) cannot be integrated. But it can be easily solved for very low and very high absorbance, and also for a special case when the product does not absorb at the irradiation wavelength, Formula 27 = 0. In the later case we obtain:
Formula 31
Formula 32
4. Theoretical Models of Photochemical Reactions
Within the Born-Oppenheimer approximation, potential energy surfaces govern nuclear motion and, therefore, chemical reactivity. However, in studying photochemistry it is also good to keep in mind that this is just an approximation, which is not automatically valid for all possible geometries and experimental conditions. A comprehensive picture of nuclear dynamics can be obtained from the time-dependent Schrödinger equation. However, a detailed account of nuclear motion can also be inferred from classical trajectories for a point moving without friction on the potential energy surface. The moving point may represent a chemically reactive system which consists of one or several molecular species. In the latter case one considers all reactants as a "supermolecule". The forces acting on the nuclei are given by minus the gradient of the potential (electronic energy) at this point. Recall that the gradient for a function of many variables is a vector formed by the first derivatives with respect to each of the variables.
Points on the surface that are characterized by the gradient vector of zero length are called stationary points. Their location is of primary importance for chemical reactivity. The nature of stationary points is determined by the secondary derivatives, the so-called Hessian matrix. If all the eigenvalues of this matrix are positive, the point is a minimum, which can be assigned to a reactant, product or intermediate. A first-order saddle point has all positive eigenvalues except for one, which is negative. It means that it is a maximum with respect to a single coordinate and a minimum in all other directions. Passage from one minimum to another one describes a chemical reaction and a saddle point between the two minima represents the transition state. Because of difficulties in representation of multidimensional hypersurfaces one-dimensional cross-sections through them are frequently used. The cross-sections may be compared to the potential energy curves of diatomic molecules and may often look similar to such curves. However, they must be interpreted with caution. For example, a saddle point may appear both as a minimum and maximum on two different cross-sections.
Thermal reactions are generally considered to be adiabatic, i.e., they are represented by the motion on the lowest potential energy surface. Another way of putting this is that these reactions occur exclusively in the ground state. Therefore, knowledge of the ground-state potential surface is sufficient for modeling thermal reactivity with reaction rate theories. In contrast, the theoretical treatment of any photochemical reaction requires information about potential energy surfaces for more than one state. The photoreaction starts from the ground state of the reactant(s), necessarily proceeds via electronically excited state(s), and ends with the product(s) in the ground state. Therefore, photochemical reactions inevitably include diabatic processes, i.e., a transition from one potential surface to another. This statement should illuminate the complexity of the theoretical analysis of photoreactions, especially because reliable calculations of the potential energy surfaces for electronically excited states of reasonably large molecules still represent a challenge for computational chemistry. Nevertheless, many fundamental aspects of complex photoinduced reactions still can be understood from qualitative analysis of potential energy surfaces.
Figure 5
Figure 5. Franck-Condon Principle. The vibrational functions of two electronic states are approximately harmonic oscillator-like functions. The most probable position of the nuclei in the ground state corresponds to the maximum of the probability distribution function for the zero level (red curve). The energy gap between vibrational levels is usually large enough so that population of excited levels is small. An electronic transition caused by light absorption is represented by a vertical line (block arrow). The highest probability of the transition corresponds to the largest overlap between the ground-state and excited-state vibrational wavefunctions. The overlap is greatest for the S1 vibrational level whose classical turning point is near the equilibrium distance of the ground state.
Upon light absorption, a molecular system may be transferred from the ground state to an electronically excited state. According to the Franck-Condon principle, this transition tends to occur between those vibrational levels of two electronic states that have the same nuclear configurations. The time required for the absorption of a light quantum (~1 fs) is much shorter than a characteristic time of a nuclear vibration (~100 fs), and therefore, the nuclei cannot change their relative positions during the act of excitation. In other words, transitions between two potential energy surfaces can be represented by vertical lines connecting them (see Figure 5). In the course of a photochemical reaction there is a considerable time interval when the molecular system is out of the thermal equilibrium (a few ps in condensed phase, up to ms in low pressure gas phase reactions). It means that the population of vibrational energy levels may differ strongly from that predicted by the Boltzmann distribution (see Basic Photophysics). As a consequence of "vertical" electronic transitions and different equilibrium geometries of the ground and first excited state (Figure 5), immediately after excitation the molecular system will likely be in an excited vibrational state ("hot" molecule).
The amount of extra energy available for nuclear motion is a function of the excitation energy (wavelength). Vibrational excitation may also result from internal conversion or intersystem crossing, when electronic energy is converted into kinetic energy of the nuclei. It is known that internal conversion from S1 to S0 can be so fast in some systems that the thermal equilibration is first achieved only in the ground state. In solution, "hot" molecules in the first excited or ground state are quickly cooled down via interactions with the surroundings. Thermal equilibrium is normally established within a few picoseconds. Nevertheless this time is long enough to comprise several vibrational periods. The excess of kinetic energy may help the reactant(s) to overcome the barrier and relax into a new minimum. Chemical reactions of this type are called "hot". They preferentially occur in the gas phase at lower pressure where molecular collision frequency is much smaller than in the condensed phase.
We have already discussed that theoretical analysis of thermal reactions can be accomplished when minima and saddle points on the ground-state surface are allocated. The situation is much more complex for photochemical reactions. Difficulties emerge when one needs to explore several potential energy surfaces in detail. Luckily, only a few excited-state surfaces are of importance for the majority of photoreactions. Even so, topology of the three surfaces, S0, S1 and T1, which are almost without exception needed to understand the photoreaction mechanism, may be extremely complex. Minima on S1 and T1 surfaces may be anticipated in the regions near the ground-state equilibrium geometries and near geometries, corresponding to intermolecular complexes. The latter minima reflect much larger polarizability of excited species and therefore higher affinity to other molecules. Excited complexes can be formed from two molecules of the same type (excimer), or from two different molecules (exciplex). Return from the minima of these two types to the ground state usually does not produce a chemical change (Figure 6a) unless significant geometrical changes accompany the excitation and/or multiple close-spaced minima exist on the ground state surface (Figure 6b). Formaldehyde provides an example for large geometrical distortions in the excited state, the molecule is planar in S0, and pyramidal in S1 and T1.
Figure 6
Figure 6. Schematic representation of the energy profiles corresponding to the ground state and the first excited state (a) for a system that undergoes an excited-state reaction but achieves no chemical conversion upon returning to the ground state and (b) for a system with partial conversion upon jumping to the ground state. Light absorption is represented by red block arrows, light emission by white block arrows.
In addition to localizing minima on the potential surfaces, finding the regions where the surfaces may cross or come very close to each other is of primary importance. The Born-Oppenheimer approximation is generally invalid in the vicinity of surface crossings and additional effects must be taken into account to describe the time evolution of the molecular species. The non-crossing rule states that potential energy curves can cross only if the electronic states have different symmetry (spatial or spin). Therefore the wavefunctions in the crossing region predicted by the simplest approximation has to be modified to avoid crossing of the potential energy curves (Figure 7). The non-crossing rule is strictly valid only for diatomic molecules. Intersection or touching of potential energy surfaces in polyatomic systems is generally allowed even if they belong to the states of the same symmetry. Recent studies showed that such crossing, also called conical intersection because of the topology of the surfaces at the crossing point, is quite common. The question whether a true conical intersection or avoided crossing is observed for a particular system of interest can be answered only with quantum mechanical calculations of high accuracy. These calculations recently became feasible for relatively large organic molecules, but reliable data are available just for a few systems.
Figure 7
Figure 7. Adiabatic (solid) and non-adiabatic energy curves (dashed) for the S0 and S1 states. The light absorption is a vertical transition (block red arrow). Nuclear motion after excitation is governed by the S1 curve. Blue arrows show the motion in the case of avoided crossing and the black broken arrow corresponds to the allowed crossing.
Two hypothetical surfaces for the ground- and an excited state are depicted in Figure 8. The fact that multidimensional potential-energy surfaces may have numerous regions where they come very close to each other is of great importance for understanding photochemical mechanisms. First, non-radiative transitions such as internal conversion and intersystem crossing have much higher probability in these regions. Second, conical intersections (or weakly avoided crossings) serve as bottlenecks through which the photoreaction passes on the way from excited-state species to the ground-state products. In this sense crossing points are analogous to the transition states on the adiabatic surfaces. An essential distinct feature of the conical intersection is the presence of two independent pathways for the reaction (path f) as compared to the single path through the saddle point.
Figure 8
Figure 8. Potential energy surfaces of the ground and an excited state with various pathways (dashed lines) following the light absorption (red arrow).
"Vertical" excitation typically leads to vibrationally excited species. Thermal equilibrium may be established during the lifetime of the excited state, meaning that vibrational relaxation takes place and the photoreaction starting from a minimum on the excited-state surface is said to have an excited-state intermediate (path a). Return from the first or even the second minimum reached on the excited-state surface often does not produce a new species (right part of path c) and the whole sequence may be considered as a photophysical process. A typical example is the protolytic dissociation of 1-naphthol in the singlet excited state (Scheme 3). The acidity of this molecule increases dramatically upon excitation (pKa = 9.2 and 0.4 for S0 and S1) and proton is transferred to a suitable acceptor such as water. It has to be noted that Scheme 3 does not account for all photoprocesses occurring in
1-naphthol solutions.
Scheme 3
Scheme 3
The primary excited-state intermediate in Figure 8 may produce a new molecule in the excited state, which undergoes further modifications (path b), or returns to a new minimum on the ground-state surface (left part of path c). A jump from the excited-state surface can be accomplished via non-radiative transition (path c) or light emission
(path d). An illustrative example of an excited-state intermediate in the photochemical reaction is the interaction of 9-cyanophenathrene with tetramethylethene in benzene that forms a cycloadduct via a singlet exciplex (Scheme 4).
Scheme 4
Scheme 4
The reaction sequences represented by motion on the excited-state adiabatic surface are usually called adiabatic reactions. If the loss of excitation occurs anywhere on the reaction path between the points corresponding to reactants and products, then such photoreaction may be referred to as diabatic (also called non-adiabatic). It is also possible that the vibrational relaxation first occurs in the ground state (path e in Figure 8). Such a photoreaction is called "direct". A direct reaction proceeds through a funnel (path f), which is a region of the potential energy surface where the probability for a jump from one energy surface to another one is very high. Funnels usually correspond to conical intersections or weakly avoided crossings. To characterize a molecule in a funnel one needs not only the positions of the nuclei but also their velocity vectors. In some systems passage through a conical intersection may also be separated from the excited-state minimum initially populated by a small barrier (paths a and c assuming that surfaces now cross at the point corresponding to path c). The presence of a S1-S0 conical intersection separated from the "vertical" geometry by a small barrier has been predicted for benzene. This funnel is responsible for the opening of efficient deactivation channel leading to disappearance of fluorescence and isomerization (Scheme 5) when the benzene molecule has enough vibrational energy to overcome the barrier.
Scheme 5
Scheme 5
5. Factors Determining Outcome of a Photochemical Reaction
The wide variety of molecular mechanisms of photochemical reactions makes a general discussion of such factors very difficult. The chemical nature of the reactant(s) is definitely among the most important factors determining chemical reactivity initiated by light. However, a better understanding of this aspect may be gained from a closer examination of the individual groups of chemical compounds. The nature of excited states involved in a photoreaction is directly related to the electronic structure of the reactant(s).
Environmental variables, i.e., parameters that are not directly related to the chemical nature of the reacting systems, may also strongly affect photochemical reactivity. It is useful to distinguish between variables that are common for thermal and photochemical reactions, and those that are specific for the reactions of excited species. The first group includes reaction medium, reaction mixture composition, temperature, isotope effects to name the most important. The distinctive feature of photochemical reactions is that these parameters almost always operate under conditions when one or more photophysical processes compete with a photoreaction. The result of a photoinduced transformation can only be understood as the interplay of several processes corresponding to passages on and between at least two potential energy surfaces. We saw that even the simplest system, shown in Scheme 2, corresponds to parallel reactions in terms of reaction kinetics.
Reaction medium may directly modify the potential energy surfaces of the ground and excited states and hence affect the photoreactivity. The outcome of the two reactions presented in Schemes 3 and 4 changes dramatically when solvent polarity and hydrogen bonding capacity are changed. The protolytic photodissociation of 1-naphthol is completely suppressed in aprotic solvents because of unfavorable solvation energies both for the anion and proton. Under such conditions, proton transfer reaction cannot compete with the deactivation. The formation of two new products (Scheme 6) in the reaction of 9-cyanophenathrene with tetramethylethene is observed in methanol, because the exciplex dissociated into radical ions. It means that the potential energy minimum corresponding to the ion-radical pair shifts below that of the exciplex in polar solvents. The ion-radical formation is often followed by proton transfer reactions.
Scheme 6
Scheme 6
Solvent viscosity will strongly affect photoreactions where the encounter of two reactants or a substantial structural change are required. In highly viscous or solid solutions the loss of excitation via light emission or unimolecular non-radiative deactivation is more probable than a chemical modification of the excited species. On the other hand, slow diffusion in viscous solutions may prevent self-deactivation of the triplet state via a bimolecular process called triplet-triplet annihilation and enhance the efficiency of a photoreaction from this state. Triplet-triplet annihilation belongs to electronic-energy transfer processes, which may be classified as quenching of excited states. Quenching rate is a very important factor in discussing effects of medium and reaction mixture composition on photoreactivity. Quenching of excited states is a general phenomenon that is realized via different mechanisms. Any process that leads to the disappearance of the excited state of interest may be considered as quenching. In general it can be represented as:
Scheme 7
Scheme 7
Notice that the quencher molecule Q may belong to the same kind of chemical species as the excited molecule, and be either in the ground or in an excited state. S' corresponds to the ground state or to an excited state of lower energy. For the purpose of our discussion we separated quenching described by Scheme 7 from all other processes, including the photoreaction of interest introduced in Scheme 2. Obviously, this separation is just a matter of convention. Generally, any chemical reaction of the excited species can be considered as a quenching process for fluorescence. Scheme 7 can easily be incorporated into the reaction scheme (see Scheme 8) and into our kinetic analysis as an additional pseudo-unimolecular rate constant kq[Q].
Scheme 8
Scheme 8. Kinetic scheme for a simple system with a photoreactive singlet state in the presence of a quencher.
In the presence of a quencher, Q, the observed lifetime of the excited molecule and therefore the quantum yield of the photoreaction may be significantly reduced.
(11) Formula 33
(12) Formula 34
Eq. (4), (6), (11), and (12) can be combined into a single one:
(13) Formula 35
where index "0" refers to the system without quenching. If we would consider the fluorescence quantum yield instead of the photoreaction yield, we would obtain a similar equation, which is known as the Stern-Volmer equation. The mechanism just considered corresponds to so-called dynamic quenching that results purely from encounters between excited molecules and the quencher. It is also conceivable that Q and S form a ground-state complex, which has a different reactivity and/or does not fluoresce. This situation is referred to as static quenching. In the case of static quenching, the quantum yield is diminished but the observed lifetime remains constant. In any event, the existence of quenching emphasizes the importance of the concentration as a controlling factor in photochemistry. For many systems, the quenching rate constant, kq, is close to the diffusion-controlled limit which is of the order 1010 M-1s-1 at ambient temperature in liquid solutions. It means that quenching effects may become noticeable at the quencher concentrations > 1 mM and > 1 µM for the singlet state and triplet state with characteristic lifetimes of 10 ns and 10 µs, respectively. Thus even minor impurities may cause photoreaction quenching. The concentration of the photoreactive compound S may also play an important role if self-quenching takes place.
Because of the energy conservation law the excitation energy in a quenching process must be either dissipated in the form of thermal energy, or accumulated in the form of chemical energy of the quenching products or transferred to the quencher Q. According to these three possibilities one may distinguish physical mechanisms of quenching from chemical ones and from energy transfer. However, a clear cut is not always possible or worth making. The formation of excimers is frequently observed in solutions of aromatic hydrocarbons, such as anthracene or pyrene. The potential energy surfaces in these systems frequently look similar to that shown in Figure 6a. Thus, the entire reaction sequence leads only to quenching of the excited monomer. The quenching will be seen in a reduced quantum yield of the monomer fluorescence and a monomer photoreaction. An illustrative example is
1-hydroxypyrene, which is a moderately strong photoacid in water. In the singlet excited states it readily transfers a proton to a suitable base such as acetate anion (analogous to the reaction shown in Scheme 3). But at higher concentrations of 1-hydroxypyrene, the quantum yield of the photoinduced proton transfer decreases because of the formation of the excimer, which is not as efficient as the proton donor. Exciplexes are typically more reactive, and provide examples for combined physical and chemical quenching (see Schemes 4 and 6).
Fluorescence self-quenching in aqueous solutions of dyes, such as fluorescein or eosin, has been known for more than 100 years. Several mechanisms involving collisional quenching, ground-state aggregation and energy transfer to the aggregates has been proposed to account for this phenomenon. In principle, quenching by the ground state could be observed for almost every excited species under conditions favoring the close proximity of two molecules. That is why it is often reported for systems with confined geometries such as those of surfactant assemblies. There exist many examples of self-quenching of the triplet state that plays a role in photochemistry. For example, the quenching of anthrone triplets by its ground state in benzene occurs with a rate constant close to 109 M-1s-1 and results in the formation of two radicals (Scheme 9). The photoreactivity of 10,10-dimethylanthrone differs dramatically, because methyl substituents prevent the reactive self-quenching.
Scheme 9
Scheme 9
Compounds with heavy atoms and paramagnetic species increase the rate of intersystem crossing. It has to be emphasized that such molecules enhance the efficiency both of the S1 --> T1 and T1 --> S0 transitions, and should be considered as quenchers both for singlets and triplets. The yields of photochemical reactions originating from the singlet excited state, as a rule, are adversely affected by these quenchers. In contrast, the efficiency of photoconversion from the triplet state is usually increased because the triplet lifetime remains sufficiently long in the presence of a quencher, and the overall effects is largely determined by an increase in the yield of the triplets. An example is given by the photoreaction of anthracene with 1,3-cyclohexadiene which mainly forms product A (Scheme 10). In the presence of methyl iodide (iodine is a heavy atom), the major product is compound B, which was also obtained in small quantities in the absence of the quencher. The results suggest that B is formed in a triplet-state reaction.
Scheme 10
Scheme 10
The most important paramagnetic species is molecular oxygen, which is known to be very efficient quencher of excited states. Quenching by O2 is particularly important for the triplet state because of its long lifetime (see Eq.(13)), so that even traces of oxygen may strongly affect photoreaction occurring through the triplet state. The ground state of O2 is a triplet state. The first singlet excited state is only 22 kcal mol-1 above the ground state. This energy corresponds to near-IR radiation with a wavenumber of 7882.4 cm-1 or wavelength of 1269 nm. Singlet oxygen is a reactive species interacting with a wide variety of substrates. It can be generated using dyes with a high triplet yield, such as rose bengal or methylene blue. As mentioned above this process belongs to the type of quenching that is called electronic energy transfer.
The outcome of an energy-transfer process is the quenching of the luminescence or photoreaction associated with the donor and the initiation of the luminescence or photoreaction characteristic of the energy acceptor. The subsequent reactions of the acceptor are said to be sensitized. Electronic energy transfer can be described by Scheme 7, where Q' has to be an excited-state species. Two general mechanisms of the energy transfer are distinguished: radiative and nonradiative. The radiative mechanism, often described as "trivial", is realized through the emission of light by the donor, and its absorption by the acceptor. Nonradiative energy transfer is a single-step process that requires the direct interaction of the donor and acceptor.
The specific variables of any photoreaction, as compared to thermal chemical processes, are the wavelength and intensity of excitation light. Wavelength dependence of the quantum yield or photoproduct composition may result from the occurrence of "hot" reactions or reactions from higher excited states (S2, T2, etc.). The latter processes have to be extremely fast to compete with internal conversion, which is typically accomplished within 1 ps. The presence of slowly inter-converting conformers or isomers with different absorption spectra may also cause wavelength-dependent photoreactions.
The intensity of excitation light is the key to multi-photon photochemistry (see legend to Figure 3). Because of the n-th power dependence of the absorption rate the photoreaction may become detectable only when the photon flux is above a certain threshold value. In one-photon photoreactions, primary processes are normally not affected by the light intensity. However, the overall reaction might be very sensitive to it, because relatively long-lived intermediates may come into play. These intermediates may absorb light and undergo photochemical reactions, or be involved in bimolecular reactions (e.g., triplet-triplet annihilation) that are strongly dependent on their concentration, and therefore light intensity.
6. Exciting World of Photoreactions
There exist a plethora of photoreactions practically for each class of chemical compounds. These reactions may be categorized according to chemical composition and structure. They may also be classified under different types by using theoretical models for the description of the excited state(s) or structure of the potential energy surface. However, for our introductory discussion it seems to be more appropriate just to consider some examples classified by general reaction types (Figure 9).
Figure 9
Figure 9. Multiple reaction pathways for electronically excited species.
Considering the high energies involved in electronic excitation, photoinduced dissociation may be expected as a typical reaction pathway. However, photodecomposition via dissociative pathway(s) is not so common in photochemistry, particularly for large molecules in solution. This can be easily understood if one recalls that electronic excitation is usually not localized in a particular vibrational mode, and primary products of the dissociation have a high probability to recombine, because of cage effects. In solution, two fragments of a molecule are trapped within a "cage" of solvent molecules, and undergo numerous collisions before they escape the cage. Dissociative processes play a much more important role in gas-phase photochemistry. The photodissociation of small molecules driven by the UV radiation is of profound importance for atmospheric photochemistry.
Photodecomposition of O2 and O3 (Scheme 11) may afford the products in different electronic states depending on the excitation wavelength. These processes participate in establishing peculiar profiles for atmospheric temperature and the solar radiation spectrum on the earth surface. The gas-phase photoionization (removal of an electron) also belongs to dissociation reactions. The reactions shown in Scheme 12 may occur in the upper atmosphere due to short-wavelength UV radiation from the Sun.
Scheme 11
Scheme 11
Scheme 12
Scheme 12
An important primary photoprocess of carbonyl compounds is alpha cleavage, also known as a Norrish Type I reaction (Scheme 13). Besides recombination, the acyl and the alkyl radicals formed in the primary reaction can undergo numerous secondary reactions that are responsible for the multitude of final products.
Scheme 13
Scheme 13
Rearrangements of electronically excited molecules present one of the most exciting chapters in photochemistry in the sense that they follow reaction pathways that are usually inaccessible for the ground state (activation barriers in the ground state are very high). The cis-trans isomerization of double bonds belongs to such reactions. The azobenzene reaction depicted in Scheme 1 provides an instructive example. Scheme 14 shows photoinduced rearrangements of stilbene that has been extensively studied. In addition to double bond isomerization, cis-stilbene undergoes also cyclization with a lower quantum yield to form dihydrophenanthrene. The cis-trans isomerization of stilbene occurs through rotation around the double bond. In the ground state this rotation encounters a large barrier, i.e., there is a maximum on the ground-state potential energy surface at the geometry corresponding to a twist angle of about 90o. In contrast, both the first singlet excited state and triplet state have a minimum approximately at the same geometry. The close proximity of the minimum and maximum facilitates a jump to the ground state (compare to path c in Figure 8). The cis-trans isomerization of azobenzene may proceeds not only through rotation, but also through nitrogen inversion, i.e. in-plane motion of the phenyl ring.
Scheme 14
Scheme 14
Two illuminating examples of photoinduced rearrangements of substituted benzaldehydes are presented in Scheme 15. Intramolecular hydrogen transfer in 2-hydroxybenzaldehyde is an extremely fast reaction in the singlet excited state. However, the process is completely reversed upon a jump to the ground state. Overall, no chemical conversion is observed and excitation energy is either dissipated as heat or emitted as light, but with a longer wavelength (see Figure 6a). This behavior is typical for aromatic carbonyl compounds with ortho-hydroxy groups, and they found application as UV protectors, in sunscreens for example. Molecules acting as UV protectors absorb light that is harmful for biological molecules, and convert light into heat or radiation that is biologically benign. In contrast, an intramolecular hydrogen transfer in
2-nitrobenzaldehyde initiates a sequence of the ground-state reactions that leads to 2-nitrosobenzoic acid. The latter molecule is a moderately strong acid, and dissociates in aqueous solutions so that the photochemistry of 2-nitrobenzaldehyde can be used to create a rapid pH-jump in solution. Many biological macromolecules, such as proteins and nucleic acids, show pH-dependent conformational changes. Those changes can be monitored in real time by using the light-induced
Scheme 15
Scheme 15
Scheme 9 gives an example of photoinduced abstraction. The two reactions shown in Schemes 3 and 6 can also be classified as abstraction reactions. Here, a proton or an electron is abstracted from the excited molecule by the ground-state species. These processes, often combined under the term "charge-transfer reactions", play an important role in many photoinduced processes. Excited-state electron transfer constitutes a decisive step in the entire process of photosynthesis. Hydrogen atom abstraction reactions are known for more than 100 years and belong to the most extensively studied photoprocesses. The reaction of benzophenone in the triplet excited state with isopropanol provides another example for this type of photoreactions (Scheme 16). The dimethylketyl radical produced transfers a hydrogen atom to benzophenone in the ground state to produce another diphenylketyl radical. It is interesting that only one photon is needed to convert two molecules of the reactant and the quantum yield of benzophenone decomposition has a limiting value of 2.
Scheme 16
Scheme 16
Intramolecular hydrogen abstraction is a common photoreaction of carbonyl compounds with a hydrogen atom attached to the fourth carbon atom (Scheme 17). The resulting diradical can form cycloalkanol or undergo C-C bond fission to give an alkene and enol. The latter is usually thermodynamically unfavorable and converts to a ketone. Intramolecular abstraction of a -hydrogen is known as a Norrish Type II process.
Scheme 17
Scheme 17
Photosubstitution reactions are well characterized for substituted aromatic compounds. An illustrative example is the photoreaction of m-nitroanisole with cyanide ion (Scheme 18). The mechanism involves a complex of the aromatic molecule in the triplet state with the nucleophile.
Scheme 18
Scheme 18
A photohydrolysis reaction in aqueous solution (substitution with OH-) was utilized to provide the rapid light-controlled release of biologically active molecules, such as aminoacids, nucleotides, etc. Biologically inert compounds affording such release upon photoirradiation are referred to as "caged" compounds. Two-photon photochemistry is of great interest for such studies, because one can utilize red light or IR radiation, which is not absorbed by biomolecules, and is biologically benign. The two-photon photohydrolysis of the glutamate ester of hydroxycoumarin (Scheme 19) is characterized by a reasonably high cross-section for two-photon absorption.
Scheme 19
Scheme 19
Addition reactions are quite common among electronically excited molecules. An example of cycloaddition occurring both from the singlet and triplet excited state is shown in Scheme 10. Photoinitiated cycloaddition reactions are of great importance for understanding the mutagenic effects of UV radiation. Two major photolesions produced in DNA by UV light are cyclobutane pyrimidine dimers (CPD) and pyrimidine(6-4)pyrimidone adducts (P64P) (Scheme 20). These lesions are thought to represent the predominant forms of premutagenic damage. Generally, the overall yield of P64P is substantially lower than that for CPD, but CPD was found to be less mutagenic than the P64P adduct. CPD is formed in cycloaddition involving excited thymine or cytosine and another pyrimidine nucleobase in the ground state. The proposed, but still not proven mechanism for the P64P formation includes an unstable intermediate with a 4-membered ring, which undergoes fast H-transfer and ring opening.
Scheme 20
Scheme 20
The addition of singlet oxygen to double bonds is well known. Because singlet oxygen can be generated photochemically via energy transfer, the entire reaction sequence, such as shown in Scheme 21, provides an example of sensitized addition photoreaction.
Scheme 21
Scheme 21
7. Supplemental Reading
Barltrop, J.A., Coyl, J.D. (1975) Excited states in organic chemistry, London ; New York: Wiley, 1975, 376 p.
Klessinger, M., Michl J. (1995) Excited states and photochemistry of organic molecules, New York: Wiley-VCH Publishers, 538 p.
Michl, J., Bonacic-Koutecky, V. (1990) Electronic aspects of organic photochemistry, New York: Wiley, 475 p.
Turro, N.J. (1991) Modern Molecular Photochemistry, Sausalito: University Science, 628 p.
Wayne, C.E., Wayne, R.P. (1996) Photochemistry, Oxford: Oxford University Press, 96 p.
[ TOP ] |
a32a1b3b91461d4b | You are currently browsing the tag archive for the ‘Super Bowl commercial’ tag.
Jim Colliander, Mark Keel, Gigliola Staffilani, Hideo Takaoka, and I have just uploaded to the arXiv the paper “Weakly turbulent solutions for the cubic defocusing nonlinear Schrödinger equation“, which we have submitted to Inventiones Mathematicae. This paper concerns the numerically observed phenomenon of weak turbulence for the periodic defocusing cubic non-linear Schrödinger equation
-i u_t + \Delta u = |u|^2 u (1)
in two spatial dimensions, thus u is a function from {\Bbb R} \times {\Bbb T}^2 to {\Bbb C}. This equation has three important conserved quantities: the mass
M(u) = M(u(t)) := \int_{{\Bbb T}^2} |u(t,x)|^2\ dx
the momentum
\vec p(u) = \vec p(u(t)) = \int_{{\Bbb T}^2} \hbox{Im}( \nabla u(t,x) \overline{u(t,x)} )\ dx
and the energy
E(u) = E(u(t)) := \int_{{\Bbb T}^2} \frac{1}{2} |\nabla u(t,x)|^2 + \frac{1}{4} |u(t,x)|^4\ dx.
(These conservation laws, incidentally, are related to the basic symmetries of phase rotation, spatial translation, and time translation, via Noether’s theorem.) Using these conservation laws and some standard PDE technology (specifically, some Strichartz estimates for the periodic Schrödinger equation), one can establish global wellposedness for the initial value problem for this equation in (say) the smooth category; thus for every smooth u_0: {\Bbb T}^2 \to {\Bbb C} there is a unique global smooth solution u: {\Bbb R} \times {\Bbb T}^2 \to {\Bbb C} to (1) with initial data u(0,x) = u_0(x), whose mass, momentum, and energy remain constant for all time.
However, the mass, momentum, and energy only control three of the infinitely many degrees of freedom available to a function on the torus, and so the above result does not fully describe the dynamics of solutions over time. In particular, the three conserved quantities inhibit, but do not fully prevent the possibility of a low-to-high frequency cascade, in which the mass, momentum, and energy of the solution remain conserved, but shift to increasingly higher frequencies (or equivalently, to finer spatial scales) as time goes to infinity. This phenomenon has been observed numerically, and is sometimes referred to as weak turbulence (in contrast to strong turbulence, which is similar but happens within a finite time span rather than asymptotically).
To illustrate how this can happen, let us normalise the torus as {\Bbb T}^2 = ({\Bbb R}/2\pi {\Bbb Z})^2. A simple example of a frequency cascade would be a scenario in which solution u(t,x) = u(t,x_1,x_2) starts off at a low frequency at time zero, e.g. u(0,x) = A e^{i x_1} for some constant amplitude A, and ends up at a high frequency at a later time T, e.g. u(T,x) = A e^{i N x_1} for some large frequency N. This scenario is consistent with conservation of mass, but not conservation of energy or momentum and thus does not actually occur for solutions to (1). A more complicated example would be a solution supported on two low frequencies at time zero, e.g. u(0,x) = A e^{ix_1} + A e^{-ix_1}, and ends up at two high frequencies later, e.g. u(T,x) = A e^{iNx_1} + A e^{-iNx_1}. This scenario is consistent with conservation of mass and momentum, but not energy. Finally, consider the scenario which starts off at u(0,x) = A e^{i Nx_1} + A e^{iNx_2} and ends up at u(T,x) = A + A e^{i(N x_1 + N x_2)}. This scenario is consistent with all three conservation laws, and exhibits a mild example of a low-to-high frequency cascade, in which the solution starts off at frequency N and ends up with half of its mass at the slightly higher frequency \sqrt{2} N, with the other half of its mass at the zero frequency. More generally, given four frequencies n_1, n_2, n_3, n_4 \in {\Bbb Z}^2 which form the four vertices of a rectangle in order, one can concoct a similar scenario, compatible with all conservation laws, in which the solution starts off at frequencies n_1, n_3 and propagates to frequencies n_2, n_4.
One way to measure a frequency cascade quantitatively is to use the Sobolev norms H^s({\Bbb T}^2) for s > 1; roughly speaking, a low-to-high frequency cascade occurs precisely when these Sobolev norms get large. (Note that mass and energy conservation ensure that the H^s({\Bbb T}^2) norms stay bounded for 0 \leq s \leq 1.) For instance, in the cascade from u(0,x) = A e^{i Nx_1} + A e^{iNx_2} to u(T,x) = A + A e^{i(N x_1 + N x_2)}, the H^s({\Bbb T}^2) norm is roughly 2^{1/2} A N^s at time zero and 2^{s/2} A N^s at time T, leading to a slight increase in that norm for s > 1. Numerical evidence then suggests the following
Conjecture. (Weak turbulence) There exist smooth solutions u(t,x) to (1) such that \|u(t)\|_{H^s({\Bbb T}^2)} goes to infinity as t \to \infty for any s > 1.
We were not able to establish this conjecture, but we have the following partial result (“weak weak turbulence”, if you will):
Theorem. Given any \varepsilon > 0, K > 0, s > 1, there exists a smooth solution u(t,x) to (1) such that \|u(0)\|_{H^s({\Bbb T}^2)} \leq \epsilon and \|u(T)\|_{H^s({\Bbb T}^2)} > K for some time T.
This is in marked contrast to (1) in one spatial dimension {\Bbb T}, which is completely integrable and has an infinite number of conservation laws beyond the mass, energy, and momentum which serve to keep all H^s({\Bbb T}^2) norms bounded in time. It is also in contrast to the linear Schrödinger equation, in which all Sobolev norms are preserved, and to the non-periodic analogue of (1), which is conjectured to disperse to a linear solution (i.e. to scatter) from any finite mass data (see this earlier post for the current status of that conjecture). Thus our theorem can be viewed as evidence that the 2D periodic cubic NLS does not behave at all like a completely integrable system or a linear solution, even for small data. (An earlier result of Kuksin gives (in our notation) the weaker result that the ratio \|u(T)\|_{H^s({\Bbb T}^2)} / \|u(0)\|_{H^s({\Bbb T}^2)} can be made arbitrarily large when s > 1, thus showing that large initial data can exhibit movement to higher frequencies; the point of our paper is that we can achieve the same for arbitrarily small data.) Intuitively, the problem is that the torus is compact and so there is no place for the solution to disperse its mass; instead, it must continually interact nonlinearly with itself, which is what eventually causes the weak turbulence.
Read the rest of this entry »
RSS Google+ feed
Get every new post delivered to your Inbox.
Join 3,310 other followers |
22390944cfa0777a |
Foundational questions of quantum information
April 4-5, 2012
Workshop "Foundational questions of quantum information"
Dates: April 4-5, 2012
Jointly organized by LARSIM and QuPa
Venue: Amphi Opale, 46 rue Barrault, Paris 13e
April 4
9:30-9:45 Coffee and Opening
9:45-10:45 Robert Raussendorf (University of British Columbia)
10:45-11:00 Coffee
11:00-12:00 Oscar Dahlsten (University of Oxford)
14:15-15:15 Matthew Pusey (Imperial College London)
15:15-16:15 Michel Bitbol (CREA, CNRS-Ecole Polytechnique)
16:15-16:45 Coffee
16:45-17:45 Virginie Lerays (LRI, Université Paris Sud)
April 5
9:30-9:45 Coffee
9:45-10:45 Damian Markham (LTCI, CNRS-Télécom ParisTech)
10:45-11:00 Coffee
11:00-12:00 Kavan Modi (University of Oxford and Centre for Quantum Technologies, National University of Singapore)
14:15-15:15 Giacomo Mauro d'Ariano (University of Pavia)
15:15-16:15 Caslav Brukner (University of Vienna)
16:15-16:45 Coffee
16:45-17:45 Alexei Grinbaum (LARSIM, CEA-Saclay)
Robert Raussendorf
"Symmetry constraints on temporal order in measurement-based quantum computation"
We discuss the interdependence of resource state, measurement setting and temporal order in measurement-based quantum computation. The possible temporal orders of measurement events are constrained by the principle that the randomness inherent in quantum measurement should not affect the outcome of the computation. We provide a classification for all temporal relations among measurement events compatible with a given initial quantum state and measurement setting, in terms of a matroid. Conversely, we show that classical processing relations necessary for turning the local measurement outcomes into computational output determine the resource state and measurement setting up to local equivalence. Further, we find a symmetry transformation related to local complementation that leaves the temporal relations invariant.
Oscar Dahlsten
"Tsirelson’s bound from a Generalised Data Processing Inequality"
The strength of quantum correlations is bounded from above by Tsirelson’s bound. We establish a connection between this bound and the fact that correlations between two systems cannot increase under local operations, a property known as the data processing inequality. More specifically, we consider arbitrary convex probabilistic theories. These can be equipped with an entropy measure that naturally generalizes the von Neumann entropy, as shown recently by Short and Wehner. We prove that if the data processing inequality holds with respect to this generalized entropy measure then the underlying theory necessarily respects Tsirelson’s bound. We moreover generalise this statement to any entropy measure satisfying certain minimal requirements. Based on arXiv:1108.4549.
Matthew Pusey
"Comparing two explanations for qubits"
I will discuss two long-standing realist models for qubits - one due to Bell and the other to Kochen and Specker. I will argue that the latter provides a much more compelling explanation of various quantum information phenomena, mainly thanks to the feature that multiple quantum states can apply to the same real state. Finally I will show that, on the other hand, it is precisely this feature that prevents the latter model from explaining a very particular phenomena. Based on arXiv:1111.3328.
Michel Bitbol
"Kant and quantum mechanics: a middle way between the ontic and epistemic approaches"
Instead of either formulating new metaphysical images of the so-called "quantum reality" or rejecting any metaphysical attempt in an empiricist spirit, the case of quantum mechanics might require a redefinition of metaphysics. The sought redefinition will be performed in the spirit of Kant, according to whom metaphysics is the discipline of the boundaries of human knowledge. This can be called a "reflective" conception of metaphysics. Along with this perspective, theoretical structures are neither ontic nor purely epistemic. They do not express exclusively the structure of reality out there, or the form of our own knowledge, but their active interface. Our understanding of the structure of quantum mechanics then works in two steps :
(1) The most basic structures of quantum mechanics are neither imposed onto us (by some pre-structured reality) nor arbitrary (just meant to "save the phenomena"), but made necessary by the general characteristics of our demand of knowledge.
(2) Yet, there can also be additional features of theoretical structures corresponding to special characteristics of our demand of knowledge, adapted to certain directions of research or to cultural prejudice. The "surplus structure" of some of the most popular interpretations of quantum mechanics will be understood this way.
Finally, it will be shown that some of the major "paradoxes" of quantum mechanics, such as the measurement problem, can easily be dissolved by way of this reflective attitude.
Virginie Lerays
"Detector efficiency and communication complexity"
In the standard setting of communication complexity, two players each have an input and they wish to compute some function of the joint inputs. This has been the object of much study in computer science and a wide variety of lower bound methods have been introduced to address the problem of showing lower bounds on communication. Physicists have considered a closely related scenario where two players share a predefined entangled state. Each is given a measurement as input, which they perform on their share of the system. The outcomes of the measurements follow a distribution which is predicted by quantum mechanics. The goal is to rule out the possibility that there is a classical explanation for the distribution, through loopholes such as communication or detector inefficiency. In an experimental setting, Bell inequalities are used to distinguish truly quantum from classical behavior.
Bell test and communication complexity are both measures of how far a distribution is from the set of local distributions (those requiring no communication), and one would expect that if a bell test shows a large violation for a distribution, it should require a lot of communication and vice versa.
We present a new lower bound technique for communication complexity based on the notion of detector inefficiency for the setting of simulating distributions, and show that it coincides with the best lower bound in communication complexity known until now. We show that it amounts to constructing an explicit Bell inequality. Joint work with Sophie Laplante and Jérémie Roland.
Damian Markham
"On non-linear extensions of quantum mechanics"
We present some observations on the restrictions imposed on non-linear extensions of quantum mechanics with respect to non-signaling. We see that non-signaling can be understood as imposing the destruction of correlations, a property noticed for closed time-like curves by Bennett et al, arising from the 'non-linearity trap'. We discuss in what sense such theories can still allow for 'local' cloning and state discrimination. Joint work with Julien Degorre.
Kavan Modi
"Entanglement distribution with quantum communication"
Two distant labs cannot increase the entanglement between them via classical communication. However, they can do so via quantum communication. Surprisingly, the communicated system need not be entangled with either / both of the labs, but it must be quantum correlated (as determined by quantum discord). We show that quantum discord that bounds the increase in the entanglement via quantum communication. Additionally, the bound also leads to subadditivity of entropy and gives an interpretation for negative conditional entropy.
Giacomo Mauro d'Ariano
"Physics from Informational Principles"
Recently quantum theory has been derived from six principles that are of purely informational nature. The "(epistemo)logical" nature of these principles makes them rock solid. We want now to take a pause of reflection about the general foundations of Physics, and re-examine how solid are principles as the Galilean relativity and the Einsteinian equivalence principle. Are they truly compelling? Why are they under dispute, and violations are considered? Following the route of the informational paradigm, I will suggest three new candidate principles, all of informational nature: 1) The Church–Turing–Deutsch principle, namely that theory must allow simulating any physical process by a universal finite computer (this implies that the information involved in any process is locally bounded); 2) topological locality of interaction; 3) topological homogeneity of interactions. These principles along with the six ones for Quantum Theory suggest a new foundation of Quantum Field Theory as Quantum Cellular Automata theory. I will show how this framework can actually provide an extension of Quantum Field Theory to include localized states and observables, whereas Galileo's and Einstein's covariance and other symmetries are only approximate, and to be recovered only in the field-limit, whereas their violation make the extended theory in-principle falsifiable. The new informational principles open totally unexpected routes and re-definitions of mechanical notions (as inertial mass, Planck constant, Hamiltonian, Dirac equation as free flow of information), Minkowsian space‐time as emergent, and an unexpected role for Majorana field in the solution of the so-called Feynman problem of simulating anti-commuting fields by the automaton.
Caslav Brukner
"Tests distinguishing between quantum and more general probabilistic theories"
The historical experience teaches us that every theory that was accepted at a certain time was later inevitably replaced by a deeper and more fundamental theory. There is no reason why quantum theory should be an exception in this respect. At present, quantum theory has been tested against very specific alternative theories, such as hidden variables, non-linear Schrödinger equations or the collapse models. The common feature of all of them is that they keep one or the other basic principle of the classical world intact. Yet, it is very unlikely that a post-quantum theory will be based on pre-quantum concepts. In contrast, it is likely that it will break not only principles of classical but also quantum physics. This gives us a motivation for the following research program: 1) To reconstruct quantum mechanics from a set of axioms. 2) To weaken the axioms and to look for broader structures. 3) To test quantum theory against them. Following this approach I will present two tests that can distinguish between quantum theory and more general probabilistic theories.
Alexei Grinbaum
"Quantum observers and Kolmogorov complexity"
Different observers do not have to agree on how they identify a quantum system. We explore a condition based on algorithmic complexity that allows a system to be described as an objective "element of reality". We also suggest an experimental test of the hypothesis that any system, even much smaller than a human being, can be a quantum mechanical observer.
Maj : 28/03/2012 (1899)
Retour en haut |
ed9228bbd4e6198b | Dismiss Notice
Join Physics Forums Today!
Difference between measurement and interaction
1. Jul 16, 2005 #1
After reading quite many popular science about QM, I still didn't get a real understanding of the difference between a so-called measurement and just an interaction (if there's actually any difference).
My understanding is that a measurement is an interaction "observed" in a way which allows to acquire information about it. The information itself does not need to be acquired by a sentient being, could just be recorded by a data recorder.
Just the fact that the information is there, and the possibility that at any eventual time it could be used, makes it a measurement and would for example destroy interference.
Just an interaction would be an interaction which is not recorded in any way, so that it will never be possible in the future to trace back information about what happened. In this case interference would not be destroyed.
Is this more or less correct? and if it is, does it not certainly and unavoidably put onto the table the famous role of consciousness in QM?
2. jcsd
3. Jul 16, 2005 #2
User Avatar
Staff Emeritus
Science Advisor
Gold Member
It is indeed the whole problem !
Indeed, that's a way to see things. The whole problem resides in what "happens to the system", and as you point out yourself, a "measurement" is nothing else but an interaction ; and the problem is of course that if you treat it as a "measurement" you need to apply a projection (process 1 according to von Neumann), while if you treat exactly the same operation as an interaction, you have a unitary time evolution (process 2 according to von Neumann).
It is even worse: if you treat whatever "records the information" also as a quantum system, you see that there is no *record* of any particular event, but just an entanglement of both state vectors (the one of the recording device, and the one of the system under study). And if you include YOURSELF in the 'recording system', you have nothing else but the many worlds interpretation of quantum theory !
That's what I also think. von Neumann and Wigner were also of that meaning. Then, a lot of people don't buy this. You're well on your way to get "entangled" into the interpretational problems of quantum theory :-)
4. Aug 9, 2005 #3
Possibly, some special kind of interaction deserves the name on measurement interaction. I really don't believe in such artificial projection mechanism. According to the work of G. Rempe, entanglement must be involved in the answer to this problem. As the system shares its information content with some leaky enviroment, i.e, with subsystems which can only be described termodynamically, the reduced density matrix experiences what we may call as a projection. But the information has not been exactly lost.
Best Regards
5. Aug 10, 2005 #4
I will say is not precise, but can help to you (i suspect you are not an expert).
Any interaction between two quantum systems may follow QM. Schrödinger equation is valid.
A measurement is not explained from QM. The process does not follow Schrödinger equation. There exists several proposals for new equations beyond QM: Ito-Schrödinger, Caldeira-Legget, Penrose gravity, Prigogine theory, etc.
All of those have well-defined problems.
Last edited by a moderator: Aug 10, 2005
Have something to add? |
5eb31ee6cb2c55a3 | Saturday, August 31, 2013 ... Français/Deutsch/Español/Česky/Japanese/Related posts from blogosphere
Argumentation about de Broglie-Bohm pilot wave theory
Guest blog by Ilja Schmelzer, a right-wing anarchist and independent scientist
A nice summary of standard arguments against de Broglie-Bohm theory can be found at R. F. Streater's "Lost Causes in Theoretical Physics" website. Ulrich Mohrhoff [broken link, sorry] also combines the presentation of his position with an interesting rejection of pilot wave theory. These arguments I consider in a different file. Here, I consider the arguments proposed in several articles of Luboš Motl's blog "The reference frame": David Bohm born 90 years ago and Bohmists & segregation of primitive and contextual observables, Anti-quantum zeal and in off-topic responses of "Nonsense of the day: click the ball to change its color". Below, we refer to Luboš Motl simply as lumo (his nick in his blog).
Another argument (also with lumo's participation), related to Lorentz-invariance, I have considered at another place.
If you know other interesting pages critical of de Broglie - Bohm pilot wave theory, Nelsonian stochastics, non-local hidden variable theories in general, as well as ether theories, please tell me about them.
The most important thing: Measurement theory
The most important part of physics are, of course, experiments. Moreover, this is also the point where lumo is simply wrong, so it is worth to start with it.:
... it is not true that the de Broglie-Bohm theory gives the same predictions in general. It can be arranged to do so in the case of one spinless particle. But in the real quantum theories we find relevant today, such as quantum field theory, de Broglie-Bohm theory cannot be constructed to match probabilistic QFT exactly, and one can see that its very framework contradicts observable facts.
At another place, we find some hint where his misunderstanding is located:
Your equations about \(X\) are completely irrelevant for the measurement of the spin. The problem is not when one wants to measure \(X\). Indeed, the measurement of \(X\) might occur analogously to its measurement in the spinless case. The problem occurs when one actually wants to measure the spin itself.
The projection of the spin \(j_z\) is an observable that can have two values, in the spin \(1/2\) case, either \(+1/2\) or \(-1/2\). It is a basic and completely well-established feature of QM that one of these values must be measured if we measure it.
How is your 17th century deterministic theory supposed to predict this discrete value? Like with \(X\), it must already have a classical value for this quantity. Except that in this case, it has to be discrete, so it can't be described by any continuous equation. ...
Preemptively: you might also argue that any actual measurement of the spin reduces to a measurement of \(X\). But it's not true. I can design gadgets that either absorb or not absorb the electron depending on its \(j_z\). So they measure \(j_z\) directly. deBB theories of all kinds will inevitably fail, not being able to predict that with some probability, the electron is absorbed, and with others, they're not. This has nothing to do with \(X\) or some driving ways. It is about the probability of having the spin itself.
The last paragraph gives the hint: lumo has interpreted the claim that all measurements reduce to position measurements as "all measurements of the electron reduce to position measurements of the electron". If that would be true, I would concede that lumo's polemics against pilot wave theorists are justified. This was, by the way, the state of the art before Bohm's measurement theory appeared 1952. Thus, lumo's arguments illustrate in a nice way why de Broglie had given up pilot wave theory.
Once the question has been asked how the 17th century deterministic theory manages to predict discrete values, let's explain this story. As a 17th century theory, with real aristocratic origin, it leaves the hard work to servants (quantum operators), reserving for itself the final (and most important) decisions ;-).
First, there is some interaction of the wave function of the electron with the wave function of the measurement device. (There is of course also an equation for the position of the electron \(q_{el}\) – the \(X\) in lumo's text – but it is completely irrelevant, not only at this stage, but in the whole process.) The result of the measurement is, as usual, a wave function of type\[
|\psi\rangle = \alpha_1|{\rm up}\rangle|q_1\rangle + \alpha_2|{\rm down}\rangle|q_2\rangle
\] This exploitation of standard QT is not enough – now decoherence will be exploited in an equally shameless way. We leave it to decoherence considerations to decide which observables of the measurement device become amplified or macroscopic. Assume the quantum states \(|q_1\rangle, |q_2\rangle\) are decoherence-preferred. In this case, decoherence amplifies the microscopic measurement results \(|q_1\rangle, |q_2\rangle\) into classical, macroscopically different states \(|c_1\rangle, |c_2\rangle\). After finishing this hard job, it presents the following state:\[
|\psi\rangle = \alpha_1|{\rm up}\rangle|c_1\rangle + \alpha_2|{\rm down}\rangle|c_2\rangle
\] Now, everything is prepared, it remains to make the really important decision which of the wave packets is the best one ;-). At this moment a hidden variable enters the scene. But, surprise, it is not the hidden variable of the electron \(q_{el}\) (lumo's X), but that of the classical measurement device \(q_c\).
The job of \(q_c\) is not a really hard one. After driving around (no, being driven around by quantum guides) in an almost unpredictable way, it simply takes the wave packet prepared for him by the quantum operators at the point of arrival ;-). In other words, we simply have to put the actual value of \(q_c(t)\) into the full wave function \(|\Psi\rangle\) to obtain the (unnormalized) effective wave function:\[
\psi(q_e) = \Psi(q_e, q_c(t))
\]What we need for this scheme to work as an ideal quantum measurement is not much. We need that the two states of the macroscopic device \(|c_1\rangle, |c_2\rangle\) do not (significantly) overlap as functions of the hidden variable \(q_c\). In this case, whatever the value of \(q_c\), the result \(\psi(q_e)\) will be a unique choice between two effective wave functions, namely between \(|{\rm up}\rangle\) if \(q_c\) is in the support of \(|c_1\rangle\), and \(|{\rm down}\rangle\) otherwise. And we need the quantum equilibrium assumption for \(q_c\) to obtain the probabilities for these two choices as \(|\alpha_1|^2\) resp. \(|\alpha_2|^2\).
Thus, everything works as in quantum theory – Born rule as well as state preparation by measurement (only without any ill-defined wave function collapse or subdivision of the world into a classical and quantum part, or the equally ill-defined "subdivision of the world into systems" used in many worlds or other decoherence-based approaches).
But maybe one of the two assumptions we have used used are wrong? Given Valentini's subquantum H-theorem, together with the numerical results of Valentini and Westman, which show a remarkable relaxation to equilibrium already in the two-dimensional case in a quite short period of time (arXiv:quant-ph/0403034), there is not much hope for observations of non-equilibrium in our universe.
One can, of course, also doubt that macroscopically different states do not have a significant overlap in the hidden variables. Such doubts have been, for example, expressed by Wallace and Struyve for pilot wave field theories. See my paper "Overlaps in pilot wave field theories" at arXiv:0904.0764 about the solution of this problem.
About the zeros of the wave function
There is a second point where experiment is involved, with an easy solution:
How do we know that \(m=l_z/\hbar\) must be an integer? Well, it is because the wave function \(\psi(x,y,z)\) of the m-eigenstates depends on \(\phi\), the longitude (one of the spherical or axial coordinates), via the factor \(\exp(i\cdot m\cdot\phi)\) which must be single-valued. Only in terms of the whole \(\psi\), we have an argument.
However, when you rewrite the complex function \(\psi(r,\theta,\phi)\) in the polar form, as \(R\exp(iS)\), the condition for the single-valuedness of \(\psi\) becomes another condition for the single-valuedness of S up to integer multiples of \(2\pi\). If you write the exponential as \(\exp(iS/\hbar)\), the "action" called S here must be well-defined everywhere up to jumps that are multiples of \(h = 2\pi\hbar\).
That's a nice argument, and, because of this argument, today the original form of de Broglie's "pilot wave theory" is preferred in comparison with the "Bohmian mechanics" version proposed 1952 by Bohm. In pilot wave theory, the pilot wave is really a wave, and you can apply the original argument to show that these observables are quantized. In Bohm's second order version, this is different, and the quantization of certain observables becomes, indeed, problematic. This has been another reason for me (beyond history, see arXiv:quant-ph/0609184) to prefer the name "pilot wave theory" in comparison with "Bohmian mechanics".
More generally, something very singular seems to be happening near the \(R=0\) strings in the Bohmian model of space.
The "model of space" in pilot wave theory is a trivial one, nothing strange happens there if R = 0. The singularity of the velocity at these points is harmless – a simple rotor localized in a string, moreover, there is nothing in the place where velocity becomes undefined.
So even though the Bohmian mechanics stole the Schrödinger equation from quantum mechanics, the superficially innocent step of rewriting it in the polar form was enough to destroy a key consequence of quantum mechanics - the discreteness of many physical observables.
If there would be property rights for equations or functions, one could argue as well that Schrödinger has stolen the wave function from de Broglie's pilot wave theory. Fortunately, such nonsense does not exist in science. But there is a point worth to be mentioned: Without pilot wave theory, there would be no Schrödinger picture, and we would have to use the Heisenberg formalism all the time. And if some Bohm would have found the Schrödinger equation later, it would have been named, as well, an unnecessary superconstruction and banned from physics, for almost the same reasons.
About relativistic symmetry and the preferred frame
Last but not least, there are some claims that pilot wave theories will be unable to recover QFT predictions in the relativistic domain. Unfortunately for his argumentation, the equivalence theorem remains to be a theorem even in the relativistic domain – nothing used in it has any connection to the particular choice of spacetime symmetry. Thus, if the quantum theory has relativistic symmetry for it's observable predictions, the same holds for the observable predictions of pilot wave theory.
More concretely, it is inconsistent with modern physics in many ways, as we will see.
Special relativity combined with the entanglement experiments is the most obvious example. Bell's theorem proves that if a similar deterministic theory reproduces the high correlations observed in Nature (and predicted by conventional quantum mechanics), namely the correlations that violate the so-called Bell's inequalities, the objects in the theory must actually send physical superluminal signals.
But superluminal signals would look like signals sent backward in time in other inertial frames. It follows that at most one reference frame is able to give us a causal description of reality where causes precede their effects. At the fundamental level, basic rules of special relativity are inevitably violated with such a preferred inertial frame.
I was already afraid that lumo does not even understand that in a preferred frame everything is fine with causality. The introduction was, at least, the highly dramatic one which is typical for such crank cases.
I like the formulation "at most". Sounds as if we would really like to have more reference frames and are, now, very disturbed that at most one preferred frame is available ;-).
You might think that the experiments that have been made to check relativity simply rule out a fundamentally privileged reference frame. Well, the Bohmists still try to wave their hands and argue that they can avoid the contradictions with the verified consequences of relativity.
Who is hand waving here? Lumo might, of course, think that experiments rule out a hidden preferred frame. But it's his job, in this case, to point out which observations rule out such a preferred frame. As long as he fails to do it, I don't even have contradictions with any verified consequence of relativity to wave my hands.
I wonder whether they actually believe that there always exists a preferred reference frame, at least in principle, because such a belief sounds crazy to me (what is the hypothetical preferred slicing near a black hole, for example?).
I'm happy to answer this question: The preferred coordinates are harmonic. Given, additionally, the global CMBR frame, with time after big bang as the time coordinate, this prescription is already unique. For a corresponding theory of gravity, mathematically almost exactly GR on flat background in harmonic gauge, physically with preferred frame and ether interpretation, see my generalization of the Lorentz ether to gravity.
But it is possible to see that one can't get relativistic predictions of a Bohmian framework for all statistically measurable quantities at the same moment, not even in principle. If a theory violates the invariance under boosts "in principle", it is always possible to "amplify" the violation and see it macroscopically, in a statistically significant ensemble. If such a violation existed, we would have already seen it: almost certainly.
I would be interested to learn more about this mystical way to amplify high energy violations of Lorentz symmetry into the low energy domain, without access to the necessary high energies. As far, it is lumo who is waving his hands.
I know that there are some nice observations, which use the extremely large distances light has to travel for some astronomical observations, to obtain boundaries for a frequency dependence of the velocity of light. Some of the boundaries obtained in this and different ways suggest even that these Lorentz-violating effects are absent for distances below Planck length. But Planck length is merely the distance where quantum gravity becomes important. The fundamental distance where our continuous field theories start to fail may be different.
In proper quantum mechanics, locality holds. If one considers a Hamiltonian that respects the Lorentz symmetry - such as a Hamiltonian of a relativistic quantum field theory - the Lorentz symmetry is simply exact and it guarantees that signals never propagate faster than light.
In proper quantum mechanics, one can define the operators that generate the Poincaré group and rigorously derive their expected commutators. Also, it is exactly true that operators in space-like-separated regions exactly commute with each other. This fact is sufficient to show that the outcome of a measurement in spacetime point B is never correlated with a decision made at a space-like-separated spacetime point A.
These facts allow us to say that quantum field theory respects relativity and locality. The actual measurements can never reveal a correlation that would contradict these principles. And it is the actual measurements that decide whether a statement in physics is true or not. Bohmian mechanics is different because these principles are directly violated. You may try to construct your mechanistic model in such a way that it will approximately look like a local relativistic theory but it won't be one. Consequently, you won't be able to use these principles to constrain the possible form of your theory. Moreover, tension with tests of Lorentz invariance may arise at some moment.
First, there is no reason not to use some symmetry principles for one part of the theory which do not hold for another part of it. For example, the symplectic structure in the classical Hamilton formalism has another symmetry group – the group of all canonical transformations – than the whole theory including the Hamiltonian.
Then, to postulate a fundamental Poincare symmetry is, of course, a technically easy way if one wants to obtain a theory with Poincare symmetry. But what is the purpose of a postulated global Poincare symmetry in a situation where the observable symmetry is different, depends on the physics, as in general relativity? Whatever the representation of the \(g_{\mu\nu}(x)\) on the Minkowski background – it will (except for simple conformally trivial cases) have a different light cone almost everywhere. If the Minkowski background lightcone is the smaller one, one has somewhere to violate the background Poincare symmetry. It may be always the other way. But in this case, the axioms of the theory give only restrictions for the background Minkowski light cone, not for the physical light cone. Thus, tensions with the physical Lorentz invariance may arise in the same way, because the theory only looks like one which, in the particular point \(x\), has the Lorentz invariance for the metric \(g_{\mu\nu}(x)\). But really it is a theory with Lorentz invariance for a different metric \(\eta_{\mu\nu}\), with a larger light cone, thus, allows for superluminal information transfer relative to \(g_{\mu\nu}(x)\).
String theory, as far as I understand, obtains gravity as a spin two field on Minkowski background. This requires, as far as I understand, that this problem is solved in string theory. Fine. Means, it is a solvable one.
The contradiction between relativity and semi-viable Bohmian models (that violate Bell's inequalities, and they have to in order not to be ruled out by experiments) is a very profound problem of these models. It can't really be fixed.
Again, nice formulation. Sounds like poor Bohmians have tried hard not to violate Bell's inequalities and finally given up. "Semi-viable" is also a nice word. But the "very profound problem" remains hidden. (A nice place for problems in a hidden variable theory.;-))
Instead, I prefer to follow the weak suggestions one can obtain based on mathematical equivalence proofs. When I construct a pilot wave theory based on a relativistic QFT, it seems really hard to avoid the consequences of this theorem to violate Lorentz invariance. At least, I don't know how to manage this. We obtain a pilot wave theory which does not violate observable relativistic symmetries. Simply because there is an equivalence proof for observables.
Today, we have some more concrete reasons to know that the hidden-variable theories are misguided. Via Bell's theorem, hidden-variable theories would have to be dramatically non-local and the apparent occurrence of nearly exact locality and Lorentz invariance in the world we observe would have to be explained as an infinite collection of shocking coincidences.
I'm impressed by the verbal power of "dramatically nonlocal", even more by the "infinite collection of shocking coincidences". Sounds really impressive. But I would not name a nonlocality, which, because of an equivalence theorem, cannot be used even for information transfer, and can be observed only indirectly, via violations of Bell's inequality, a dramatical one. Instead, it seems to me the most non-dramatical one. As well, I would distinguish the simple and straightforward consequences of an equivalence theorem from an "infinite collection of shocking coincidences". Instead, I would be more surprised if an quantum equilibrium large distance low energy limit would not change anything in the symmetry group of a theory.
Last but not least, the Lorentz group is simply the invariance group of a quite prosaic wave equation, an equation we find almost everywhere in nature. And such, a wave equation (or it's linearization) usually defines also an effective (and in general curved) Lorentz metric, so that the wave equation becomes the harmonic equation of this Lorentz metric. As a consequence, for everything which follows such a wave equation we obtain local Lorentz symmetry. (See arXiv:0711.4416, arXiv:gr-qc/0505065 for overviews.)
To assume that a symmetry, which so often and for very different materials appears as an effective symmetry in condensed matter theory, is fundamental, is a hypothesis which seems quite unnatural for me.
... and the ether ...
The similarity with the luminiferous aether seems manifest. ...
I just don't think that this is a rationally sustainable belief. It's just another repetition of the old story of the luminiferous aether.
About the similarity with the aether I fully agree with lumo ;-)))). But what is irrational in the belief that there is an ether? I would like to hear some details. I would be really interesting to hear which of the beliefs expressed in my ether model for particle physics are not rationally sustainable.
Now, it seems we have finished the claims of empirical inadequacy. It's time to consider the metaphysical arguments.
About signs of the heavens
It is not surprising in any way that the new, Bohmian equation for \(X(t)\) can be written down: it is clearly always possible to rewrite the Schrödinger equation as one real equation for the squared absolute value (probability density) and one for the phase (resembling the classical Hamilton-Jacobi equation). And it is always possible to interpret the first equation as a Liouville equation and derive the equation for \(X(t)\) that it would follow from. There's no "sign of the heavens" here.
I think there are "signs of the heavens" here. First, the guiding equation for the velocity is a nice, simple, and local (in configuration space) equation. The derivation mentioned by lumo could as well lead to a dirty nonlocal one.
Then, the equation for the phase resembles the classical Hamilton-Jacobi equation, and for constant density becomes simply identical with it. Now, the same guiding equation is, as well, part of the classical Hamilton-Jacobi theory – a theory which was in no way related to the conservation law of the first derivation.
Now, Hamilton-Jacobi theory is really beautiful mathematics, it has all properties of "signs of the heavens", even if taken only alone. See arXiv:quant-ph/0210140 for an introduction. That one and the same simple law for velocity gives, on one hand, Hamilton-Jacobi theory in the classical limit, and, on the other hand, a Liouville equation, is, at least for me, a sufficiently strong hint from the mathematical heaven. In many worlds I have not seen any comparable signs of beauty.
And there is, of course, the really beautiful derivation of the whole quantum measurement formalism.
How to distinguish useful improvements from unnecessary superconstructions
The mechanistic models add a new layer of quantities, concepts, and assumptions.
Indeed, every new, more fundamental theory adds a new layer of quantities, concepts, and assumptions. So what?
[Einstein] called the picture an unnecessary superconstruction.
Appeal to authority does not count. And there is no reason to expect that the father of relativity would like a theory which violates his child. But how to distinguish unnecessary superconstructions from interesting more fundamental theories? Above add something to the old theory. But useful more fundamental theories allow to explain something else from the old theory: Some postulates of the old theory can be derived now. So, one has to compare what one has to add with what can be derived now.
This relation is quite nice for pilot wave theory: The new layer is, essentially, the configuration together with a single additional equation – the guiding equation for the configuration. What can be derived from this equation is, instead, the whole measurement theory of quantum mechanics, including the Born rule and the state preparation by measurement. Compared with the Copenhagen interpretation, the additional layer also replaces the "classical part" of this interpretation and removes the collapse from the theory.
These last two points have been a major motivation of other reinterpretations as well. In particular, for many worlds it seems to be the only aim. The interpretation I prefer to name "inconsistent histories" is focussed on this aim too. Thus, two things which have been obtained in pilot wave theory first, have been widely recognized today as important contributions to the foundations of quantum theory. One can object that pilot wave theory does not get rid of the classical part, but even extends it into the quantum domain. This depends on what one considers as problematic with the classical part: If the problem is the imprecision of this notion, the absence of well-defined rules for this part, then it is clearly solved in pilot wave theory. Anyway, pilot wave theory was the first interpretation with completely unitary dynamics for the wave function, without a collapse.
One can perhaps create classical mechanistic models that mimic the internal workings of quantum mechanics in many situations. For example, one can write a computer simulation. But you can't say that the details of such a program or Bohmian picture is justified as soon as you confirm the predictions of conventional quantum mechanics.
There is no necessity to justify every detail. The important point of the pilot wave interpretation is that to explain the observable facts there is no necessity to reject classical logic, realism, or to introduce many worlds, inconsistent histories, correlations without correlata or other quantum strangeness and mysticism. We have at least one simple, realistic, even deterministic, explanation of all observable facts. That's enough to reject quantum mystery. Why should we justify every detail of some particular realistic model? There may be several realistic models compatible with observation. I would expect this anyway, given large distance universality.
The mechanistic models add a new layer of quantities, concepts, and assumptions. They are not unique and they are not inevitable. The similarity with the luminiferous aether seems manifest. If they only reproduce the statistical predictions of quantum mechanics, you could never know which mechanistic model is the right one: it could be a computer simulation written by Oracle for Windows Vista, after all.
But what's the problem with this? Is Nature obliged to work with theories which can be inevitably reconstructed by internal creatures? You could never know? Big problem. Anyway, our theories are only guesses about Nature, and we can never know if they are really true. If you doubt, I recommend to read Popper. (I ignore here, for simplicity, the modern ways to recognize the truth of theories, like counting the number of papers written about them, or getting inspirations about the language in which God wrote the world.)
Moreover, science has developed lot's of criteria which allow to compare theories which do not make different predictions: Internal consistency, simplicity, explanatory power, symmetry, mathematical beauty. Lumo uses such arguments himself, thus, he is aware of their power. They are usually sufficient to rule out most of the competing models. And if there remain a few different theories, all in agreement with observation, this is not problematic at all – it is even useful: It allows to see the difference between the empirically established parts of these theories – these parts will be shared by all viable theories – and the remaining, metaphysical parts, which may be very different in the different theories. Thus, they serve as a useful tool to show the boundaries of what science can tell at a given moment.
For example, today the existence of pilot wave theory shows that almost all of the quantum strangeness, in particular the rejection of realism, "quantum logic", and the esoterics of many worlds, are in no way forced on us by any empirical evidence, but purely metaphysical choices of some particular interpretations.
What are the fundamental beables?
I could make things even harder for the Bohmian framework by looking into quantum field theory. What are the real, "primitive" properties in that case?
In the simplest case of a scalar field, the natural candidate for the "primitive property" or the "beable" is simply the field \(\phi(x)\). This is a very old idea, proposed already by Bohm. But the effective fields of the standard model are also bad candidates for really fundamental beables. They are, last but not least, only effective fields, not fundamental fields. In my opinion, one needs a more fundamental theory to find the true beables.
My proposal for such more fundamental beables can be found in my paper about the cell lattice model arXiv:0908.0591. Even if pilot wave theory is not mentioned at all in this paper, it is quite obvious that the canonical quantization proposal for fermion fields I have made there allows to apply the standard formalism of pilot wave theory to obtain a pilot wave version of this theory.
Problems with spin and with particle ontology in quantum field theories
A large part of lumo's arguments is directed against two particular versions of pilot wave theory – strangely, I don't like them too. The first one is the idea to describe particles with spin using only wave functions of particles with spin, but leaving the configuration without spin. In this case, the wave function is no longer a complex function on configuration space, but a function with values in some higher-dimensional Hilbert space. But, as a consequence, the very nice pilot wave way to obtain the classical limit via the Hamilton-Jacobi theory no longer works, and one would have to use the dirty old way based on wave packets to obtain some classical limit.
There are other examples of such pilot wave theories. First, this trick was used by Bell, who has proposed a pilot-wave-like field theory with beables for fermions, but not for bosons. Now, one can argue that this is already sufficient, and leave the bosons without beables. The reverse situation was a theory from Struyve and Westman for the electromagnetic field. Again, it has been argued that this is sufficient. And, for the purpose to obtain a realistic theory which is able to recover QFT predictions, it is. But I think that such pilot wave theories are sufficient only for one purpose: To be used as a quick and dirty existence proof for realistic theories in situations where some parts of the theory cause problems. For this purpose, they are indeed sufficient, if the part of the theory represented in the beables is large enough to distinguish all macroscopic states – a quite weak requirement. If one doubts that a theory without fermions, or without bosons, is sufficient for this, one should think about renormalization: If we use these incomplete theories to describe one type of the bare fields (for some energy), then all types of the dressed fields already depend on this single type.
The second type of theories I don't want to defend are theories with particle ontology in the domain of field theory. One reason is that semiclassical gravity shows nicely that fields are more fundamental, and the pilot wave beables have to be, of course, fundamental. Then, to handle variable particle numbers is a dirty job. There should be something more beautiful. Particles which pretend for a status of beables should be at least conserved.
Therefore, the parts of the argumentation where lumo attacks particle theories I can leave unanswered. Let's note only that a short look at the particle-based approach to field theory in arXiv:quant-ph/0303156 suggests that lumo's arguments don't hit this target as well. This version introduces stochastic jumps into the theory (showing, by the way, that pilot wave theorists are not preoccupied with determinism). But I can leave the comparison to the reader.
About the "segregation" among observables
Because experiments eventually measure some well-defined quantities, the likes of Bohm think that there must exist preferred observables - and operators - that also exist classically. They are classical to start with, they think. Positions of objects are an important example.
But the quantum mechanical founding fathers have known from the very beginning that this was a misconception. All Hermitean operators acting on a Hilbert space may be identified with some real classical observables and none of them is preferred.
I think it is a misconception to interpret pilot wave theory as preferring some observables. It is not an accident that Bell has even proposed another word, beables, for the configuration space variables in pilot wave theory. In particular, measurements of the beables play no special role at all, nor in the classical limit, nor everywhere else in pilot wave theory. To derive the measurement theory, we don't need them (this would be circular anyway). What we need are the actual values of the beables, not some results of observations. Indeed, let's assume for simplicity we consist of atoms, which are the beables of some simplified pilot wave theory. Then, a theory about our observations does not need anything about our observations of atoms – if we "observe" them at all, then only in a quite indirect way, and most people do not observe atoms at all. Therefore, observations of atoms cannot play any role in an explanation of our everyday observations. Of course, in these explanations atoms have to play a role, at least indirectly – as constituent parts of our brain cells. But these atoms inside our brain cells are nothing we observe, if we observe something in everyday life. Thus, we use only the atoms themself, not the observations of atoms, in such explanations of our observations.
Thus, as observables the beables play no special role – in particular, the theory of their measurements can be derived in the same way, without danger of circularity. In particular, their measurements have to be described by self-adjoint operators or POVMs as those of every other observable too. In this sense, there are no preferred observables in pilot wave theory.
And this construction is actually very unnatural because it picks \(X\) as a preferred observable in whose basis the wave vector should be (artificially) separated into the probability densities and phases
Configurations (I prefer "q" instead of "X", because "X" is associated with usual space, while "q" is associated with configuration space) play indeed a special role. But this is the same special role they play in the Lagrange formalism as well as in Hamilton-Jacobi theory. Above are very beautiful, useful approaches. I don't remember to have heard any objections that the Lagrange formalism is unnatural, because it picks "q" as a preferred observable. Instead, the Lagrange formalism is an extremely important tool in modern physics, in quantum field theory as well as in general relativity. Moreover, this "segregation" is a very natural one: If nothing changes, the configuration remains the same, while the velocities have to be zero. Instead, I have found the symmetry between such different things as position and momentum in the Hamilton equations (and, similarly, in the canonical approach to quantum theory) always strange and unnatural, (even if, because of its symmetry, beautiful).
So why lumo does not fight against segregation in the Lagrange formalism? The segregation is the same, the poor momentum variables are degraded to the role of "derivatives". (Or maybe he does? I have not checked. Anyway, the important role of the Lagrange formalism in modern science, which is based on exactly the same "segregation", is a fact which shows that there is nothing wrong with this particular segregation.)
In order to celebrate the Martin Luther King Jr Day, I will dedicate the rest of the text to a fight against the segregation of observables. :-) So my statement is very modest – that observables can't be segregated into the "real" primitive ones and the "fictitious" contextual ones – a fact that trivially rules out all theories (such as the Bohmian ones) that are forced to do so.
... I guess that you must agree that the "philosophical democracy" between all observables is pleasing and natural.
I see no reason at all to find such a "democracy" pleasing. You can observe a honest guy telling us the truth. As well you can observe how a liar is telling us lies. Above are observable. There may be even more symmetry between them. They may even make the same claims: "I have seen that he has stolen the money". That means, without segregation among observables, without destroying observable symmetry, we have to give them equal status. I don't plan to follow this idea, and will always prefer a segregation between truth and lies, even if this destroys some observable symmetries.
The segregation between contextual and non-contextual observables is less important, but is part of our everyday life as well. You can ask somebody about things he has not decided yet. He will think about them, possibly argue with you, and, maybe, give you an answer. This answer does not exist before you have started to argue with him, it is, therefore, contextual. Arguing with somebody else, he could have made a different decision. (Last but not least, this is one purpose of communication – to modify our decisions, if we hear good arguments to do this.) In other words, this answer will be contextual. But in a different situation, he has already decided about this question, and the answer was already part of reality of his thoughts when you have asked him. In this case, the answer is not contextual. Above answers we observe as results of complex verbal interactions, and they are, in this sense, on equal foot. Nonetheless, a realistic theory about his thoughts has to segregate between them. Without segregation, he should be or almighty, able to think and decide about all imaginable questions before you ask him, or completely dependent, deciding about nothing before you ask him.
In all these cases, the same "formalism" is used to obtain the results – communication in human language. Thus, that the same formalism – that of self-adjoint operators, or, more general, of POVM's – is used to describe the results of interactions in quantum theory is in no way an argument against this particular segregation.
Clearly, some quantities in the real world look more classical than others. But what are the rules of the game that separates them? The Bohmists assume that everything that "smells" like \(X\) or \(P\) is classical while other things are not. ...
Clearly, they want some quantities that often behave classically in classical limits.
Clearly not. The "segregation" in pilot wave theory is between configuration and momentum variables, and it is in no way related with one of them being "more classical". In classical situations, above behave classically, and the same segregation exists in classical theory too, in the Lagrange formalism as well as in Hamilton-Jacobi theory. There is no place in pilot wave theory where one has to care that something in the behaviour of the configuration is "classical": In the classical limit, it follows automatically, from the classical Hamilton-Jacobi equation, that everything behaves classically. For other questions this is simply irrelevant.
It is the many worlds community which is focussed around the classical limit. That's reasonable – they have a very hard job to construct something which at least sounds plausible (at least if one uses words like "contains" for a linear relation between some points in a Hilbert space, talks about "evolution" of branches without defining any evolution law, and applies decoherence techniques without explaining how to obtain the decomposition into systems one needs to apply them).
In order to simplify their imagination, the Bohmists imagined the existence of additional classical objects – the classical positions.
Simplification has, it seems, been removed from the aims of science. Ockham's razor is out, simple theories have to be rejected. The higher the dimension, the better.
But the objects are in no way additional. They have been part of the Copenhagen interpretation: Its classical part contains, in particular, all the measurement results. And Schrödinger's cat proves that a unitary wave function alone is not sufficient, that we need something else. Or some non-unitary collapse, or some particular configuration as in pilot wave theory. Something – be it the collapsed wave function, or some different entity – has to describe the reality we see: or the dead, or the living cat. Many worlds claims something different, but introduces, for this purpose, the "branches" – some sort of collapsed wave functions without collapse, or configurations without a guiding equation, which is claimed to be "contained" in the wave function. (How a decomposition of some vector into a linear combination of others defines a containment relation remains unclear. A concept where a function like \(\psi(q) = 42\) "contains" all possible universes has it's appropriate place in the Hitchhiker's Guide to the Galaxy, not in scientific journals.) The approach named "consistent histories" leaves us with many inconsistent histories, subdivided into families.
Theories with physical collapse need dirty and artificial non-unitary modifications of the Schrödinger equation. The branches of many worlds are, it seems, left today without any equations at all. (A very scientific approach, indeed. Time to rename it into "many words"). Only pilot wave theory gives us a nice, simple, and beautiful equation for this "additional" entity. Moreover, it allows, just for nothing, to derive the whole measurement formalism of quantum theory.
Imagination is completely irrelevant for these questions. I see, of course, no reason to object if a theory allows to simplify our imaginations too. Instead, I would count it as one additional advantage of a theory. But I recognize that this attitude is not shared by other scientists. And there are, indeed, good reasons to prefer theories which are complex and mystical. Imagine you are in a company of nice girls (or boys, whatever you prefer), and they ask you what you are doing. Isn't it much more impressive if you can tell them about curved spacetimes, large dimensions, a strange new quantum realism, or even quantum logic, many worlds and other strange quantum things? Compare this with the poor 17th century scientist, the fighter against any form of mystery, the classical loser in every popular mystery film. The choice is quite obvious.
About history
Louis de Broglie wrote these equations for the position of one particle, David Bohm generalized them to N particles.
Not correct, the configuration space version of pilot wave theory was presented by de Broglie already at the Solvay conference. See de Broglie, L., in “Electrons et Photons: Rapports et Discussions du Cinquieme Conseil de Physique”, ed. J. Bordet, Gauthier-Villars, Paris, 105 (1928), English translation: G. Bacciagaluppi and A. Valentini, Quantum Theory at the Crossroads: Reconsidering the 1927 Solvay Conference”, Cambridge University Press, and arXiv:quant-ph/0609184 (2006)
I think that in analogous cases, we wouldn't be using the name of the "updater" for the final discovery.
After having read something about the history of this theory (I do not care that much about history), I use "pilot wave theory" instead of "Bohmian mechanics". But Bohm has a point too: de Broglie has broken his theory as not viable, being unable to develop the general measurement theory. This has been done by Bohm. Therefore, if I use names, I use now the combination "de Broglie-Bohm".
Of course that I have always known that Bell constructed his inequalities because he wanted to prove exactly the opposite than what he proved at the end. He was unhappy until the end of his life. Bad luck. Nature doesn't care if some people can't abandon their prejudices.
This sounds like lumo thinks that Bell has tried to prove, with his inequalities, that quantum mechanics is wrong. This does not sound very plausible. It is quite clear that he liked Bohmian mechanics, that he has seen it's nonlocality as an argument against it, and tried to remove this argument, by showing that this nonlocality is a necessary property of all hidden variable theories. About his bets before the experiments have been performed, there is the following quote: "In view of the general success of quantum mechanics, it is very hard for me to doubt the outcome of such experiments. However, I would prefer these experiments, in which the crucial concepts are very directly tested, to have been done and the results on record. Moreover, there is always the slim chance of an unexpected result, which would shake the world." (Freire, arXiv:quant-ph/0508180, p.20)
[arguing against "I've read that the Broglie-Bohm theory makes the same predictions that the normal quantum randomness theory makes but the latter was chosen because it was conceived first.":]
Concerning the first point, people can have various theories in the first run. But once they have all possible alternative theories, they can compare them.
Second, it is not true that the probabilistic interpretation was conceived "first". Quite on the contrary. Technically, it's true that de Broglie wrote his pilot wave theory in 1927, one year after Max Born proposed the probabilistic interpretation, but the very idea that the wave connected with the particle was "real" was studied for many years that preceded it. Both de Broglie (1924) and Schrödinger (1925) explicitly believed that the wave was real which is incorrect.
Given that de Broglie has given up pilot wave theory shortly after 1927, unable to find a viable measurement theory for other observables than position, one can say that pilot wave theory appeared in a viable form only 1952, with Bohm's measurement theory. At that time, the Copenhagen interpretation was already well-established (even if the label "Copenhagen interpretation" was coined only later). So there was an advantage of historical accident for the standard interpretation.
In 1952, Bohm wrote down a very straightforward multi-particle generalization of de Broglie's equations and added a very controversial version of "measurement theory". Is it a substantial improvement you expect from 25 years of progress?
Depends on how many people have worked on it during this time. In this case, most of these 25 years nobody has worked on it. In particular, de Broglie himself had broken it, because he was unable to find the "very controversial" measurement theory found later by Bohm. Bohm, who was 1927 only 10 years old, had not worked most of this time in this domain too. Thus, very few man-years have been sufficient to transform a theory broken by it's creator as not viable into a viable theory. I would name this a sufficiently efficient and substantial improvement.
The next important defender of this theory – again almost alone for a long time – was Bell. The results of his work in the foundations of quantum theory are also well-known. Despite their foundational character, they have caused a large experimental activity. Thus, also a quite efficient relation between man-years and results.
(Given that lumo has not understood the main point of Bohm's measurement theory, we can ignore the characterization of this theory as "very controversial").
About decoherence and the classical limit
Moreover, the question which of them will emerge as natural quantities in a classical limit cannot be answered a priori. Which observables like to behave classically? Well, it is those whose eigenstates decohere from each other.
The role of decoherence in the classical limit is largely overexaggerated, see the Hyperion discussion about this (Ballentine, Classicality without Decoherence: A Reply to Schlosshauer, Found Phys (2008) 38: 916-922, DOI 10.1007/s10701-008-9242-0, Schlosshauer, Classicality, the ensemble interpretation, and decoherence: Resolving the Hyperion dispute, Found Phys (2008) 38: 796-803, DOI 10.1007/s10701-008-9237-x, arXiv:quant-ph/0605249, Wiebe and Ballentine, Phys. Rev. A 72:022109, 2005, also arXiv:quant-ph/0503170).
Essentially, you can measure every operator, together with every other, if the accuracy of the common measurement is below the boundaries of the uncertainty relations. And in the classical \(\hbar \to 0\) limit they all like to behave classically.
Everything in this real world is quantum while the classical intuition can only be an approximation, and it is a good approximation only if decoherence is fast enough i.e. if the interference between the different eigenstates is eliminated. If it is so, the quantum probabilities may be imagined to be ordinary classical probabilities and Bell's inequalities are restored.
So if you want to know whether a particular quantity may be imagined to be classical, you need to know how quickly its eigenvectors decohere from each other. And the answer depends on the dynamics. Decoherence is fast if the different eigenvectors are quickly able to leave their distinct fingerprints in the environment with which they must interact.
A nice description of the decoherence paradigm. The little dirty secret of decoherence is that it depends on some decomposition of the world into systems. Such a decomposition can be found, without problems, if we have some classical context as in the Copenhagen interpretation, or some well-defined configuration of the universe as in pilot wave theory, by considering an environment of the actual state of the universe. But without such a background structure you have nothing to start these decoherence considerations. The different systems we see around us – cats, for example – cannot be used for this purpose, at least not if we want to avoid circular reasoning. arXiv:0901.3262 The Hamilton operator, taken alone, is not enough to derive a decoherence-preferred basis uniquely.
Mechanistic models of state-of-the-art quantum theories are not available: it is partly because it's not really possible and it's not natural but it is also partly because the champions of Bohmian mechanics are simply not good enough physicists to be able to study state-of-the-art quantum theories. They're typically people with philosophical preconceptions who simply believe that the world has to respect their rules of "realism" or even "determinism".
I have a quite nice "mechanistic model" for the standard model of particle physics. One which essentially allows to compute the SM gauge group (as a maximal group which fulfills a few simple "mechanistic" axioms). How many more years (and how many more man-years) string theory needs to reach something comparable?
The idea of "philosophical preconceptions" is quite funny. My concept is quite pragmatical: If there is a simple way to do the things, use it. Simplicity is a good thing, independent of the age or the popularity of the particular concept. About determinism I don't care even today, in particular I have certain sympathies for Nelson's stochastics. And I have as well looked at non-realistic interpretations of quantum theory, like the concept I prefer to name "inconsistent histories". But I think there should be really good evidence to justify the rejection of such simple, general, fundamental and beautiful principles like realism. But pilot wave theory would be preferable even without it, simply for the beauty of the guiding equation.
Last but not least, some funny but unimportant polemics
The attempts to return physics to the 17th century deterministic picture of the Universe are archaic traces of bigotry of some people who will simply never be persuaded by any overwhelming evidence – both of experimental and theoretical character – if the evidence contradicts their predetermined beliefs how the world should work.
Well formulated. I like such polemics. Especially replacing the standard 19th century in such flames by 17th century is nice. But there is room for enhancement. In philosophy of science, I follow Popper, who likes to identify the origin of some of his ideas in Ancient Greece. I also prefer the economic system based on ideas of Adam Smith in comparison with much more modern ones developed by Lenin and Mao, so one can identify this sympathy for old ideas as deeply rooted in my personality. Indeed, I think there is nothing wrong with old ideas.
To describe pilot wavers as "predetermined" sounds really nice, but is, unfortunately, wrong. There are, of course, people who follow predetermined ideas. But these are the ideas they have learned in their youth. Where are the proponents of pilot wave ideas supposed to have learned it? What I was teached was quantum theory and Marxism-Leninism, not pilot wave theory and Adam Smith. And I remember, in particular, some uncritical fascination learning von Neumann's proof of impossibility of a classical picture. I have had nor a prejudice for 17th century determinism, nor any of the "bourgeois prejudices" the communists have liked to argue against.
It was not predetermination, but the power of arguments (in particular, of Bell's "speakable and unspeakable in quantum mechanics"), which has persuaded me to switch to pilot wave theory. And an important part of this argumentative power was the simple proof of equivalence between pilot wave theory and quantum theory. There simply is no experimental evidence against pilot wave theory.
And, indeed, the "experimental evidence" presented by lumo was (in his polarizer argument, and similar ones about spins) based on the common error not to take into account the measurement device, or (in his quantization argument) not applicable to de Broglie's version of pilot wave theory. About the theoretical evidence judge yourself.
But the very fact that the Bohmists actually don't work on the cutting-edge physics of spins, fields, quarks, renormalization, dualities, and strings is enough to lead us to a very different conclusion: they're just playing with fundamentally wrong toy models and by keeping their focus on the 1-particle spinless case, they want to hide the fact that their obsolete theory contradicts pretty much everything we know about the real world.
It is always fun to compare the "very facts" of such claims with reality. The one-particle spinless case has never been in the focus of my interest, except if this appears sufficient to show some serious problems of other interpretations ( arXiv:0901.3262, arXiv:0903.4657). The results of my work with spins, fields, and quarks I have already mentioned. And even renormalization is on my todo list, even if some other problems have, yet, higher priority for me.
I'm not sure that naming strings and dualities "cutting-edge physics" is justified. This is clearly a domain of research I leave to lumo – it may have a value as a nice exercise in mathematics, which is an important part of human culture, even if it has nothing to do with physics. Of course, one never knows – results of pure mathematicians, who have been proud of doing things which will never find an application, are applied today in cryptography. It would be a really nice joke if some result found by lumo would find a physical application in some hidden variable ether theory ;-).
Add to Digg this Add to reddit
snail feedback (44) :
reader Luboš Motl said...
Dear Ilja, thanks for this almost professionally constructed reply - with a nice formatting, formulae etc.
Unfortunately, almost no part of the content of this blog entry is correct. ;-)
Perhaps, a valid point is that the "pilot wave theory" is more accurate than "Bohmian mechanics". However, when you said that the original de Broglie theory is preferred to solve the non-single-valuedness problem of mine, I had to laugh out loud because a few paragraphs earlier, you wrote that this theory was abandoned by de Broglie because of another argument of mine, more or less.
Concerning some other points, it's amusing that you say that "decoherence solves everything" because decoherence only works in proper quantum mechanics. The pilot wave theory isn't quantum mechanics and indeed, the very main point of this theory is that it replaces the genuine dynamical quantum mechanism selecting the "preferred observable and bases" - decoherence - by something totally different, namely predetermined observables that also have classical values aside from the pilot wave that guides them.
So if you need decoherence in the pilot wave theory, it won't work and it will become yet another crushing argument against the pilot wave theory because decoherence is incompatible with the actual pilot-wave-based mechanisms that select what will be observed. Do you agree with that?
When you say it's just a "rotor", you don't actually show that the theory gives the right prediction - S is single-valued up to additive integer multiples of 2.pi. You don't show that because you can't - this correct constraint doesn't really follow from the pilot wave theory. Incidentally, the divergent velocity isn't harmless, either. It's experimentally more or less demonstrable that there's nothing special happening near the places where psi=0. In particular, the relativistic corrections don't get any stronger because of these points. In the pilot wave theory, as you admit, the "Bohmian trajectory's" velocity goes to infinity which does indicate that relativity should play an increased role there. But it doesn't.
reader Luboš Motl said...
Second part. I used the term "dramatically nonlocal" because the ability to influence remote regions belongs to the very basic built-in properties of the objects in the pilot wave theory. I mean that there doesn't exist any glimpse of an argument that these effects should be small - so they won't be small unless one tries to fine-tune everything. The pilot wave theory contains classical waves that are functions of several position vectors and the evolution equation directly guides the positions of particles depending on the immediate values of these multilocal objects anywhere in the configuration space. Those guiding waves are affected by other particles, e.g. those freshly created ones if you assume that the theory *is* able to produce new particles, which it's not, so there is a heavily, dramatically, lethally nonlocal action in both directions. The result must look like a completely generic nonlocal evolution, in contrast with all observations of the 20th century physics.
reader and said...
with the risk of sounding like LM 's echo: there is *nothing* valid about "boemian" pilot wave theory...
reader John H Duffield said...
Lubos, well done for offering this guest blog. I'm broadly in agreement with Ilja, and I think you should more closely into this subject and try to set aside your hostility. Einstein reintroduced an aether for GR , the optical Fourier transform is an analogy for wavefunction-wavefunction interaction, see work by Aephraim Steinberg et al and Jeff Lundeen et al re "wavefunction is real", check out Percy Hammond re electromagnetic geometry, look at The Other Meaning of Special Relativity by Robert Close, see , and , to think of the electron as a Dirac's belt standing-wave photon-field structure. Etc etc. There's elements of TQFT and even an underlying "stringiness" to this. Don't dismiss it all because somebody can't get the maths right.
reader Mephisto said...
I personally like Bohm. He was a nice person with great strength of character, a political victim of the McCarthy era. But from what I know about the Bohmian mechanics, I do not believe it to be true
Arguments against it
1) Bohm theorists believe that the quantum wave is real. It is easy in 1 particle case. But if you have N particles, you need a wave function in 3N+1 dimensions. Are these 3N+1 dimensional wave functions also real?
2) Spin. Lumo made the point: "If de Broglie and Bohm claim that a particle should also have a well-defined position and velocity, it should naturally have a well-defined z-projection of spin, too. But once you adopt such an assumption, you clearly break the rotational symmetry. Particles would only have classical projections of spin with respect to the z axis so the z axis is preferred and you can measure its direction, at least in principle, uncovering anisotropy of space. The rotational symmetry of a theory including spinors heavily depends on the probabilistic nature of quantum mechanics. If you give up the equal treatment of position and spin and decide to treat spin differently and give an electron well-defined binary-valued projections of spin with respect to all axes, you will also encounter problems. Bell's inequality will show you very sharply that the required dynamics is completely non-local but you will also have problems with the Lorentz invariance and the precise rules for the evolution of the discrete function of the direction. The probabilistic meaning of the spinorial wave functions is completely essential for us to be able to translate a physical arrangement to any convention, including an arbitrary choice of the z-axis."
Spin needs to be understood within the framework of relativistic quantum field theory. In QFT, every particle species is associated with a quantum field and the quantum field Lorentz transforms in a partiular way - we have spin 0,1,2 and spin 1/2,3/2 etc fields. It turns out that all these fields are related to representations of the Poincare group (Wigner classification). There is a deep connection between the relativistic symmetry of spacetime (Poincare group) and the spin of quantum fields (representations of the group). This connection is imho very elegant and powerful and the Bohmian mechanics is ugly mess in comparison.
3) My personal issue with QM. I agree that wave function is not real. Collapse of the wave function is just a change in our knowledge. Most misunderstandings of quantum theory come from incorrect use of language and the use of vaguely defined concepts like "local reality". Lumo says that in an entangled pair, the particles do not communicate in any way. I agree. It is the only meaningful way how to avoid terrible paradoxes with space-like sepated entangled particles. But I have issues with the following claim "The moon is not there if nobody is looking". Where and how does nature store information about the correlation of the particles (how does nature remember the correlation), if the particles DO NOT EXIST prior to measurement. Only the quantum fields of bubbling probabilities exist before the measurement. If the particle and its spin is created (comes into existence) by the act of measurement at detector A, how does nature know that other particle at detector B should be created in such a way that it is correlated. This in my oppinion seems to invalidate the claim that nothing exists prior to measurement. (position of the Copenhagen school)
reader Mephisto said...
And when discussing someone's theory, it is always best to go the to source
David Bohm - The de Broglie Pilot Wave Theory
reader Luboš Motl said...
Well, if and when the original theory doesn't work, it doesn't help one much to go to the original source.
reader Ilja said...
1.) In dBB, yes. I don't like this too, and I think it is possible to get rid of this, using dBB theory as a starting point. See arXiv:1103.3506
2.) As explained, I also prefer field-theoretic variants.
3.) To reject realism is of course consistent, as consistent as "God moves in mysterious ways". If you accept realism, you have to accept its nonlocal variant, given the violation of Bell's inequality. So if you want causality without causal loops, you need a hidden preferred frame.
reader lucretius said...
Bohm can be called a "victim of the McCarthy era" but he can hardly be called "an innocent victim". I no more sympathetic to communist victims of McCarthy during the Stalin period than I am to the 740 members of the British Union of Fascist who were interned in Britain from 1940 till the end of the war.
I would like to add that I find this discussion fascinating (thank you Lubos) although I don't want at this point in time to take clear sides. I agree, however, that there most people have psychological difficulties with the Copenhagen interpretation and that this is only natural. Unfortunately the Bohmian approach, ,does not seem to me to be significantly better in this respect (although I need to think more about it, when I find the time). Personally I still prefer to think of QM as a computing tool. In this sense the key issue would seem to me to be: does the Bohmian mechanics really enhance computation? It seems unlikely.
As for Mephisto's question about "where the information is stored" - clearly the information about the correlation needs to be "remembered" by Nature. It seems indeed strange that the information about correlation could be "remembered" if the correlated particles do not exist but one could also ask: where are the "laws of nature" themselves stored? I seems to me we can't expect intuitive ideas acquired from our daily experience to apply to these sort of matters.
reader Rehbock said...
Interesting piece. But the premise:
"... I think there should be really good evidence to justify the rejection of such simple, general, fundamental and beautiful principles like realism. "
seems flawed.
Are not experimental outcomes confirming always QM during last 100 years "good evidence", if needed, that nature is not required to share our primitive view of reality. Also, our personal subjective construction(s) of reality does not established realism as any of those glowing adjectives.Why should realism be so fundamental?
reader Ilja said...
Experimental outcomes of QM are in agreement with dBB theory, which is realistic, so are not a problem for realism. And one should, of course, distinguish our particular primitive realistic models from realism, that means, the general hypothesis that such a model (however complicate) exists in principle.
See my home page for some arguments in favour of realism.
One can, in principle, use rigorous positivism, we observe correlations, and have formulas to compute them, that's all, no idea why the formulas work. I don't think it is a good idea.
reader Luboš Motl said...
Dear Ilja, the probabilistic distribution of X for non-relativistic QM models for one or several spinless particles may be "emulated" in this "realistic" dBB pictures but that's far from enough to do physics today and all opinions that the theory agrees with more than that are flawed ideas based on wishful thinking, neverending promises, and lies.
The pilot wave theory can never deal with quantum field theory or any other relativistic theory. It's not just the absence of the Lorentz symmetry. It's also the existence of observables with discrete spectra that appear everywhere and that can't be given dBB "actual value supplements".
Moreover, dBB is inevitably incompatible with the particle production - creation and annihilation of pairs in QFT. This is also easy to see.
dBB also fails to account for the actual macroscopic quantum behavior of large systems, contradicts decoherence, and I am not even discussing the aesthetic flaws that show, to a person with a good physics intuition, that it is just a completely fabricated attempt to deny the important insights that the quantum revolution has made.
reader Ilja said...
About the Wallstrom objection
It does not indicate. The Bohmian trajectory is unobservable, but relativistic symmetry is about observables only. (For Bohmian field theories, which is what is preferable in the relativistic case, it is irrelevant anyway, because it is \(\dot{\phi}\) and not a velocity in space which becomes infinite.)
Again, if one starts with the wave function as being fundamental - as modern dBB or "pilot wave" theory does - this is as unproblematic as in quantum theory.
It becomes problematic only if one goes beyond standard dBB theory and prefers, instead, to consider R and S, or \(R^2\) and v, as fundamental. This is what I prefer. But I also have a way how to solve this problem, see arXiv:1101.5774. This approach also regularizes the infinity of the velocity.
It's ugly? Ok, I think it is a good idea to look for more beautiful interpretations. It seems, the difference is about the criteria for comparison. I think giving up realism is stupid, equivalent to "Nature moves in mysterious ways". With realism and loop-free causality we need a preferred frame. That's my starting point. The next inacceptable thing are infinities. I don't have anything against hidden variables. Ok, its not nice that they are hidden, so let's try to find them, for example by looking where they become very large or infinite - this may be the place where the theory is wrong and they become visible. Symmetry is something very important, but not as important as realism and finiteness. As simple and as symmetric as possible.
reader Ilja said...
The dBB scheme works for arbitrary configuration spaces Q, no need to restrict it to particles. The first example of a relativistic quantum field theory (EM) is already part of Bohm's paper. All what you need for observables with discrete spectra see Bohms original paper or the text here in the blog. The classical limit in dBB theory is much easier.
reader Justin Glick said...
What about having a non-local interaction without a preferred frame? e.g. preserving Einsteinian relativity.
reader Ilja said...
A realistic Einstein-causal theory cannot give the violations of Bell's inequality predicted by quantum theory. So this is rather hopeless.
(I also don't like that "local" is used instead of "Einstein-causal", but this is how "local" is used today.)
reader Justin Glick said...
And if you read what I wrote, I did not say the theory should be local, or "Einstein-causal" as you like to write. I wrote that it should 1. be non-local
2. preserve Einstein causality
Also, by Einstein causality I mean no preferred frame, but also Lorentz invariant.
Before you say this is impossible, let me remind you that before 1905 everybody in the world thought that
1. Inertial frames are equivalent
2. speed of light is frame independent
were incompatible.
reader Ilja said...
Sounds like I misunderstood you, but, whatever, I see no reasonable chance to make realism compatible with preservation of Einstein causality.
reader Ilja said...
What's wrong about a hidden preferred frame?
The most horrible point: It's been really known not to exist since 1887 when its inevitable prediction of the aether wind was falsified by Morley and Michelson. That's the end of the story. A theory without it had to be designed. Einstein shown that the Lorentz invariance was needed for every theory that avoids the pathological because falsified prediction of the aether wind. Sounds like the type of argument from "relativists" who have not even heard about the Lorentz interpretation, which has a hidden preferred frame. So, relativity 101, there have been two interpretations of the Lorentz-Einstein theory, the Minkowski interpretation, without ether but a spacetime, and the Lorentz interpretation, with absolute time, and an ether which distorts rulers and clocks, in such a way that one cannot measure absolute time, and so the preferred frame remains hidden. Above variants predict Lorentz symmetry for all observables, and the same result for Michelson Morley. So, the MMX does not falsify the Lorentz interpretation.
Ok, maybe lumo used a polemical way to point out that the Lorentz interpretation has a problem to explain why the preferred frame is hidden? That would be fine, because this is really an interesting problem. How to solve it? The next failure in lumos answer: Without an infinite amount of fine-tuning, you just can't get it. Really no other way? Just an idea: It is quite typical that the symmetry groups of a fundamental theory and its approximation are different. Fine, lumo even has some nice theory about this: Quite generally, the recipe for "partially valid" symmetries in particle physics goes in the opposite direction. They're preserved at short distances, in the fundamental equations, and broken at long distances where symmetry-breaking mechanisms become important.
Oh, really only in this direction? Ever heard about a lattice theory? The fundamental theory has a discrete symmetry, its large distance approximation, instead, a continuous symmetry group. A nice example is the silicon lattice. It, of course, has some preferred planes. But if one considers its mechanical properties in the large distance and the lowest order, these preferred planes become unobservable and we obtain rotational symmetry. So, the other direction exists too. Approximation means loss of information, and the result of a loss of distinguishing information may be an increase in symmetry.
Let's clarify: These are only simple common sense arguments, appropriate for a blog, to show where lumo's arguments fail. The problem remains: To explain why we have Lorentz symmetry for the observable effects. Fortunately, it has been solved in arXiv:gr-qc/0205035. In this paper, I have derived the Lagrangian of my theory from some simple first principles, and this gives, as a side effect, the Einstein equivalence principle, thus, local Lorentz symmetry. So, yes, there is a problem, but it is not unsolvable, as lumo claims with obviously weak arguments, but already solved.
Another rather trivial example of a higher symmetry obtained by approximation is equilibrium. In the simplest example of global thermodynamic equilibrium we obtain, instead of a lot of inhomogeneous non-equilibrium solutions, only homogeneous equilibrium solutions, thus, we obtain translational symmetry not present in non-equilibrium theory. Something similar happens in dBB theory. We start from a nonlocal theory, and consider quantum equilibrium. And the theory reduces effectively to quantum theory, with quite different symmetries - those of the Hamiltonian. In particular, if the Hamiltonian has the appropriate relativistic symmetry, the predictions about observables will show relativistic symmetry, and it becomes impossible to use the nonlocal fundamental features for information transfer.
reader and said...
Sorry, but No! I tried again but I still think LM is right... You just cannot change the place of the "hidden variable" and think you cured everything. You remain with the same problem explained for the spin 1/2 electron...
reader Ilja said...
Sorry, please explain, I don't understand your point.
Do you mean the choice of the beable, particles vs. fields? This changes a lot because you don't have to handle particle creation.
Do you think about how to handle Dirac particles in dBB field theory? A completely different and nontrivial question, there are various ideas about this. My own approach gives only pairs of Dirac fermions together with a massive scalar field, to be interpreted as electroweak doublet together with some dark matter. See arXiv:0908.0591 for how to reduce this to a simple scalar field with strange potential. How to handle a scalar field in dBB is well-known and simple.
reader Justin Glick said...
OK, let me clarify. I agree with what you just wrote. You can't have traditional notions of causality. But, relativity doesn't say that we can't modify notions of causality. It only says C is constant in all frames, and inertial frames are equivalent. Now, if we modify our traditional ideas about causality, then we can save Einsteinian relativity and also preserve realism. No preferred frame is necessary.
reader anna v said...
As an experimentalist , in particle physics, I hope I am a student of reality. Quantum mechanics is a beautiful self contained mathematical framework that works for all the known data and at the same time an intuition can be developed about how nature behaves in the microcosm, which helps in looking for new unexpected effects.
For an experimentalist, a new microcosm mathematical framework which gives the exact same measurable predictions is not interesting or relevant to reality, it is a mathematical game. Are there any predictions of this new mathematical framework, supposing that all of Lumo's objections are met, which diverge from the predictions of the standard QM mathematical framework? Is there an experiment that can show it up?
If not the adjective "real" cannot really be applied to mathematics, except if one is talking about the form of written formulae . In my books, "real" in physics means "measurable".
reader Mephisto said...
It sounds kind of boring to be an experimentalist - no matter how it works, why it works, what it means, I am happy if I can feed it with numbers and get predictions for my experiments. Fortunately not all experimentalists have this attitute. I read a book from Zeilinger (Einsteins Schleier). He is an experimentalist and he is interested in what it means. The quest for meaning of quantum mechanics was probably his driving motive for his career choice and his work.
There are various formulations and various interpretations of QM and every formulations gives you an unique perspective. Through every formulation you understand the underlying theory better. I remember Feynman talking about the same thing in one of his lectures (The Character of Physical Law)
Quantum mechanics is very interesting for philosophers. The questions of reality, ontology, knowledge etc. were always the traditional domain of philosophy and QM can tell us much about these things. Unfortunately, not many philosophers understand QM, since to understand it, you have spend years studying physics.
reader anna v said...
The various formulations of QM are all within the same framework/postulates. This proposal adds another level/ a meta level of complexity without giving any physics results different than the simpler in complexity levels, except philosophical preferences. I am interested in the physics not the philosophy.
reader Ilja said...
Fine. But this is simply a direction of research which I would not follow. I think there are a lot of other people following these directions, while I'm almost alone in the other direction,
Of course lumo is right if he argues that giving up some symmetry is not nice. I argue only that giving up realism or causality is even worse. But the point is not even what is worse, because it is clearly reasonable to look in different directions. I have found a quite nice one, with no competition, because to research in this direction is anathema. Quite comfortable, if one does not need a job. Interesting problems with reasonably simple solutions abound, because nobody looks for them.
reader lucretius said...
“Every formulation offers a unique perspective” sounds like a truism. Of course for philosophers such truisms can be interesting, especially if you agree with Wittgenstein that “philosophy leaves everything as it is”. But physics does not leave everything as it is: the point of physics is not to describe the same thing again in a new way but to discover new phenomena, explain things that have not been previously explained, suggest new experiments etc.
If you have two formulations of a theory that are formally equivalent (in the sense that they can be used to derive the same mathematical formulas in all areas of applicability of the theory) they may still differ in their convenience and effectiveness. Things that take simple form in one formulation may become complex and convoluted in the other. I think this is much more important to a physicist than the purely psychological comfort of being able to retain “realism”.
Philosophers, who generally don’t compute things or apply mathematics to resolve confusing physical puzzles (like the recent discussion of black-hole “firewall”) have different priorities but for most physicists the key issue should be: how good is bohmian mechanics compared with standard Copenhagen approach as a computational tool? Even if Lubos’s objections can be overcome, the record suggest that very few new phenomena have been discovered by means of bohemian mechanics and most of the work done within this formalism is “parasitic” on the standard QM formalism. The only area in which this is not true is, I think, quantum chemistry. It would be interesting to hear some suggest some explanation of this fact.
reader Luboš Motl said...
Dear Anna, I would personally not endorse the algorithm of theory selection that you propose - it's Occam's razor ad absurdum.
Of course that in the development of science, there are often moments in which the newer theory *is* or at least *looks* more complicated than the older one, but it must still be accepted and this necessity becomes more manifest later when further unification or addition of new sectors or applications arrives.
The only legitimate way to rule out a theory in science is falsification - a proof of incompatibility of theory's predictions either with themselves or with the empirical data. The pilot wave theory may be falsified in this way but if it couldn't, your vague philosophical observations of complexity wouldn't be a solid enough proof to abandon the framework.
reader Luboš Motl said...
Dear Ilja, this favorite verb of yours, "giving up", just doesn't belong to science. Your usage of it proves that you are not thinking about these things rationally, scientifically, impartially.
Science is not about "preserving" or "giving up" something. These labels mean nothing else than some bias, an emotional attachment to some belief. Science is about finding the truth about Nature.
Darwinism has to "give up" God, at least some previously believed essential parts of this construct. Heliocentrism has to give up the "natural" (blah blah, propaganda) assumption that the body we inhabit is the center of the Universe. A kinetic theory of heat "gives up" the idea (of phlogiston) that everything we can feel by our skin is a material with a particular atomic composition. And so on, and so on.
But it's right to "give up" these assumptions because they are simply wrong. The case of realism behind foundations of classical/quantum mechanics is *totally* analogous. One must "give up" - without any crying - the assumptions behind classical physics (and the pilot wave theory) because science has demonstrated them to be wrong. If you cry or whine, you're just not an honest scientist.
The real problem (one of very numerous problems) of the de Broglie theory isn't that it "gives up" a symmetry. It's that the theory gives wrong predictions for experiments that show that the symmetry is actually there - in some cases, an absolute contradiction that can't be fixed by any improvement; in other cases, a soft contradiction which means that the pilot wave theory has to be unacceptably fine-tuned or fudged to account for the observations. The first situation is a straight and immediate falsification of the theory; the latter is a gradual disfavoring of the theory that may become arbitrarily strong and urgent.
reader Ilja said...
For me, "possibly measurable in 500 years" means also "real".
The problem of infinite velocities in dBB theory near the zeros of the wave function suggests (if one assumes that there are no infinities in Nature) that there has to be some regularization. As a consequence, in the regularized subquantum theory there would be no point with exactly zero probability. See arXiv:1101.5774.
I think this is a general scheme, and a reason to consider different interpretations. Different interpretations may have different weak points, which suggest modifications (regularizations) of these interpretations, which are, then, already different theories making different predictions. Atomic theory has been, at the start, only an interpretation. Predictions came later.
From this point of view interpretations which propose hidden variables seem especially good ideas, because "hidden" in a normal situation does not mean "without problems". A preferred frame, even if hidden in the Solar system, becomes problematic if one considers solutions with causal loops, but possibly already in more harmless situations. Which? These may be the places where new physics appear.
So my theory of gravity arXiv:gr-qc/0205035 identifies such places as the big bang (replaced by a very rigid inflation with a big bounce, and an additional dark energy term which would shift the expansion toward a'=0) and black holes near the horizon, a place where according to GR nothing strange happens. Now there are discussions about firewalls at the same place.
By the way, I would not name QM (at least in the Copenhagen interpretation) self-contained.
reader Ilja said...
You have forgotten Bell's inequality. Bell was at that time almost the only proponent of dBB, so this suggests good output per man-year.
Technically, the classical limit is much easier in dBB - you don't have to consider wave packets, instead, already in a wide packet (rho close to const) the Bohmian trajectories are almost classical. This may explain why it may be useful in chemistry.
reader Mephisto said...
Looking at things from different perspectives can give you better understanding of the problem, can help you train your intuition and this can help you later to look for future research directions and advance physics.
Why study the Hamilton-Jacobi theory of classical mechanics? It gives you nothing new except of better understanding of classical mechanics. Later it unexpectredly helped Schrödinger to invent his equation
And the same applies to string theory. First various formulations were discovered. Later partly unified into M-theory. By studying the various perspectives (versions) of string theory, you gain a better understanding of the whole, of the structure underlying all of the versions. So even in physics, it always a good idea to study a problem from all available perspectives, because it helps you to understand a problem better and if you understand a problem better, you have a better chance of coming up with new solutions.
reader Ilja said...
Scientists are human beings with human errors and emotions, me too, not a problem as long as other scientists follow other emotions and make different errors. Giving up or preserving some principles are strategies for the search of new theories, and I insist that it is useful if different scientists follow different strategies. Most of them will fail, that's the risk.
Wrong predictions for experiments are not (yet) a problem of an interpretation with an equivalence theorem with QM.
Not having an explanation for an observable symmetry is, without doubt, a serious problem. See my other reply for how I propose to solve it.
How many man-years have been spend for versions of string theory unable to handle fermions? I doubt you think this was wrong. dBB theory is in a better situation now if we count open problems.
reader lucretius said...
I completely agree that it is worth to study a problem or a phenomenon from all available perspectives - I don't think many people would disagree with such a general statement. If (and that is a big if) bohmian mechanics is really capable of offering different (and correct) insights into quantum mechanics than by all means people should study it ( I don't think even Lubos would disagree with this conditional statement). However, I don't think that "preserving reality" alone provides sufficient justification - and that seemed to be a key element of Ilja's original argument.
reader Luboš Motl said...
Dear Ilja, nope, your opinion that a researcher's bias is "compensated" by other emotions of someone else is completely and fundamentally wrong. There is absolutely no reason why the "average emotions" of all the researchers should be close to the truth, why the errors caused by the emotions should "cancel".
The opinion that they cancel is precisely the idiotic meme that e.g Feynman beautifully attacked in his Judging Books By Their Covers:
Search for Emperor of China's nose.
A string theory without fermions was never argued to be a right description of phenomena that obviously do contain fermions - it would indeed be as preposterous as what you're doing.
reader Mephisto said...
I haven't studied the Bohmian mechanics enough to be able to make strict judgements.
From what I gathered, in the interpretation of quantum mechanics, we either need to give up locality or reality. Some interpretations give up reality (copenhagen), some locality (Bohm), some both. I personaly believe, that it is probably necessary to modify the concept of reality. The Bohmian mechanics is very non-local (the quantum potential spreads instantly ftl). But these FTL influences lead to time paradoxes. The preference for various interpretations is a problem of psychology - what you find more tolerable to give up.
reader Ilja said...
It is clearly nonsense to compose various contributions. Of course, not. Research directions which fail contribute nothing to the final results. But we don't know in advance which research directions will be successfull (with you as an exception for string theory, of course) and which will fail. If all scientists would follow the same strategy, there would be a much larger possibility that all would fail. If different scientists follow different strategies, most of them will fail, but there will be, with higher probability, some of them who make the correct choice.
The advantage of science is that it has a method to evaluate the final results of the work of different people in very different directions, following different strategies.
Not by nonsensical averaging, counting papers and taxpayer's money spend for them (here string theory wins), but by identifying the single one which was not a complete failure.
reader Ilja said...
No, dBB does not lead to time paradoxes, because it assumes a preferred frame. A hidden one, so no problem with relativistic predictions for observables.
Time paradoxes are a problem of GR, not of quantum theory or dBB.
reader Luboš Motl said...
Dear Ilja, the only problem is that some theories have already failed - theories containing any Lorentz-violating aether failed in 1887, for example.
reader Ilja said...
Correct. So what? That's what I have said - most theories have failed and will fail in future too. Nobody proposes to go back to pre-relativistic ether falsified 1887. What I propose in the direction of ether theory is a generalization of the Lorentz ether to gravity arXiv:gr-qc/0205035 which gives a metric theory of gravity with GR equations in a limit, and an ether model which gives fermions and gauge fields of the SM arXiv:0908.0591. It's something very different from old ether theory, which tried to explain only the EM field. What it shares with the old ether is the preferred frame of Lorentz and the attempt to use condensed matter models to explain the observable fields. I don't see a reason to reject these ideas in general forever only because the old ether has failed to explain the EM field.
reader Rehbock said...
In the cited paper you say "Giving up realism means giving up the search for realistic explanations of observable phenomena."
One can instead accept experimental evidence - 'observable phenomena' beautifully described by QM (leaving aside the aether) as better revealing true reality.
reader Ilja said...
The observable phenomena reveal nothing. Or at least not much. A correlates with B. This explains and reveals nothing. Is A cause of B, or B cause of A, or is there another cause C which causes A and B? This is what is interesting, and this is what is not revealed simply by observation.
reader Justin Glick said...
What if in entanglement, A and B both exert a mutual influence on each other? Then, there would be no causal paradoxes, and a complete symmetry of description which would save relativity.
reader Ilja said...
Feel free to try this way. I would not follow you, I think the preservation of classical causality is the better way. |
32b28f94380becdf | Quantum Field Theory
First published Thu Jun 22, 2006; substantive revision Thu Sep 27, 2012
Quantum Field Theory (QFT) is the mathematical and conceptual framework for contemporary elementary particle physics. In a rather informal sense QFT is the extension of quantum mechanics (QM), dealing with particles, over to fields, i.e. systems with an infinite number of degrees of freedom. (See the entry on quantum mechanics.) In the last few years QFT has become a more widely discussed topic in philosophy of science, with questions ranging from methodology and semantics to ontology. QFT taken seriously in its metaphysical implications seems to give a picture of the world which is at variance with central classical conceptions of particles and fields, and even with some features of QM.
The following sketches how QFT describes fundamental physics and what the status of QFT is among other theories of physics. Since there is a strong emphasis on those aspects of the theory that are particularly important for interpretive inquiries, it does not replace an introduction to QFT as such. One main group of target readers are philosophers who want to get a first impression of some issues that may be of interest for their own work, another target group are physicists who are interested in a philosophical view upon QFT.
1. What is QFT?
In contrast to many other physical theories there is no canonical definition of what QFT is. Instead one can formulate a number of totally different explications, all of which have their merits and limits. One reason for this diversity is the fact that QFT has grown successively in a very complex way. Another reason is that the interpretation of QFT is particularly obscure, so that even the spectrum of options is not clear. Possibly the best and most comprehensive understanding of QFT is gained by dwelling on its relation to other physical theories, foremost with respect to QM, but also with respect to classical electrodynamics, Special Relativity Theory (SRT) and Solid State Physics or more generally Statistical Physics. However, the connection between QFT and these theories is also complex and cannot be neatly described step by step.
If one thinks of QM as the modern theory of one particle (or, perhaps, a very few particles), one can then think of QFT as an extension of QM for analysis of systems with many particles—and therefore with a large number of degrees of freedom. In this respect going from QM to QFT is not inevitable but rather beneficial for pragmatic reasons. However, a general threshold is crossed when it comes to fields, like the electromagnetic field, which are not merely difficult but impossible to deal with in the frame of QM. Thus the transition from QM to QFT allows treatment of both particles and fields within a uniform theoretical framework. (As an aside, focusing on the number of particles, or degrees of freedom respectively, explains why the famous renormalization group methods can be applied in QFT as well as in Statistical Physics. The reason is simply that both disciplines study systems with a large or an infinite number of degrees of freedom, either because one deals with fields, as does QFT, or because one studies the thermodynamic limit, a very useful artifice in Statistical Physics.) Moreover, issues regarding the number of particles under consideration yield yet another reason why we need to extend QM. Neither QM nor its immediate relativistic extension with the Klein-Gordon and Dirac equations can describe systems with a variable number of particles. However, obviously this is essential for a theory that is supposed to describe scattering processes, where particles of one kind are destroyed while others are created.
One gets a very different kind of access to what QFT is when focusing on its relation to QM and SRT. One can say that QFT results from the successful reconciliation of QM and SRT. In order to understand the initial problem one has to realize that QM is not only in a potential conflict with SRT, more exactly: the locality postulate of SRT, because of the famous EPR correlations of entangled quantum systems. There is also a manifest contradiction between QM and SRT on the level of the dynamics. The Schrödinger equation, i.e. the fundamental law for the temporal evolution of the quantum mechanical state function, cannot possibly obey the relativistic requirement that all physical laws of nature be invariant under Lorentz transformations. The Klein-Gordon and Dirac equations, resulting from the search for relativistic analogues of the Schrödinger equation in the 1920s, do respect the requirement of Lorentz invariance. Nevertheless, ultimately they are not satisfactory because they do not permit a description of fields in a principled quantum-mechanical way.
Fortunately, for various phenomena it is legitimate to neglect the postulates of SRT, namely when the relevant velocities are small in relation to the speed of light and when the kinetic energies of the particles are small compared to their mass energies mc2. And this is the reason why non-relativistic QM, although it cannot be the correct theory in the end, has its empirical successes. But it can never be the appropriate framework for electromagnetic phenomena because electrodynamics, which prominently encompasses a description of the behavior of light, is already relativistically invariant and therefore incompatible with QM. Scattering experiments are another context in which QM fails. Since the involved particles are often accelerated almost up to the speed of light, relativistic effects can no longer be neglected. For that reason scattering experiments can only be correctly grasped by QFT.
Unfortunately, the catchy characterization of QFT as the successful merging of QM and SRT has its limits. On the one hand, as already mentioned above, there also is a relativistic QM, with the Klein-Gordon- and the Dirac-equation among their most famous results. On the other hand, and this may come as a surprise, it is possible to formulate a non-relativistic version of QFT (see Bain 2011). The nature of QFT thus cannot simply be that it reconciles QM with the requirement of relativistic invariance. Consequently, for a discriminating criterion it is more appropriate to say that only QFT, and not QM, allows describing systems with an infinite number of degrees of freedom, i.e. fields (and systems in the thermodynamic limit). According to this line of reasoning, QM would be the modern (as opposed to classical) theory of particles and QFT the modern theory of particles and fields. Unfortunately however, and this shall be the last turn, even this gloss is not untarnished. There is a widely discussed no-go theorem by Malament (1996) with the following proposed interpretation: Even the quantum mechanics of one single particle can only be consonant with the locality principle of special relativity theory in the framework of a field theory, such as QFT. Hence ultimately, the characterization of QFT, on the one hand, as the quantum physical description of systems with an infinite number of degrees of freedom, and on the other hand, as the only way of reconciling QM with special relativity theory, are intimately connected with one another.
theory diagram
Figure 1.
The diagram depicts the relations between different theories, where Non-Relativistic Quantum Field Theory is not a historical theory but rather an ex post construction that is illuminating for conceptual purposes. Theoretically, [(i), (ii), (iii)], [(ii), (i), (iii)] and [(ii), (iii), (i)] are three possible ways to get from Classical Mechanics to Relativistic Quantum Field Theory. But note that this is meant as a conceptual decomposition; history didn't go all these steps separately. On the one hand, by good luck, so to say, classical electrodynamics is relativistically invariant already, so that its successful quantization leads directly to Relativistic Quantum Field Theory. On the other hand, some would argue (e.g. Malament 1996) that the only way to reconcile QM and SRT is in terms of a field theory, so that (ii) and (iii) would coincide. Note that the steps (i), (ii) and (iii), i.e. quantization, transition to an infinite number of degrees of freedom, and reconciliation with SRT, are all ontologically relevant. In other words, by these steps the nature of the physical entities the theories talk about may change fundamentally. See Huggett 2003 for an alternative three-dimensional “map of theories”.
Further Reading on QFT and Philosophy of QFT. Mandl and Shaw (2010), Peskin and Schroeder (1995), Weinberg (1995) and Weinberg (1996) are standard textbooks on QFT. Teller (1995) and Auyang (1995) are the first systematic monographs on the philosophy of QFT. The anthologies Brown and Harré (1988), Cao (1999) and Kuhlmann et al. (2002) are anthologies with contributions by physicists and philosophers (of physics), where the last anthology has a focus on ontological issues. The literature on the philosophy of QFT has increased significantly in the last decade. Besides a number of separate papers there are two new monographs, Cao (2010) and Kuhlmann (2010), and one special issue (May 2011) of Studies in History and Philosophy of Modern Physics. Bain (2011), Huggett (2000) and Ruetsche (2002) provide article length discussions on a number of issues in the philosophy of QFT.
See also the following supplementary document:
The History of QFT.
2. The Basic Structure of the Conventional Formulation
2.1 The Lagrangian Formulation of QFT
The crucial step towards quantum field theory is in some respects analogous to the corresponding quantization in quantum mechanics, namely by imposing commutation relations, which leads to operator valued quantum fields. The starting point is the classical Lagrangian formulation of mechanics, which is a so-called analytical formulation as opposed to the standard version of Newtonian mechanics. A generalized notion of momentum (the conjugate or canonical momentum) is defined by setting p = ∂L/∂, where L is the Lagrange function L = TV (T is the kinetic energy and V the potential) and dq/dt. This definition can be motivated by looking at the special case of a Lagrange function with a potential V which depends only on the position so that (using Cartesian coordinates) ∂L/∂ = (∂/∂)(m2/2) = m = px. Under these conditions the generalized momentum coincides with the usual mechanical momentum. In classical Lagrangian field theory one associates with the given field φ a second field, namely the conjugate field
(3.1) π = ∂L/∂φ̇
where L is a Lagrangian density. The field φ and its conjugate field π are the direct analogues of the canonical coordinate q and the generalized (canonical or conjugate) momentum p in classical mechanics of point particles.
In both cases, QM and QFT, requiring that the canonical variables satisfy certain commutation relations implies that the basic quantities become operator valued. From a physical point of view this shift implies a restriction of possible measurement values for physical quantities some (but not all) of which can have their values only in discrete steps now. In QFT the canonical commutation relations for a field φ and the corresponding conjugate field π are
(3.2) [φ(x,t), π(y,t)] = 3(xy)
[φ(x,t), φ(y,t)] = [π(x,t), π(y,t)] = 0
which are equal-time commutation relations, i.e., the commutators always refer to fields at the same time. It is not obvious that the equal-time commutation relations are Lorentz invariant but one can formulate a manifestly covariant form of the canonical commutation relations. If the field to be quantized is not a bosonic field, like the Klein-Gordon field or the electromagnetic field, but a fermionic field, like the Dirac field for electrons one has to use anticommutation relations.
While there are close analogies between quantization in QM and in QFT there are also important differences. Whereas the commutation relations in QM refer to a quantum object with three degrees of freedom, so that one has a set of 15 equations, the commutation relations in QFT do in fact comprise an infinite number of equations, namely for each of the infinitely many space-time 4-tuples (x,t) there is a new set of commutation relations. This infinite number of degrees of freedom embodies the field character of QFT.
It is important to realize that the operator valued field φ(x,t) in QFT is not analogous to the wavefunction ψ(x,t) in QM, i.e., the quantum mechanical state in its position representation. While the wavefunction in QM is acted upon by observables/operators, in QFT it is the (operator valued) field itself which acts on the space of states. In a certain sense the single particle wave functions have been transformed, via their reinterpretation as operator valued quantum fields, into observables. This step is sometimes called ‘second quantization’ because the single particle wave equations in relativistic QM already came about by a quantization procedure, e.g., in the case of the Klein-Gordon equation by replacing position and momentum by the corresponding quantum mechanical operators. Afterwards the solutions to these single particle wave equations, which are states in relativistic QM, are considered as classical fields, which can be subjected to the canonical quantization procedure of QFT. The term ‘second quantization’ has often been criticized partly because it blurs the important fact that the single particle wave function φ in relativistic QM and the operator valued quantum field φ are fundamentally different kinds of entities despite their connection in the context of discovery.
In conclusion, it must be emphasized that both in QM and QFT states and observables are equally important. However, to some extent their roles are switched. While states in QM can have a concrete spatio-temporal meaning in terms of probabilities for position measurements, in QFT states are abstract entities and it is the quantum field operators that seem to allow for a spatio-temporal interpretation. See the section on the field interpretation of QFT for a critical discussion.
2.2 Interaction
Up to this point, the aim was to develop a free field theory. Doing so does not only neglect interaction with other particles (fields), it is even unrealistic for one free particle because it interacts with the field that it generates itself. For the description of interactions—such as scattering in particle colliders—we need certain extensions and modifications of the formalism. The immediate contact between scattering experiments and QFT is given by the scattering or S-matrix which contains all the relevant predictive information about, e.g., scattering cross sections. In order to calculate the S-matrix the interaction Hamiltonian is needed. The Hamiltonian can in turn be derived from the Lagrangian density by means of a Legendre transformation.
In order to discuss interactions one introduces a new representation, the interaction picture, which is an alternative to the Schrödinger and the Heisenberg picture. For the interaction picture one splits up the Hamiltonian, which is the generator of time-translations, into two parts H = H0 + Hint, where H0 describes the free system, i.e., without interaction, and gets absorbed in the definition of the fields and Hint is the interaction part of the Hamiltonian, or short the ‘interaction Hamiltonian’. Using the interaction picture is advantageous because the equations of motion as well as, under certain conditions, the commutation relations are the same for interacting fields as for free fields. Therefore, various results that were established for free fields can still be used in the case of interacting fields. The central instrument for the description of interaction is again the S-matrix, which expresses the connection between in and out states by specifying the transition amplitudes. In QED, for instance, a state |in⟩ describes one particular configuration of electrons, positrons and photons, i.e., it describes how many of these particles there are and which momenta, spins and polarizations they have before the interaction. The S-matrix supplies the probability that this state goes over to a particular |out⟩ state, e.g., that a particular counter responds after the interaction. Such probabilities can be checked in experiments.
The canonical formalism of QFT as introduced in the previous section is only applicable in the case of free fields since the inclusion of interaction leads to infinities (see the historical part). For this reason perturbation theory makes up a large part of most publications on QFT. The importance of perturbative methods is understandable realizing that they establish the immediate contact between theory and experiment. Although the techniques of perturbation theory have become ever more sophisticated it is somewhat disturbing that perturbative methods could not be avoided even in principle. One reason for this unease is that perturbation theory is felt to be rather a matter of (highly sophisticated) craftsmanship than of understanding nature. Accordingly, the corpus of perturbative methods plays a small role in the philosophical investigations of QFT. What does matter, however, is in which sense the consideration of interaction effects the general framework of QFT. An overview about perturbation theory is given in section 4.1 (“Perturbation Theory—Philosophy and Examples”) of Peskin & Schroeder (1995).
2.3 Gauge Invariance
Some theories are distinguished by being gauge invariant, which means that gauge transformations of certain terms do not change any observable quantities. Requiring gauge invariance provides an elegant and systematic way of introducing terms for interacting fields. Moreover, gauge invariance plays an important role in selecting theories. The prime example of an intrinsically gauge invariant theory is electrodynamics. In the potential formulation of Maxwell's equations one introduces the vector potential A and the scalar potential φ, which are linked to the magnetic field B(x,t) and the electric field E(x,t) by
(3.3) B = ∇ × A
E = −(∂A/∂t) − ∇φ
or covariantly
(3.4) Fμν = ∂μAν − ∂νAμ
where Fμν is the electromagnetic field tensor and Aμ = (φ, A) the 4-vector potential. The important point in the present context is that given the identification (3.3), or (3.4), there remains a certain flexibility or freedom in the choice of A and φ, or Aμ. In order to see that, consider the so-called gauge transformations
(3.5) A A − ∇ψ
φ φ + ∂χ/∂t
or covariantly
(3.6) AμAμ + ∂μχ
where χ is a scalar function (of space and time or of space-time) which can be chosen arbitrarily. Inserting the transformed potential(s) into equation(s) (3.3), or (3.4), one can see that the electric field E and the magnetic field B, or covariantly the electromagnetic field tensor Fμν, are not effected by a gauge transformation of the potential(s). Since only the electric field E and the magnetic field B, and quantities constructed from them, are observable, whereas the vector potential itself is not, nothing physical seems to be changed by a gauge transformation because it leaves E and B unaltered. Note that gauge invariance is a kind of symmetry that does not come about by space-time transformations.
In order to link the notion of gauge invariance to the Lagrangian formulation of QFT one needs a more general form of gauge transformations which applies to the field operator φ and which is supplied by
(3.7) φ eiΛφ
φ* eiΛφ*
where Λ is an arbitrary real constant. Equations (3.7) describe a global gauge transformation whereas a local gauge transformation
(3.8) φ(x) eiα(x)φ(x)
varies with x.
It turned out that requiring invariance under local gauge transformations supplies a systematic way for finding the equations describing fundamental interactions. For instance, starting with the Lagrangian for a free electron, the requirement of local gauge invariance can only be fulfilled by introducing additional terms, namely those for the electromagnetic field. Gauge invariance can be captured by certain symmetry groups: U(1) for electromagnetic, SU(2)⊗U(1) for electroweak and SU(3) for strong interaction. This is an important basis for unification programs, as is the analogy to general relativity where a local gauge symmetry is associated with the gravitational field. Moreover, it turned out that only gauge invariant quantum field theories are renormalizable. All this can be taken to show that a mathematically rich theory, with surplus structures, can be very valuable in the construction of theories.
Auyang (1995) emphasizes the general conceptual significance of invariance principles; Redhead (2002) and Martin (2002) focus specifically on gauge symmetries. Healey (2007) and Lyre (2004 and 2012) discuss the ontological significance of gauge theories, among other things concerning the Aharanov-Bohm effect and ontic structural realism.
2.4 Effective Field Theories and Renormalization
In the 1970s a program emerged in which the theories of the standard model of elementary particle physics are considered as effective field theories (EFTs) which have a common quantum field theoretical framework. EFTs describe relevant phenomena only in a certain domain since the Lagrangian contains only those terms that describe particles which are relevant for the respective range of energy. EFTs are inherently approximative and change with the range of energy considered. EFTs are only applicable on a certain energy scale, i.e., they only describe phenomena in a certain range of energy. Influences from higher energy processes contribute to average values but they cannot be described in detail. This procedure has no severe consequences since the details of low-energy theories are largely decoupled from higher energy processes. Both domains are only connected by altered coupling constants and the renormalization group describes how the coupling constants depend on the energy.
The main idea of EFTs is that theories, i.e., in particular the Lagrangians, depend on the energy of the phenomena which are analysed. The physics changes by switching to a different energy scale, e.g., new particles can be created if a certain energy threshold is exceeded. The dependence of theories on the energy scale distinguishes QFT from, e.g., Newton's theory of gravitation where the same law applies to an apple as well as to the moon. Nevertheless, laws from different energy scales are not completely independent of each other. A central aspect of considerations about this dependence are the consequences of higher energy processes on the low-energy scale.
On this background a new attitude towards renormalization developed in the 1970s, which revitalizes earlier ideas that divergences result from neglecting unknown processes of higher energies. Low-energy behavior is thus affected by higher energy processes. Since higher energies correspond to smaller distances this dependence is to be expected from an atomistic point of view. According to the reductionist program the dynamics of constituents on the microlevel should determine processes on the macrolevel, i.e., here the low-energy processes. However, as, for instance hydrodynamics shows, in practice theories from different levels are not quite as closely connected because a law which is applicable on the macrolevel can be largely independent of microlevel details. For this reason analogies with statistical mechanics play an important role in the discussion about EFTs. The basic idea of this new story about renormalization is that the influences of higher energy processes are localizable in a few structural properties which can be captured by an adjustment of parameters. “In this picture, the presence of infinities in quantum field theory is neither a disaster, nor an asset. It is simply a reminder of a practical limitation—we do not know what happens at distances much smaller than those we can look at directly” (Georgi 1989: 456). This new attitude supports the view that renormalization is the appropriate answer to the change of fundamental interactions when the QFT is applied to processes on different energy scales. The price one has to pay is that EFTs are only valid in a limited domain and should be considered as approximations to better theories on higher energy scales. This prompts the important question whether there is a last fundamental theory in this tower of EFTs which supersede each other with rising energies. Some people conjecture that this deeper theory could be a string theory, i.e., a theory which is not a field theory any more. Or should one ultimately expect from physics theories that they are only valid as approximations and in a limited domain? Hartmann (2001) and Castellani (2002) discuss the fate of reductionism vis-à-vis EFTs. Wallace (2011) and Fraser (2011) discuss what the successful application of renormalization methods in quantum statistical mechanics means for their role in QFT, reaching very different conclusions.
3. Beyond the Standard Model
The “standard model of elementary particle physics” is sometimes used almost synonymously with QFT. However, there is a crucial difference. While the standard model is a theory with a fixed ontology (understood in a prephilosophical sense), i.e. three fundamental forces and a certain number of elementary particles, QFT is rather a frame, the applicability of which is open. Thus while quantum chromodynamics (or ‘QED’) is a part of the standard model, it is an instance of a quantum field theory, or short “a quantum field theory” and not a part of QFT. This section deals with only some particularly important proposals that go beyond the standard model, but which do not necessarily break up the basic framework of QFT.
3.1 Quantum Gravity
The standard model of particle physics covers the electromagnetic, the weak and the strong interaction. However, the fourth fundamental force in nature, gravitation, has defied quantization so far. Although numerous attempts have been made in the last 80 years, and in particular very recently, there is no commonly accepted solution up to the present day. One basic problem is that the mass, length and time scales quantum gravity theories are dealing with are so extremely small that it is almost impossible to test the different proposals.
The most important extant versions of quantum gravity theories are canonical quantum gravity, loop theory and string theory. Canonical quantum gravity approaches leave the basic structure of QFT untouched and just extend the realm of QFT by quantizing gravity. Other approaches try to reconcile quantum theory and general relativity theory not by supplementing the reach of QFT but rather by changing QFT itself. String theory, for instance, proposes a completely new view concerning the most fundamental building blocks: It does not merely incorporate gravitation but it formulates a new theory that describes all four interactions in a unified way, namely in terms of strings (see next subsection).
While quantum gravity theories are very complicated and even more remote from classical thinking than QM, SRT and GRT, it is not so difficult to see why gravitation is far more difficult to deal with than the other three forces. Electromagnetic, weak and strong force all act in a given space-time. In contrast, gravitation is, according to GRT, not an interaction that takes place in time, but gravitational forces are identified with the curvature of space-time itself. Thus quantizing gravitation could amount to quantizing space-time, and it is not at all clear what that could mean. One controversial proposal is to deprive space-time of its fundamental status by showing how it “emerges ” in some non-spatio-temporal theory. The “emergence” of space-time then means that there are certain derived terms in the new theory that have some formal features commonly associated with space-time. See Kiefer (2007) for physical details, Rickles (2008) for an accessible and conceptually reflected introduction to quantum gravity and Wüthrich (2005) for a philosophical evaluation of the alleged need to quantize the gravitational field. Also, see the entry on quantum gravity.
3.2 String Theory
String theory is one of the most promising candidates for bridging the gap between QFT and general relativity theory by supplying a unified theory of all natural forces, including gravitation. The basic idea of string theory is not to take particles as fundamental objects but strings that are very small but extended in one dimension. This assumption has the pivotal consequence that strings interact on an extended distance and not at a point. This difference between string theory and standard QFT is essential because it is the reason why string theory also encompasses the gravitational force which is very difficult to deal with in the framework of QFT.
It is so hard to reconcile gravitation with QFT because the typical length scale of the gravitational force is very small, namely at Planck scale, so that the quantum field theoretical assumption of point-like interaction leads to untreatable infinities. To put it another way, gravitation becomes significant (in particular in comparison to strong interaction) exactly where QFT is most severely endangered by infinite quantities. The extended interaction of strings brings it about that such infinities can be avoided. In contrast to the entities in standard quantum physics strings are not characterized by quantum numbers but only by their geometrical and dynamical properties. Nevertheless, “macroscopically” strings look like quantum particles with quantum numbers. A basic geometrical distinction is the one between open strings, i.e., strings with two ends, and closed strings which are like bracelets. The central dynamical property of strings is their mode of excitation, i.e., how they vibrate.
Reservations about string theory are mostly due to the lack of testability since it seems that there are no empirical consequences which could be tested by the methods which are, at least up to now, available to us. The reason for this “problem” is that the length scale of strings is in the average the same as the one of quantum gravity, namely the Planck length of approximately 10−33 centimeters which lies far beyond the accessibility of feasible particle experiments. But there are also other peculiar features of string theory which might be hard to swallow. One of them is the fact that string theory implies that space-time has 10, 11 or even 26 dimensions. In order to explain the appearance of only four space-time dimensions string theory assumes that the other dimensions are somehow folded away or “compactified” so that they are no longer visible. An intuitive idea can be gained by thinking of a macaroni which is a tube, i.e., a two-dimensional piece of pasta rolled together, but which looks from the distance like a one-dimensional string.
Despite of the problems of string theory, physicists do not abandon this project, partly because many think that, among the numerous alternative proposals for reconciling quantum physics and general relativity theory, string theory is still the best candidate, with “loop quantum gravity” as its strongest rival (see the entry on quantum gravity). Correspondingly, string theory has also received some attention within the philosophy of physics community in recent years. Probably the first philosophical investigation of string theory is Weingard (2001) in Callender & Huggett (2001), an anthology with further related articles. Dawid (2003) (see Other Internet Resources below) argues that string theory has significant consequences for the philosophical debate about realism, namely that it speaks against the plausibility of anti-realistic positions. Also see Dawid (2009). Johansson and Matsubara (2011) assess string theory from various different methodological perspectives, reaching conclusions in disagreement with Dawid (2009). Standard introductory monographs on string theory are Polchinski (2000) and Kaku (1999). Greene (1999) is a very successful popular introduction. An interactive website with a nice elementary introduction is ‘Stringtheory.com’ (see the Other Internet Resources section below).
4. Axiomatic Reformulations of QFT
4.1 Deficiencies of the Conventional Formulation of QFT
From the 1930s onwards the problem of infinities as well as the potentially heuristic status of the Lagrangian formulation of QFT stimulated the search for reformulations in a concise and eventually axiomatic manner. A number of further aspects intensified the unease about the standard formulation of QFT. The first one is that quantities like total charge, total energy or total momentum of a field are unobservable since their measurement would have to take place in the whole universe. Accordingly, quantities which refer to infinitely extended regions of space-time should not appear among the observables of the theory as they do in the standard formulation of QFT. Another problematic feature of standard QFT is the idea that QFT is about field values at points of space-time. The mathematical aspect of the problem is that a field at a point, φ(x), is not an operator in a Hilbert space. The physical counterpart of the problem is that it would require an infinite amount of energy to measure a field at a point of space-time. One way to handle this situation—and one of the starting points for axiomatic reformulations of QFT—is not to consider fields at a point but instead fields which are smeared out in the vicinity of that point using certain functions, so-called test functions. The result is a smeared field φ(f) = φ(x)f(x)dx with supp(f) ⊂ O, where supp(f) is the support of the test function f and O is a bounded open region in Minkowski space-time.
The third important problem for standard QFT which prompted reformulations is the existence of inequivalent representations. In the context of quantum mechanics, Schrödinger, Dirac, Jordan and von Neumann realized that Heisenberg's matrix mechanics and Schrödinger's wave mechanics are just two (unitarily) equivalent representations of the same underlying abstract structure, i.e., an abstract Hilbert space H and linear operators acting on this space. In other words, we are merely dealing with two different ways for representing the same physical reality, and it is possible to switch between these different representations by means of a unitary transformation, i.e. an operation that is analogous to an innocuous rotation of the frame of reference. Representations of some given algebra or group are sets of mathematical objects, like numbers, rotations or more abstract transformations (e.g. differential operators) together with a binary operation (e.g. addition or multiplication) that combines any two elements of the algebra or group, such that the structure of the algebra or group to be represented is preserved. This means that the combination of any two elements in the representation space, say a and b, leads to a third element which corresponds to the element that results when you combine the elements corresponding to a and b in the algebra or group that is represented. In 1931 von Neumann gave a detailed proof (of a conjecture by Stone) that the canonical commutation relations (CCRs) for position coordinates and their conjugate momentum coordinates in configuration space fix the representation of these two sets of operators in Hilbert space up to unitary equivalence (von Neumann's uniqueness theorem). This means that the specification of the purely algebraic CCRs suffices to describe a particular physical system.
In quantum field theory, however, von Neumann's uniqueness theorem looses its validity since here one is dealing with an infinite number of degrees of freedom. Now one is confronted with a multitude of inequivalent irreducible representations of the CCRs and it is not obvious what this means physically and how one should cope with it. Since the troublesome inequivalent representations of the CCRs that arise in QFT are all irreducible their inequivalence is not due to the fact that some are reducible while others are not (a representation is reducible if there is an invariant subrepresentation, i.e. a subset which alone represent the CCRs already). Since inequivalent irreducible representations (short: IIRs) seem to describe different physical states of affairs it is no longer legitimate to simply choose the most convenient representation, just like choosing the most convenient frame of reference. The acuteness of this problem is not immediately clear, since prima facie it is possibly that all but one of the IIRs are physically irrelevant, i.e. mathematical artefacts of a redundant formalism. However, although apparently this applies to most of the available IIRs, it seems that a number of irreducible representations of the CCRs remain that are inequivalent and physically relevant.
4.2 Algebraic Approaches to QFT
According to the algebraic point of view algebras of observables rather than observables themselves in a particular representation should be taken as the basic entities in the mathematical description of quantum physics; thereby avoiding the above-mentioned problems from the outset. In standard QM the algebraic point of view in terms of C*-algebras makes no notable difference to the usual Hilbert space formulation since both formalisms are equivalent. However, in QFT this is no longer the case since the infinite number of degrees of freedom leads to unitarily inequivalent irreducible representations of a C*-algebra. Thus sticking to the usual Hilbert space formulation tacitly implies choosing one particular representation. The notion of C*-algebras, introduced abstractly by Gelfand and Neumark in 1943 and named this way by Segal in 1947, generalizes the notion of the algebra B(H) of all bounded operators on a Hilbert space H, which is also the most important example for a C*-algebra. In fact, it can be shown that any C*-algebra is isomorphic to a (norm-closed, self-adjoint) algebra of bounded operators on a Hilbert space. The boundedness (and self-adjointness) of the operators is the reason why C*-algebras are considered as ideal for representing physical observables. The 'C' indicates that one is dealing with a complex vector space and the '*' refers to the operation that maps an element A of an algebra to its involution (or adjoint) A*, which generalizes the conjugate complex of complex numbers to operators. This involution is needed in order to define the crucial norm property of C*-algebras, which is of central importance for the proof of the above isomorphism claim.
Another point where algebraic formulations are advantageous derives from the fact that two quantum fields are physically equivalent when they generate the same algebras of local observables. Such equivalent quantum field theories belong to the same so-called Borchers class which entails that they lead to the same S-matrix. As Haag (1996) stresses, fields are only an instrument in order to “coordinatize” observables, more precisely: sets of observables, with respect to different finite space-time regions. The choice of a particular field system is to a certain degree conventional, namely as long as it belongs to the same Borchers class. Thus it is more appropriate to consider these algebras, rather than quantum fields, as the fundamental entities in QFT.
A prominent attempt to axiomatise QFT is Wightman's field axiomatics from the early 1950s. Wightman imposed axioms on polynomial algebras P(O) of smeared fields, i.e., sums of products of smeared fields in finite space-time regions O. A crucial point of this approach is replacing the mapping x → φ(x) by OP(O). While the usage of unbounded field operators makes Wightman's approach mathematically cumbersome, Algebraic Quantum Field Theory (AQFT)—arguably the most successful attempt to reformulate QFT axiomatically—employs only bounded operators. AQFT originated in the late 1950s by the work of Haag and quickly advanced in collaboration with Araki and Kastler. AQFT itself exists in two versions, concrete AQFT (Haag-Araki) and abstract AQFT (Haag-Kastler, 1964). The concrete approach uses von Neumann algebras (or W*-algebras), the abstract one C*-algebras. The adjective ‘abstract’ refers to the fact that in this approach the algebras are characterized in an abstract fashion and not by explicitly using operators on a Hilbert space. In standard QFT, the CCRs together with the field equations can be used for the same purpose, i.e., an abstract characterization. One common aim of these axiomatizations of QFT is avoiding the usual approximations of standard QFT. However, trying to do this in a strictly axiomatic way, one only gets ‘reformulations’ which are not as rich as standard QFT. As Haag (1996) concedes, the “algebraic approach […] has given us a frame and a language not a theory”.
4.3 Basic Ideas of AQFT
One of the crucial ideas of AQFT is taking so-called nets of algebras as basic for the mathematical description of a quantum physical system. A decade earlier, Segal (1947) used a single C*-algebra—generated by all bounded operators—and dismissed the availability of inequivalent representations as irrelevant to physics. Against this approach Haag argued that inequivalent representations can be understood physically by realizing that the important physical information in a quantum field theory is not contained in individual algebras but in the net of algebras, i.e. in the mapping OA(O) from finite space-time regions to algebras of local observables. The crucial point is that it is not necessary to specify observables explicitly in order to fix physically meaningful quantities. The very way how algebras of local observables are linked to space-time regions is sufficient to supply observables with physical significance. It is the partition of the algebra Aloc of all local observables into subalgebras which contains physical information about the observables, i.e., it is the net structure of algebras which matters.
Physically the most important notion of AQFT is the principle of locality which has an external as well as an internal aspect. The external aspect is the fact that AQFT considers only observables connected with finite regions of space-time and not global observables like the total charge or the total energy momentum vector which refer to infinite space-time regions. This approach was motivated by the operationalistic view that QFT is a statistical theory about local measurement outcomes with all the experimental information coming from measurements in finite space-time regions. Accordingly everything is expressed in terms of local algebras of observables. The internal aspect of locality is that there is a constraint on the observables of such local algebras: All observables of a local algebra connected with a space-time region O are required to commute with all observables of another algebra which is associated with a space-time region O′ that is space-like separated from O. This principle of (Einstein) causality is the main relativistic ingredient of AQFT.
The basic structure upon which the assumptions or conditions of AQFT are imposed are local observables, i.e., self-adjoint elements in local (non-commutative) von Neumann-algebras, and physical states, which are identified as positive, linear, normalized functionals which map elements of local algebras to real numbers. States can thus be understood as assignments of expectation values to observables. One can group the assumptions of AQFT into relativistic axioms, such as locality and covariance, general physical assumptions, like isotony and spectrum condition, and finally technical assumptions which are closely related to the mathematical formulation.
As a reformulation of QFT, AQFT is expected to reproduce the main phenomena of QFT, in particular properties which are characteristic of it being a field theory, like the existence of antiparticles, internal quantum numbers, the relation of spin and statistics, etc. That this aim could not be achieved on a purely axiomatic basis is partly due to the fact that the connection between the respective key concepts of AQFT and QFT, i.e., observables and quantum fields, is not sufficiently clear. It turned out that the main link between observable algebras and quantum fields are superselection rules , which put restrictions on the set of all observables and allow for classification schemes in terms of permanent or essential properties.
Introductions to AQFT are provided by the monographs Haag (1996) and Horuzhy (1990) as well as the overview articles Haag & Kastler (1964), Roberts (1990) and Buchholz (1998). Streater & Wightman (1964) is an early pioneering monograph on axiomatic QFT. Bratteli & Robinson (1979) emphasize mathematical aspects.
4.4 AQFT and the Philosopher
In recent years, QFT has received a lot of attention in the philosophy of physics. Most philosophers who engage in that debate rest their considerations on AQFT; for instance, see Baker (2009), Baker & Halvorson (2010), Earman & Fraser (2006), Fraser (2008, 2009, 2011), Halvorson & Müger (2007), Kronz & Lupher (2005), Kuhlmann (2010a, 2010b), Lupher (2010), Rédei & Valente (2010) and Ruetsche (2002, 2003, 2006, 2011). While most philosophers of physics who are skeptical about this approach remained largely silent, Wallace (2006, 2011) launched an eloquent attack on the predominance of AQFT for foundational studies about QFT. To be sure, Wallace emphasizes, his critique is not directed against the use of algebraic methods, e.g. when studying inequivalent representations. Rather, he aims at AQFT as a physical theory, regarded as a rival to conventional QFT (CQFT). In his evaluation, viewed from the 21st century, one has to state that CQFT succeeded, while AQFT failed, so that “to be lured away from the Standard Model by [AQFT] is sheer madness” (Wallace 2011:124). So what may justify this drastic conclusion? On the one hand, Wallace points out that, the problem of ultraviolet divergences, which initiated the search for alternative approaches in the 1950s, was eventually solved in CQFT via the renormalization group techniques. On the other hand, AQFT never succeeded in finding realistic interacting quantum field theories in four dimensions (such as QED) that fit into their framework.
Fraser (2009, 2011) is most actively engaged in defending AQFT against Wallace's assault. She argues (2009) that consistency plays a central role in choosing between different formulations of QFT since they do not differ in their respective empirical success and AQFT fares better in this respect. Moreover, Fraser (2011) questions Wallace's crucial point in defense of CQFT, namely that the empirically successful application of renormalization group techniques in QFT removes all doubts about CQFT: The fact that renormalization in condensed matter physics and QFT are formally similar does not license Wallace's claim that there are also physical similarities concerning the freezing out of degrees of freedom at very small length scales. And if that physical analogy cannot be sustained, then the empirical success of renormalization in CQFT leaves the physical reasons for this success in the dark, in contrast to the case of condensed matter physics, where the physical basis for the empirical success of renormalization is intelligible, namely the fact that matter is discrete at atomic length scales. As a consequence, despite of the formal analogy with renormalization in condensed matter physics the empirical success of renormalization in CQFT does not, as Wallace claims, discredit the idea to work with arbitrarily small regions of spacetime, as it is done in AQFT.
Kuhlmann (2010b) also advocates AQFT as the prime object for foundational studies, focusing on ontological considerations. He argues that for matters of ontology AQFT is to be preferred over CQFT because, like ontology itself, AQFT strives for a clear separation of fundamental and derived entities and a parsimonious selection of basic assumptions. CQFT, on the other hand is a grown formalism that is very good for calculations but obscures foundational issues. Moreover, Kuhlmann contends that AQFT and CQFT should not be regarded as rival research programs. Nowadays at the very least, AQFT is not meant to replace CQFT, despite of the “kill it or cure it” slogan (Streater and Wightman 1964: 1, cited by Wallace 2011: 117). AQFT is suited and designed to illuminate the basic structure of QFT, but it is not and never will be the appropriate framework for the working physicist.
5. Philosophical Issues
5.1 Setting the Stage: Candidate Ontologies
Ontology is concerned with the most general features, entities and structures of being. One can pursue ontology in a very general sense or with respect to a particular theory or a particular part or aspect of the world. With respect to the ontology of QFT one is tempted to more or less dismiss ontological inquiries and to adopt the following straightforward view. There are two groups of fundamental fermionic matter constituents, two groups of bosonic force carriers and four (including gravitation) kinds of interactions. As satisfying as this answer might first appear, the ontological questions are, in a sense, not even touched. Saying that, for instance the down quark is a fundamental constituent of our material world is the starting point rather than the end of the (philosophical) search for an ontology of QFT. The main question is what kind of entity, e.g., the down quark is. The answer does not depend on whether we think of down quarks or muon neutrinos since the sought features are much more general than those ones which constitute the difference between down quarks or muon neutrinos. The relevant questions are of a different type. What are particles at all? Can quantum particles be legitimately understood as particles any more, even in the broadest sense, when we take, e.g., their localization properties into account? How can one spell out what a field is and can “quantum fields” in fact be understood as fields? Could it be more appropriate not to think of, e.g., quarks, as the most fundamental entities at all, but rather of properties or processes or events?
5.1.1 The Particle Interpretation
Many of the creators of QFT can be found in one of the two camps regarding the question whether particles or fields should be given priority in understanding QFT. While Dirac, the later Heisenberg, Feynman, and Wheeler opted in favor of particles, Pauli, the early Heisenberg, Tomonaga and Schwinger put fields first (see Landsman 1996). Today, there are a number of arguments which prepare the ground for a proper discussion beyond mere preferences. The Particle Concept
It seems almost impossible to talk about elementary particle physics, or QFT more generally, without thinking of particles which are accelerated and scattered in colliders. Nevertheless, it is this very interpretation which is confronted with the most fully developed counter-arguments. There still is the option to say that our classical concept of a particle is too narrow and that we have to loosen some of its constraints. After all, even in classical corpuscular theories of matter the concept of an (elementary) particle is not as unproblematic as one might expect. For instance, if the whole charge of a particle was contracted to a point, an infinite amount of energy would be stored in this particle since the repulsive forces become infinitely large when two charges with the same sign are brought together. The so-called self energy of a point particle is infinite.
Probably the most immediate trait of particles is their discreteness. Particles are countable or ‘aggregable’ entities in contrast to a liquid or a mass. Obviously this characteristic alone cannot constitute a sufficient condition for being a particle since there are other things which are countable as well without being particles, e.g., money or maxima and minima of the standing wave of a vibrating string. It seems that one also needs individuality, i.e., it must be possible to say that it is this or that particle which has been counted in order to account for the fundamental difference between ups and downs in a wave pattern and particles. Teller (1995) discusses a specific conception of individuality, primitive thisness, as well as other possible features of the particle concept in comparison to classical concepts of fields and waves, as well as in comparison to the concept of field quanta, which is the basis for the interpretation that Teller advocates. A critical discussion of Teller's reasoning can be found in Seibt (2002). Moreover, there is an extensive debate on individuality of quantum objects in quantum mechanical systems of ‘identical particles’. Since this discussion concerns QM in the first place, and not QFT, any further details shall be omitted here. French and Krause (2006) offer a detailed analysis of the historical, philosophical and mathematical aspects of the connection between quantum statistics, identity and individuality. See Dieks and Lubberdink (2011) for a critical assessment of the debate. Also consult the entry on quantum theory: identity and individuality.
There is still another feature which is commonly taken to be pivotal for the particle concept, namely that particles are localizable in space. While it is clear from classical physics already that the requirement of localizability need not refer to point-like localization, we will see that even localizability in an arbitrarily large but still finite region can be a strong condition for quantum particles. Bain (2011) argues that the classical notions of localizability and countability are inappropriate requirements for particles if one is considering a relativistic theory such as QFT.
Eventually, there are some potential ingredients of the particle concept which are explicitly opposed to the corresponding (and therefore opposite) features of the field concept. Whereas it is a core characteristic of a field that it is a system with an infinite number of degrees of freedom, the very opposite holds for particles. A particle can for instance be referred to by the specification of the coordinates x(t) that pertain, e.g., to its center of mass—presupposing impenetrability. A further feature of the particle concept is connected to the last point and again explicitly in opposition to the field concept. In a pure particle ontology the interaction between remote particles can only be understood as an action at a distance. In contrast to that, in a field ontology, or a combined ontology of particles and fields, local action is implemented by mediating fields. Finally, classical particles are massive and impenetrable, again in contrast to (classical) fields. Why QFT Seems to be About Particles
The easiest way to quantize the electromagnetic (or: radiation) field consists of two steps. First, one Fourier analyses the vector potential of the classical field into normal modes (using periodic boundary conditions) corresponding to an infinite but denumerable number of degrees of freedom. Second, since each mode is described independently by a harmonic oscillator equation, one can apply the harmonic oscillator treatment from non-relativistic quantum mechanics to each single mode. The result for the Hamiltonian of the radiation field is
(2.1) Hrad = k r ℏωk ( ar(kar(k) + 1/2 ),
where ar(k) and ar(k) are operators which satisfy the following commutation relations
(2.2) [ar(k), as(k′)] = δrsδkk′
[ar(k), as(k′)] = [ar(k), as(k′)] = 0.
with the index r labeling the polarisation. These commutation relations imply that one is dealing with a bosonic field.
The operators ar(k) and ar(k) have interesting physical interpretations as so-called particle creation and annihilation operators. In order to see this, one has to examine the eigenvalues of the operators
(2.3) Nr(k) = ar(kar(k)
which are the essential parts in Hrad. Due to the commutation relations (2.2) one finds that the eigenvalues of Nr(k) are the integers nr(k) = 0, 1, 2, … and the corresponding eigenfunctions (up to a normalisation factor) are
(2.4) |nr(k)⟩ = [ar(k)]nr(k)|0⟩
where the right hand side means that ar(k) operates nr(k) times on |0⟩, the state vector of the vacuum with no photons present. The interpretation of these results is parallel to the one of the harmonic oscillator. ar(k) is interpreted as the creation operator of a photon with momentum ℏk and energy ℏωk (and a polarisation which depends on r and k). That is, equation (2.4) can be understood in the following way. One ets a state with nr(k) photons of momentum ℏk and energy ℏωk when the creation operator ar(k) operates nr(k) times on the vacuum state |0⟩. Accordingly, Nr(k) is called the number operator and nr(k) the ‘occupation number’ of the mode that is specified by k and r, i.e., this mode is occupied by nr(k) photons. Note that Pauli's exclusion principle is not violated since it only applies to fermions and not to bosons like photons. The corresponding interpretation for the annihilation operator ar(k) is parallel: When it operates on a state with a given number of photons this number is lowered by one.
It is a widespread view that these results complete “the justification for interpreting N(k) as the number operator, and hence for the particle interpretation of the quantized theory” (Ryder 1996: 131). This is a rash judgement, however. For instance, the question of localizability is not even touched while it is certain that this is a pivotal criterion for something to be a particle. All that is established so far is that certain mathematical quantities in the formalism are discrete. However, countability is merely one feature of particles and not at all conclusive evidence for a particle interpretation of QFT yet . It is not clear at this stage whether we are in fact dealing with particles or with fundamentally different objects which only have this one feature of discreteness in common with particles.
Teller (1995) argues that the Fock space or “occupation number” representation does support a particle ontology in terms of field quanta since these can be counted or aggregated, although not numbered. The degree of excitation of a certain mode of the underlying field determines the number of objects, i.e. the particles in the sense of quanta. Labels for individual particles like in the Schrödinger many-particle formalism do not occur any more, which is the crucial deviation from the classical notion of particles. However, despite of this deviation, says Teller, quanta should be regarded as particles: Besides their countability another fact that supports seeing quanta as particles is that they have the same energies as classical particles. Teller has been criticized for drawing unduly far-reaching ontological conclusions from one particular representation, in particular since the Fock space representation cannot be appropriate in general because it is only valid for free particles (see, e.g., Fraser 2008). In order to avoid this problem Bain (2000) proposes an alternative quanta interpretation that rests on the notion of asymptotically free states in scattering theory. For a further discussion of the quanta interpretation see the subsection on inequivalent representations below.
The vacuum state |0⟩ is the energy ground state, i.e., the eigenstate of the energy operator with the lowest eigenvalue. It is a remarkable result in ordinary non-relativistic QM that the ground state energy of e.g., the harmonic oscillator is not zero in contrast to its analogue in classical mechanics. In addition to this, the relativistic vacuum of QFT has the even more striking feature that the expectation values for various quantities do not vanish, which prompts the question what it is that has these values or gives rise to them if the vacuum is taken to be the state with no particles present. If particles were the basic objects of QFT how can it be that there are physical phenomena even if nothing is there according to this very ontology? Eventually, studies of QFT in curved space-time indicate that the existence of a particle number operator might be a contingent property of the flat Minkowski space-time, because Poincaré symmetry is used to pick out a preferred representation of the canonical commutation relations which is equivalent to picking out a preferred vacuum state (see Wald 1994).
Before exploring whether other (potentially) necessary requirements for the applicability of the particle concept are fulfilled let us see what the alternatives are. Proceeding this way makes it easier to evaluate the force of the following arguments in a more balanced manner.
5.1.2 The Field Interpretation
Since various arguments seem to speak against a particle interpretation, the allegedly only alternative, namely a field interpretation, is often taken to be the appropriate ontology of QFT. So let us see what a physical field is and why QFT may be interpreted in this sense. A classical point particle can be described by its position x(t) and its momentum p(t), which change as the time t progresses. So there are six degrees of freedom for the motion of a point particle corresponding to the three coordinates of the particle's position and three more coordinates for its momentum. In the case of a classical field one has an independent value for each single point x in space, where this specification changes as time progresses. The field value φ can be a scalar quantity, like temperature, a vectorial one as for the electromagnetic field, or a tensor, such as the stress tensor for a crystal. A field is therefore specified by a time-dependent mapping from each point of space to a field value φ(x,t). Thus a field is a system with an infinite number of degrees of freedom, which may be restrained by some field equations. Whereas the intuitive notion of a field is that it is something transient and fundamentally different from matter, it can be shown that it is possible to ascribe energy and momentum to a pure field even in the absence of matter. This somewhat surprising fact shows how gradual the distinction between fields and matter can be.
The transition from a classical field theory to a quantum field theory is characterized by the occurrence of operator-valued quantum fields φ̂(x,t), and corresponding conjugate fields, for both of which certain canonical commutation relations hold. Thus there is an obvious formal analogy between classical and quantum fields: in both cases field values are attached to space-time points, where these values are specified by real numbers in the case of classical fields and operators in the case of quantum fields. That is, the mapping x ↦ φ̂(x,t) in QFT is analogous to the classical mapping x ↦ φ(x,t). Due to this formal analogy it appears to be beyond any doubt that QFT is a field theory.
But is a systematic association of certain mathematical terms with all points in space-time really enough to establish a field theory in a proper physical sense? Is it not essential for a physical field theory that some kind of real physical properties are allocated to space-time points? This requirement seems not fulfilled in QFT, however. Teller (1995: ch. 5) argues that the expression quantum field is only justified on a “perverse reading” of the notion of a field, since no definite physical values whatsoever are assigned to space-time points. Instead, quantum field operators represent the whole spectrum of possible values so that they rather have the status of observables (Teller: “determinables”) or general solutions. Only a specific configuration, i.e. an ascription of definite values to the field observables at all points in space, can count as a proper physical field.
There are at least four proposals for a field interpretation of QFT, all of which respect the fact that the operator-valuedness of quantum fields impedes their direct reading as physical fields.
(i) Teller (1995) argues that definite physical quantities emerge when not only the quantum field operators but also the state of the system is taken into account. More specifically, for a given state |ψ⟩ one can calculate the expectation values ⟨ψ|φ(x)|ψ⟩ which yields an ascription of definite physical values to all points x in space and thus a configuration of the operator-valued quantum field that may be seen as a proper physical field. The main problem with proposal (i), and possibly with (ii), too, is that an expectation value is the average value of a whole sequence of measurements, so that it does not qualify as the physical property of any actual single field system, no matter whether this property is a pre-existing (or categorical) value or a propensity (or disposition).
(ii) The vacuum expectation value or VEV interpretation, advocated by Wayne (2002), exploits a theorem by Wightman (1956). According to this reconstruction theorem all the information that is encoded in quantum field operators can be equivalently described by an infinite hierarchy of n-point vacuum expectation values, namely the expectation values of all products of quantum field operators at n (in general different) space-time points, calculated for the vacuum state. Since this collection of vacuum expectation values comprises only definite physical values it qualifies as a proper field configuration, and, Wayne argues, due to Wightman's theorem, so does the equivalent set of quantum field operators. Thus, and this is the upshot of Wayne's argument, an ascription of quantum field operators to all space-time points does by itself constitute a field configuration, namely for the vacuum state, even if this is not the actual state.
But this is also a problem for the VEV interpretation: While it shows nicely that much more information is encoded in the quantum field operators than just unspecifically what could be measured, it still does not yield anything like an actual field configuration. While this last requirement is likely to be too strong in a quantum theoretical context anyway, the next proposal may come at least somewhat closer to it.
(iii) In recent years the term wave functional interpretation has been established as the name for the default field interpretation of QFT. Correspondingly, it is the most widely discussed extant proposal; see, e.g., Huggett (2003), Halvorson and Müger (2007), Baker (2009) and Lupher (2010). In effect, it is not very different from proposal (i), and with further assumptions for (i) even identical. However, proposal (ii) phrases things differently and in a very appealing way. The basic idea is that quantized fields should be interpreted completely analogously to quantized one-particle states, just as both result analogously from imposing canonical commutation relations on the non-operator-valued classical quantities. In the case of a quantum mechanical particle, its state can be described by a wave function ψ(x), which maps positions to probability amplitudes, where |ψ(x)|2 can be interpreted as the probability for the particle to be measured at position x. For a field, the analogue of positions are classical field configurations φ(x), i.e. assignments of field values to points in space. And so, the analogy continues, just as a quantum particle is described by a wave function that maps positions to probabilities (or rather probability amplitudes) for the particle to be measured at x, quantum fields can be understood in terms of wave functionals ψ[φ(x)] that map functions to numbers, namely classical field configurations φ(x) to probability amplitudes, where |ψ[φ(x)]|2 can be interpreted as the probability for a given quantum field system to be found in configuration φ(x) when measured. Thus just as a quantum state in ordinary single-particle QM can be interpreted as a superposition of classical localized particle states, the state of a quantum field system, so says the wave functional approach, can be interpreted as a superposition of classical field configurations. And what superpositions mean depends on one's general interpretation of quantum probabilities (collapse with propensities, Bohmian hidden variables, branching Everettian many-worlds,…). In practice, however, QFT is hardly ever represented in wave functional space because usually there is little interest in measuring field configurations. Rather, one tries to measures ‘particle’ states and therefore works in Fock space.
(iv) For a modification of proposal (iii), indicated in Baker (2009: sec. 5) and explicitly formulated as an alternative interpretation by Lupher (2010), see the end of the section “Non-Localizability Theorems” below.
5.1.3 Ontic Structural Realism
The multitude of problems for particle as well as field interpretations prompted a number of alternative ontological approaches to QFT. Auyang (1995) and Dieks (2002) propose different versions of event ontologies. Seibt (2002) and Hättich (2004) defend process-ontological accounts of QFT, which are scrutinized in Kuhlmann (2002, 2010a: ch. 10). In recent years, however, ontic structural realism (OSR) has become the most fashionable ontological framework for modern physics. While so far the vast majority of studies concentrates on ordinary QM and General Relativity Theory, it seems to be commonly believed among advocates of OSR that their case is even stronger regarding QFT, in light of the paramount significance of symmetry groups (also see below)—hence the name group structural realism (Roberts 2010). Explicit arguments are few and far between, however.
One of the rare arguments in favor of OSR that deal specifically with QFT is due to Kantorovich (2003), who opts for a Platonic version of OSR; a position that is otherwise not very popular among OSRists. Kantorovich argues that directly after the big bang “the world was baryon-free, whereas the symmetry of grand unification existed as an abstract structure” (p. 673). Cao (1997b) points out that the best ontological access to QFT is gained by concentrating on structural properties rather than on any particular category of entities. Cao (2010) advocates a “constructive structural realism” on the basis of a detailed conceptual investigation of the formation of quantum chromodynamics. However, Kuhlmann (2011) shows that Cao's position has little to do with what is usually taken to be ontic structural realism, and that it is not even clear whether it should at least be rated as an epistemic variant of structural realism.
Lyre (2004) argues that the central significance of gauge theories in modern physics supports structural realism, and offers a case study concerning the U(1) gauge symmetry group, which characterizes QED. Recently Lyre (2012) has been advocating an intermediate form of OSR, which he calls “Extended OSR (ExtOSR)”, according to which there are not only relational structural properties but also structurally derived intrinsic properties, namely the invariants of structure: mass, spin, and charge. Lyre claims that only ExtOSR is in a position to account for gauge theories. Moreover, it can make sense of zero-value properties, such as the zero mass of photons. See the Section 4.2 (OSR and Quantum Field Theory) in the SEP entry on structural realism.
5.1.4 Trope Ontology
Kuhlmann (2010a) proposes a Dispositional Trope Ontology (DTO) as the most appropriate ontological reading of the basic structure of QFT, in particular in its algebraic formulation, AQFT. The term ‘trope’ refers to a conception of properties that breaks with tradition by regarding properties as particulars rather than repeatables (or ‘universals’). This new conception of properties permits analyzing objects as pure bundles of properties/tropes without excluding the possibility of having different objects with (qualitatively but not numerically) exactly the same properties. One of Kuhlmann's crucial points is that (A)QFT speaks in favor of a bundle conception of objects because the net structure of observable algebras alone (see section “Basic Ideas of AQFT” above) encodes the fundamental features of a given quantum field theory, e.g. its charge structure.
In the DTO approach, the essential properties/tropes of a trope bundle are then identified with the defining characteristics of a superselection sector, such as different kinds of charges, mass and spin. Since these properties cannot change by any state transition they guarantee the object's identity over time. Superselection sectors are inequivalent irreducible representations of the algebra of all quasi-local observables. While the essential properties/tropes of an object are permanent, its non-essential ones may change. Since we are dealing with quantum physical systems many properties are dispositions (or propensities); hence the name dispositional trope ontology.
A trope bundle is not individuated via spatio-temporal co-localization but because of the particularity of its constitutive tropes. Morganti (2009) also advocates a trope-ontological reading of QFT, which refers directly to the classification scheme of the Standard Model.
5.2 Did Wigner Define the Particle Concept?
Wigner's (1939) famous analysis of the Poincaré group is often assumed to provide a definition of elementary particles. The main idea of Wigner's approach is the supposition that each irreducible (projective) representation of the relevant space-time symmetry group yields the state space of one kind of elementary physical system, where the prime example is an elementary particle which has the more restrictive property of being structureless. The physical justification for linking up irreducible representations with elementary systems is the requirement that “there must be no relativistically invariant distinction between the various states of the system” (Newton & Wigner 1949). In other words the state space of an elementary system shall have no internal structure with respect to relativistic transformations. Put more technically, the state space of an elementary system must not contain any relativistically invariant subspaces, i.e., it must be the state space of an irreducible representation of the relevant invariance group. If the state space of an elementary system had relativistically invariant subspaces then it would be appropriate to associate these subspaces with elementary systems. The requirement that a state space has to be relativistically invariant means that starting from any of its states it must be possible to get to all the other states by superposition of those states which result from relativistic transformations of the state one started with. The main part of Wigner's analysis consists in finding and classifying all the irreducible representations of the Poincaré group. Doing that involves finding relativistically invariant quantities that serve to classify the irreducible representations. Wigner's pioneering identification of types of particles with irreducible unitary representations of the Poincaré group has been exemplary until the present, as it is emphasized, e.g., in Buchholz (1994). For an alternative perspective focusing on “Wigner's legacy” for ontic structural realism see Roberts (2011).
Regarding the question whether Wigner has supplied a definition of particles, one must say that although Wigner has in fact found a highly valuable and fruitful classification of particles, his analysis does not contribute very much to the question what a particle is and whether a given theory can be interpreted in terms of particles. What Wigner has given is rather a conditional answer. If relativistic quantum mechanics can be interpreted in terms of particles then the possible types of particles and their invariant properties can be determined via an analysis of the irreducible unitary representations of the Poincaré group. However, the question whether, and if yes in what sense, at least relativistic quantum mechanics can be interpreted as a particle theory at all is not addressed in Wigner's analysis. For this reason the discussion of the particle interpretation of QFT is not finished with Wigner's analysis as one might be tempted to say. For instance, the pivotal question of the localizability of particle states, to be discussed below, is still open. Moreover, once interactions are included, Wigner's classification is no longer applicable (see Bain 2000). Kuhlmann (2010a: sec. 8.1.2) offers an accessible introduction to Wigner's analysis and discusses its interpretive relevance.
5.3 Non-Localizability Theorems
The observed ‘particle traces’, e.g., on photographic plates of bubble chambers, seem to be a clear indication for the existence of particles. However, the theory which has been built on the basis of these scattering experiments, QFT, turns out to have considerable problems to account for the observed ‘particle trajectories’. Not only are sharp trajectories excluded by Heisenberg's uncertainty relations for position and momentum coordinates, which hold for non-relativistic quantum mechanics already. More advanced examinations in AQFT show that ‘quantum particles’ which behave according to the principles of relativity theory cannot be localized in any bounded region of space-time, no matter how large, a result which excludes even tube-like trajectories. It thus appears to be impossible that our world is composed of particles when we assume that localizability is a necessary ingredient of the particle concept. So far there is no single unquestioned argument against the possibility of a particle interpretation of QFT but the problems are piling up. Reeh & Schlieder, Hegerfeldt, Malament and Redhead all gained mathematical results, or formalized their interpretation, which prove that certain sets of assumptions, which are taken to be essential for the particle concept, lead to contradictions.
The Reeh-Schlieder theorem (1961) is a central result in AQFT. It asserts that acting on the vacuum state Ω with elements of the von Neumann observable algebra R(O) for open space-time region O, one can approximate as closely as one likes any state in Hilbertspace H, in particular one that is very different from the vacuum in some space-like separated region O′. The Reeh-Schlieder theorem is thus exploiting long distance correlations of the vacuum. Or one can express the result by saying that local measurements do not allow for a distinction between an N-particle state and the vacuum state. Redhead's (1995a) take on the Reeh-Schlieder theorem is that local measurements can never decide whether one observes an N-particle state, since a projection operator PΨ which corresponds to an N-particle state Ψ can never be an element of a local algebra R(O). Clifton & Halvorson (2001) discuss what this means for the issue of entanglement. Halvorson (2001) shows that an alternative “Newton-Wigner” localization scheme fails to evade the problem of localization posed by the Reeh-Schlieder theorem.
Malament (1996) formulates a no-go theorem to the effect that a relativistic quantum theory of a fixed number of particles predicts a zero probability for finding a particle in any spatial set, provided four conditions are satisfied, namely concerning translation covariance, energy, localizability and locality. The localizability condition is the essential ingredient of the particle concept: A particle—in contrast to a field—cannot be found in two disjoint spatial sets at the same time. The locality condition is the main relativistic part of Malament's assumptions. It requires that the statistics for measurements in one space-time region must not depend on whether or not a measurement has been performed in a space-like related second space-time region. Malament's proof has the weight of a no-go theorem provided that we accept his four conditions as natural assumptions for a particle interpretation. A relativistic quantum theory of a fixed number of particles, satisfying in particular the localizability and the locality condition, has to assume a world devoid of particles (or at least a world in which particles can never be detected) in order not to contradict itself. Malament's no-go theorem thus seems to show that there is no middle ground between QM and QFT, i.e., no theory which deals with a fixed number of particles (like in QM) and which is relativistic (like QFT) without running into the localizability problem of the no-go theorem. One is forced towards QFT which, as Malament is convinced, can only be understood as a field theory. Nevertheless, whether or not a particle interpretation of QFT is in fact ruled out by Malament's result is a point of debate. At least prima facie Malament's no-go theorem alone cannot supply a final answer since it assumes a fixed number of particles, an assumption that is not valid in the case of QFT.
The results about non-localizability which have been explored above may appear to be not very astonishing in the light of the following facts about ordinary QM: Quantum mechanical wave functions (in position representation) are usually smeared out over all ℜ3, so that everywhere in space there is a non-vanishing probability for finding a particle. This is even the case arbitrarily close after a sharp position measurement due to the instantaneous spreading of wave packets over all space. Note, however, that ordinary QM is non-relativistic. A conflict with SRT would thus not be very surprising although it is not yet clear whether the above-mentioned quantum mechanical phenomena can actually be exploited to allow for superluminal signalling. QFT, on the other side, has been designed to be in accordance with special relativity theory (SRT). The local behavior of phenomena is one of the leading principles upon which the theory was built. This makes non-localizability within the formalism of QFT a much severer problem for a particle interpretation.
Malament's reasoning has come under attack in Fleming & Butterfield (1999) and Busch (1999). Both argue to the effect that there are alternatives to Malament's conclusion. The main line of thought in both criticisms is that Malament's ‘mathematical result’ might just as well be interpreted as evidence that the assumed concept of a sharp localization operator is flawed and has to be modified either by allowing for unsharp localization (Busch 1999) or for so-called “hyperplane dependent localization” (Fleming & Butterfield 1999). In Saunders (1995) a different conclusion from Malament's (as well as from similar) results is drawn. Rather than granting Malament's four conditions and deriving a problem for a particle interpretation Saunders takes Malament's proof as further evidence that one can not hold on to all four conditions. According to Saunders it is the localizability condition which might not be a natural and necessary requirement on second thought. Stressing that “relativity requires the language of events, not of things” Saunders argues that the localizability condition loses its plausibility when it is applied to events: It makes no sense to postulate that the same event can not occur at two disjoint spatial sets at the same time. One can only require for the same kind of event not to occur at both places. For Saunders the particle interpretation as such is not at stake in Malament's argument. The question is rather whether QFT speaks about things at all. Saunders considers Malament's result to give a negative answer to this question. A kind of meta paper on Malament's theorem is Halvorson & Clifton (2002). Various objections to the choice of Malament's assumptions and his conclusion are considered and rebutted. Moreover, Halvorson and Clifton establish two further no-go theorems which preserve Malament's theorem by weakening tacit assumptions and showing that the general conclusion still holds. One thing seems to be clear. Since Malament's ‘mathematical result’ appears to allow for various different conclusions it cannot be taken as conclusive evidence against the tenability of a particle interpretation of QFT and the same applies to Redhead's interpretation of the Reeh-Schlieder theorem. For a more detailed exposition and comparison of the Reeh-Schlieder theorem and Malament's theorem see Kuhlmann (2010a: sec. 8.3).
Does the field interpretation also suffer from problems concerning non-localizability? In the section “Deficiencies of the Conventional Formulation of QFT” we already saw that, strictly speaking, field operators cannot be defined at points but need to be smeared out in the (finite and arbitrarily small) vicinity of points, giving rise to smeared field operators φˆ(f), which represent the weighted average field value in the respective region. This procedure leads to operator-valued distributions instead of operator-valued fields. The lack of field operators at points appears to be analogous to the lack of position operators in QFT, which troubles the particle interpretation. However, for position operators there is no remedy analogous to that for field operators: while even unsharply localized particle positions do not exist in QFT (see Halvorson and Clifton 2002, theorem 2), the existence of smeared field operators demonstrates that there are at least point-like field operators. On this basis Lupher (2010) proposes a “modified field ontology”.
5.4 Inequivalent Representations
The occurrence of inequivalent representations is a grave obstacle for interpreting QFT, which is increasingly rated as the single most important problem, that has no counterpart whatsoever in standard QM. As we saw in the section “Deficiencies of the Conventional Formulation of QFT”, the quantization of a theory with an infinite number of degrees of freedom, such as a field theory, leads to unitarily inequivalent representations (UIR) of the canonical commutation relations. It is highly controversial what the availability of UIRs means. One possible stance is to dismiss them as mathematical artifacts with no physical relevance. Ruetsche (2002) calls this “Hilbert Space Conservatism”. On the one hand, this view fits well to the fact that UIRs are hardly even mentioned in standard textbooks on QFT. On the other hand, this cannot be the last word because UIRs undoubtedly do real work in physics, e.g. in quantum statistical mechanics (see Ruetsche 2003) and in particular when it comes to spontaneous symmetry breaking.
The coexistence of UIRs can be readily understood looking at ferromagnetism (see Ruetsche 2006). At high temperatures the atomic dipoles in ferromagnetic substances fluctuate randomly. Below a certain temperature the atomic dipoles tend to align to each other in some direction. Since the basic laws governing this phenomenon are rotationally symmetrical, no direction is preferred. Thus once the dipoles have “chosen” one particular direction, the symmetry is broken. Since there is a different ground state for each direction of magnetization, one needs different Hilbert spaces—each containing a unique ground state—in order to describe symmetry breaking systems. Correspondingly, one has to employ inequivalent representations.
One important interpretive issue where UIRs play a crucial role is the Unruh effect: a uniformly accelerated observer in a Minkowski vacuum should detect a thermal bath of particles, the so-called Rindler quanta (Unruh 1976, Unruh & Wald 1984). A mere change of the reference frame thus seems to bring particles into being. Since the very existence of the basic entities of an ontology should be invariant under transformations of the referential frame the Unruh effect constitutes a severe challenge to a particle interpretation of QFT. Teller (1995: 110-113) tries to dispel this problem by pointing out that while the Minkowski vacuum has the definite value zero for the Minkowski number operator, the particle number is indefinite for the Rindler number operator, since one has a superposition of Rindler quanta states. This means that there are only propensities for detecting different numbers of Rindler quanta but no actual quanta. However, this move is problematic since it seems to suggest that quantum physical propensities in general don't need to be taken fully for real.
Clifton and Halvorson (2001b) argue, contra Teller, that it is inapproriate to give priority to either the Minkowski or the Rindler perspective. Both are needed for a complete picture. The Minkowski as well as the Rindler representation are true descriptions of the world, namely in terms of objective propensities. Arageorgis, Earman and Ruetsche (2003) argue that Minkowski and Rindler (or Fulling) quantization do not constitute a satisfactory case of physically relevant UIRs. First, there are good reasons to doubt that the Rindler vacuum is a physically realizable state. Second, the authors argue, the unitary inequivalence in question merely stems from the fact that one representation is reducible and the other one irreducible: The restriction of the Minkowski vacuum to a Rindler wedge, i.e. what the Minkowski observer says about the Rindler wedge, leads to a mixed state (a thermodynamic KMS state) and therefore a reducible representation, whereas the Rindler vacuum is a pure state and thus corresponds to an irreducible representation. Therefore, the Unruh effect does not cause distress for the particle interpretation—which the authors see to be fighting a losing battle anyhow—because Rindler quanta are not real and the unitary inequivalence of the representations in question has nothing specific to do with conflicting particle ascriptions.
The occurrence of UIRs is also at the core of an analysis by Fraser (2008). She restricts her analysis to inertial observers but compares the particle notion for free and interacting systems. Fraser argues, first, that the representations for free and interacting systems are unavoidably unitarily inequivalent, and second, that the representation for an interacting system does not have the minimal properties that are needed for any particle interpretation—e.g. Teller's (1995) quanta version—namely the countability condition (quanta are aggregable) and a relativistic energy condition. Note that for Fraser's negative conclusion about the tenability of the particle (or quanta) interpretation for QFT there is no need to assume localizability.
Bain (2000) has a diverging assessment of the fact that only asymptotically free states, i.e. states very long before or after a scattering interaction, have a Fock representation that allows for an interpretation in terms of countable quanta. For Bain, the occurrence of UIRs without a particle (or quanta) interpretation for intervening times, i.e. close to scattering experiments, is irrelevant because the data that are collected from those experiments always refer to systems with negligible interactions. Bain concludes that although the inclusion of interactions does in fact lead to the break-down of the alleged duality of particles and fields it does not undermine the notion of particles (or fields) as such.
Fraser (2008) rates this as an unsuccessful “last ditch” attempt to save a quanta interpretation of QFT because it is ad hoc and can't even show that at least something similar to the free field total number operator exists for finite times, i.e. between the asymptotically free states. Moreover, Fraser (2008) points out that, contrary to what some authors suggest, the main source of the impossibility to interpret interacting systems in terms of particles is not that many-particle states are inappropriately described in the Fock representation if one deals with interacting fields but rather that QFT obeys special relativity theory (also see Earman and Fraser (2006) on Haag's theorem). As Fraser concludes, “[F]or a free system, special relativity and the linear field equation conspire to produce a quanta interpretation.” In his reply Bain (2011) points out that the reason why there is no total number operator in interacting relativistic quantum field theories is that this would require an absolute space-time structure, which in turn is not an appropriate requirement.
Baker (2009) points out that the main arguments against the particle interpretation—concerning non-localizability (e.g. Malament 1996) and failure for interacting systems (Fraser 2008)—may also be directed against the wave functional version of the field interpretation (see field interpretation (iii) above). Mathematically, Baker's crucial point is that wave functional space is unitarily equivalent to Fock space, so that arguments against the particle interpretation that attack the choice of the Fock representation may carry over to the wave functional interpretation. First, a Minkowski and a Rindler observer may also detect different field configurations. Second, if the Fock space representation is not apt to describe interacting systems, then the unitarily equivalent wave functional representation is in no better situation: Interacting fields are unitarily inequivalent to free fields, too.
It is difficult to say how the availability of UIRs should be interpreted in general. Clifton and Halvorson (2001b) propose seeing this as a form of complementarity. Ruetsche (2003) advocates a “Swiss army approach”, according to which the availability of UIRs shows that physical possibilities in different degrees must be included into our ontology. However, both proposals are yet too sketchy and await further elaboration.
5.5 The Role of Symmetries
Symmetries play a central role in QFT. In order to characterize a special symmetry one has to specify transformations T and features that remain unchanged during these transformations: invariants I. Symmetries are thus pairs {T, I}. The basic idea is that the transformations change elements of the mathematical description (the Lagrangians for instance) whereas the empirical content of the theory is unchanged. There are space-time transformations and so-called internal transformations. Whereas space-time symmetries are universal, i. e., they are valid for all interactions, internal symmetries characterize special sorts of interaction (electromagnetic, weak or strong interaction). Symmetry transformations define properties of particles/quantum fields that are conserved if the symmetry is not broken. The invariance of a system defines a conservation law, e.g., if a system is invariant under translations the linear momentum is conserved, if it is invariant under rotation the angular momentum is conserved. Inner transformations, such as gauge transformations, are connected with more abstract properties.
Symmetries are not only defined for Lagrangians but they can also be found in empirical data and phenomenological descriptions. Symmetries can thus bridge the gap between descriptions which are close to empirical results (‘phenomenology’) and the more abstract general theory which is a most important reason for their heuristic force. If a conservation law is found one has some knowledge about the system even if details of the dynamics are unknown. The analysis of many high energy collision experiments led to the assumption of special conservation laws for abstract properties like baryon number or strangeness. Evaluating experiments in this way allowed for a classification of particles. This phenomenological classification was good enough to predict new particles which could be found in the experiments. Free places in the classification could be filled even if the dynamics of the theory (for example the Lagrangian of strong interaction) was yet unknown. As the history of QFT for strong interaction shows, symmetries found in the phenomenological description often lead to valuable constraints for the construction of the dynamical equations. Arguments from group theory played a decisive role in the unification of fundamental interactions. In addition, symmetries bring about substantial technical advantages. For example, by using gauge transformations one can bring the Lagrangian into a form which makes it easy to prove the renormalizability of the theory. See also the entry on symmetry and symmetry breaking.
In many cases symmetries are not only heuristically useful but supply some sort of ‘justification’ by being used in the beginning of a chain of explanation. To a remarkable degree the present theories of elementary particle interactions can be understood by deduction from general principles. Under these principles symmetry requirements play a crucial role in order to determine the Lagrangian. For example, the only Lorentz invariant and gauge invariant renormalizable Lagrangian for photons and electrons is precisely the original Dirac Lagrangian. In this way symmetry arguments acquire an explanatory power and help to minimize the unexplained basic assumptions of a theory. Heisenberg concludes that in order “to find the way to a real understanding of the spectrum of particles it will therefore be necessary to look for the fundamental symmetries and not for the fundamental particles.” (Blum et al. 1995: 507).
Since symmetry operations change the perspective of an observer but not the physics an analysis of the relevant symmetry group can yield very general information about those entities which are unchanged by transformations. Such an invariance under a symmetry group is a necessary (but not sufficient) requirement for something to belong to the ontology of the considered physical theory. Hermann Weyl propagated the idea that objectivity is associated with invariance (see, e.g., his authoritative work Weyl 1952: 132). Auyang (1995) stresses the connection between properties of physically relevant symmetry groups and ontological questions. Kosso argues that symmetries help to separate objective facts from the conventions of descriptions; see his article in Brading & Castellani (2003), an anthology containing numerous further philosophical studies about symmetries in physics.
Symmetries are typical examples of structures that show more continuity in scientific change than assumptions about objects. For that reason structural realists consider structures as “the best candidate for what is ‘true’ about a physical theory” (Redhead 1999: 34). Physical objects such as electrons are then taken to be similar to fiction that should not be taken seriously, in the end. In the epistemic variant of structural realism structure is all we know about nature whereas the objects which are related by structures might exist but they are not accessible to us. For the extreme ontic structural realist there is nothing but structures in the world (Ladyman 1998).
5.6 Taking Stock: Where do we Stand?
A particle interpretation of QFT answers most intuitively what happens in particle scattering experiments and why we seem to detect particle trajectories. Moreover, it would explain most naturally why particle talk appears almost unavoidable. However, the particle interpretation in particular is troubled by numerous serious problems. There are no-go theorems to the effect that, in a relativistic setting, quantum “particle” states cannot be localized in any finite region of space-time no matter how large it is. Besides localizability, another hard core requirement for the particle concept that seems to be violated in QFT is countability. First, many take the Unruh effect to indicate that the particle number is observer or context dependent. And second, interacting quantum field theories cannot be interpreted in terms of particles because their representations are unitarily inequivalent to Fock space (Haag's theorem), which is the only known way to represent countable entities in systems with an infinite number of degrees of freedom.
At first sight the field interpretation seems to be much better off, considering that a field is not a localized entity and that it may vary continuously—so no requirements for localizability and countability. Accordingly, the field interpretation is often taken to be implied by the failure of the particle interpretation. However, on closer scrutiny the field interpretation itself is not above reproach. To begin with, since “quantum fields” are operator valued it is not clear in which sense QFT should be describing physical fields, i.e. as ascribing physical properties to points in space. In order to get determinate physical properties, or even just probabilities, one needs a quantum state. However, since quantum states as such are not spatio-temporally defined, it is questionable whether field values calculated with their help can still be viewed as local properties. The second serious challenge is that the arguably strongest field interpretation—the wave functional version—may be hit by similar problems as the particle interpretation, since wave functional space is unitarily equivalent to Fock space.
The occurrence of unitarily inequivalent representations (UIRs), which first seemed to cause problems specifically for the particle interpretation but which appears to carry over to the field interpretation, may well be a severe obstacle for any ontological interpretation of QFT. However, it is controversial whether the two most prominent examples, namely the Unruh effect and Haag's theorem, really do cause the contended problems in the first place. Thus one of the crucial tasks for the philosophy of QFT is further unmasking the ontological significance of UIRs.
The two remaining contestants approach QFT in a way that breaks more radically with traditional ontologies than any of the proposed particle and field interpretations. Ontic Structural Realism (OSR) takes the paramount significance of symmetry groups to indicate that symmetry structures as such have an ontological primacy over objects. However, since most OSRists are decidedly against Platonism, it is not altogether clear how symmetry structures could be ontologically prior to objects if they only exist in concrete realizations, namely in those objects that exhibit these symmetries.
Dispositional Trope Ontology (DTO) deprives both particles and fields of their fundamental status, and proposes an ontology whose basic elements are properties understood as particulars, called ‘tropes’. One of the advantages of the DTO approach is its great generality concerning the nature of objects which it analyzes as bundles of (partly dispositional) properties/tropes: DTO is flexible enough to encompass both particle and field like features without being committed to either a particle or a field ontology.
In conclusion one has to recall that one reason why the ontological interpretation of QFT is so difficult is the fact that it is exceptionally unclear which parts of the formalism should be taken to represent anything physical in the first place. And it looks as if that problem will persist for quite some time.
• Auyang, S. Y., 1995, How is Quantum Field Theory Possible?, Oxford-New York: Oxford University Press.
• Bain, J., 2000, “Against particle/field duality: Asymptotic particle states and interpolating fields in interacting QFT (or: Who’s afraid of Haag’s theorem?)”, Erkenntnis, 53: 375–406.
• –––, 2011, “Quantum field theories in classical spacetimes and particles”, Studies in History and Philosophy of Modern Physics, 42: 98–106.
• Baker, D. J., 2009, “Against field interpretations of quantum field theory”, British Journal for the Philosophy of Science, 60: 585–609.
• Baker, D.J. and H. Halvorson, 2010, “Antimatter”, British Journal for the Philosophy of Science, 61: 93–121.
• Born, M., with W. Heisenberg, and P. Jordan, 1926, “Zur Quantenmechanik II”, Zeitschr. für Physik 35, 557.
• Brading, K. and E. Castellani (eds.), 2003, Symmetries in Physics: Philosophical Reflections, Cambridge: Cambridge University Press.
• Bratteli, O. and D. W. Robinson, 1979, Operator Algebras and Quantum Statistical Mechanics 1: C* and W*-Algebras, Symmetry Groups, Decomposition of States, New York et al.: Springer
• Brown, H. R. and R. Harré (eds.), 1988, Philosophical Foundations of Quantum Field Theory, Oxford: Clarendon Press.
• Buchholz, D., 1994, “On the manifestations of particles,” in R. N. Sen and A. Gersten, eds., Mathematical Physics Towards the 21st Century, Beer-Sheva: Ben-Gurion University Press.
• –––, 1998, “Current trends in axiomatic qantum field theory,” in P. Breitenlohner and D. Maison, eds, Quantum Field Theory. Proceedings of the Ringberg Workshop 1998, pp. 43-64, Berlin-Heidelberg: Springer.
• Busch, P., 1999, “Unsharp localization and causality in relativistic quantum theory,” Journal of Physics A: Mathematics General, 32: 6535.
• Butterfield, J. and H. Halvorson (eds.), 2004, Quantum Entanglements — Selected Papers — Rob Clifton, Oxford: Oxford University Press.
• Butterfield, J. and C. Pagonis (eds.), 1999, From Physics to Philosophy, Cambridge: Cambridge University Press.
• Callender, C. and N. Huggett (eds.), 2001, Physics Meets Philosophy at the Planck Scale, Cambridge: Cambridge University Press.
• Cao, T. Y., 1997a, Conceptual Developments of 20th Century Field Theories, Cambridge: Cambridge University Press.
• –––, 1997b, “Introduction: Conceptual issues in QFT,” in Cao 1997a, pp. 1-27.
• –––, (ed.), 1999, Conceptual Foundations of Quantum Field Theories, Cambridge: Cambridge University Press.
• –––, 2010, From Current Algebra to Quantum Chromodynamics: A Case for Structural Realism, Cambridge: Cambridge University Press.
• Castellani, E., 2002, “Reductionism, emergence, and effective field theories,” Studies in History and Philosophy of Modern Physics, 33: 251-267.
• Clifton, R. (ed.), 1996, Perspectives on Quantum Reality: Non-Relativistic, Relativistic, and Field-Theoretic, Dordrecht et al.: Kluwer.
• Clifton, R. and H. Halvorson, 2001, “Entanglement and open systems in algebraic quantum field theory,” Studies in History and Philosophy of Modern Physics, 32: 1-31; reprinted in Butterfield & Halvorson 2004.
• Davies, P. (ed.), 1989, The New Physics, Cambridge: Cambridge University Press.
• Dawid, R., 2009, “On the conflicting assessments of string theory”, Philosophy of Science, 76: 984–996.
• Dieks, D., 2002, “Events and covariance in the interpretation of quantum field theory,” in Kuhlmann et al. 2002, pp. 215-234.
• Dieks, D. and A. Lubberdink, 2011, “How classical particles emerge from the quantum world”, Foundations of Physics, 41: 1051–1064.
• Dirac, P. A. M., 1927, “The quantum theory of emission and absorption of radiation,” Proceedings of the Royal Society of London, A 114: 243-256.
• Earman, John, 2011, “The Unruh effect for philosophers”, Studies In History and Philosophy of Modern Physics, 42: 81 – 97.
• Earman, J. and D. Fraser, 2006, “Haag’s theorem and its implications for the foundations of quantum field theory”, Erkenntnis, 64: 305–344.
• Fleming, G. N. and J. Butterfield, 1999, “Strange positions,” in Butterfield & Pagonis 1999, pp. 108-165.
• Fraser, D., 2008, “The fate of “particles” in quantum field theories with interactions”, Studies in History and Philosophy of Modern Physics, 39: 841–59.
• –––, 2009, “Quantum field theory: Underdetermination, inconsistency, and idealization”, Philosophy of Science, 76: 536–567.
• –––, 2011, “How to take particle physics seriously: A further defence of axiomatic quantum field theory”, Studies in History and Philosophy of Modern Physics, 42: 126–135.
• Georgi, H., 1989, “Effective quantum field theories,” in Davies 1989, pp. 446-457.
• Greene, B., 1999, The Elegant Universe. Superstrings, Hidden Dimensions and the Quest for the Ultimate Theory, New York: W. W. Norton and Company.
• Haag, R., 1996, Local Quantum Physics: Fields, Particles, Algebras, 2nd edition, Berlin et al.: Springer.
• Haag, R. and D. Kastler, 1964, “An algebraic approach to quantum field theory,” Journal of Mathematical Physics, 5: 848-861.
• Halvorson, H., 2001, “Reeh-schlieder defeats newton-wigner: On alternative localization schemes in relativistic quantum field theory”, Philosophy of Science, 68: 111–133.
• Halvorson, H. and R. Clifton, 2002, “No place for particles in relativistic quantum theories?” Philosophy of Science, 69: 1-28; reprinted in Butterfield and Halvorson 2004 and in Kuhlmann et al. 2002.
• Halvorson, H. and M. Müger , 2007, “Algebraic quantum field theory (with an appendix by Michael Müger)”, in Handbook of the Philosophy of Physics — Part A, Jeremy Butterfield and John Earman (eds.), Amsterdam: Elsevier, 731–922.
• Hartmann, S., 2001, “Effective field theories, reductionism, and explanation,” Studies in History and Philosophy of Modern Physics, 32: 267-304.
• Hättich, F., 2004, Quantum Processes — A Whiteheadian Interpretation of Quantum Field Theory, Münster: agenda Verlag.
• Healey, R., 2007, Gauging What’s Real: The Conceptual Foundations of Contemporary Gauge Theories, Oxford: Oxford University Press.
• Heisenberg, W. and W. Pauli, 1929, “Zur Quantendynamik der Wellenfelder,” Zeitschrift für Physik, 56: 1-61.
• Hoddeson, L., with L. Brown, M. Riordan, and M. Dresden (eds.), 1997, The Rise of the Standard Model: A History of Particle Physics from 1964 to 1979, Cambridge: Cambridge University Press.
• Horuzhy, S. S., 1990, Introduction to Algebraic Quantum Field Theory, 1st edition, Dordrecht et al.: Kluwer.
• Huggett, N., 2000, “Philosophical foundations of quantum field theory”, The British Journal for the Philosophy Science, 51: 617–637.
• –––, 2003, “Philosophical foundations of quantum field theory”, in Philosophy of Science Today, P. Clark and K. Hawley, eds., Oxford: Clarendon Press, 617?37.
• Johansson, L. G. and K. Matsubara, 2011, “String theory and general methodology: A mutual evaluation”, Studies in History and Philosophy of Modern Physics, 42: 199–210.
• Kaku, M., 1999, Introduction to Superstrings and M-Theory, New York: Springer.
• Kantorovich, A., 2003, “The priority of internal symmetries in particle physics”, Studies in History and Philosophy of Modern Physics, 34: 651–675.
• Kastler, D. (ed.), 1990, The Algebraic Theory of Superselection Sectors: Introduction and Recent Results, Singapore et al.: World Scientific.
• Kiefer, C., 2007, Quantum Gravity, Oxford: Oxford University Press. Second edition.
• Kronz, F. and T. Lupher, 2005, “Unitarily inequivalent representations in algebraic quantum theory”, International Journal of Theoretical Physics, 44: 1239–1258.
• Kuhlmann, M., 2010a, The Ultimate Constituents of the Material World – In Search of an Ontology for Fundamental Physics, Frankfurt: ontos Verlag.
• –––, 2010b, “Why conceptual rigour matters to philosophy: On the ontological significance of algebraic quantum field theory”, Foundations of Physics, 40: 1625–1637.
• –––, 2011, “Review of “From Current Algebra to Quantum Chromodynamics: A Case for Structural Realism” by T. Y. Cao”, Notre Dame Philosophical Reviews, available online.
• Kuhlmann, M. with H. Lyre and A. Wayne (eds.), 2002, Ontological Aspects of Quantum Field Theory, London: World Scientific Publishing.
• Ladyman, J., 1998, “What is structural realism?” Studies in History and Philosophy of Science, 29: 409-424.
• Landsman, N. P., 1996, “Local quantum physics,” Studies in History and Philosophy of Modern Physics, 27: 511-525.
• Lupher, T., 2010, “Not particles, not quite fields: An ontology for quantum field theory”, Humana Mente, 13: 155–173.
• Lyre, H., 2004, “Holism and structuralism in U(1) gauge theory,” Studies in History and Philosophy of Modern Physics, 35/4: 643-670.
• –––, 2012, “Structural invariants, structural kinds, structural laws”, in Probabilities, Laws, and Structures, Dordrecht: Springer, 179–191.
• Malament, D., 1996, “In defense of dogma: Why there cannot be a relativistic quantum mechanics of (localizable) particles,” in Clifton 1996, pp. 1-10.
• Mandl, F. and G. Shaw, 2010, Quantum Field Theory, Chichester (UK): John Wiley & Sons, second ed.
• Martin, C. A., 2002, “Gauge principles, gauge arguments and the logic of nature,” Philosophy of Science, 69/3: 221-234.
• Morganti, M., 2009, “Tropes and physics”, Grazer Philosophische Studien, 78: 185–205.
• Newton, T. D. and E. P. Wigner, 1949, “Localized states for elementary particles,” Reviews of Modern Physics, 21/3: 400-406.
• Peskin, M. E. and D. V. Schroeder, 1995, Introduction to Quantum Field Theory, Cambridge (MA): Perseus Books.
• Polchinski, J., 2000, String Theory, 2 volumes, Cambridge: Cambridge University Press.
• Redhead, M. L. G., 1995a, “More ado about nothing,” Foundations of Physics, 25: 123-137.
• –––, 1995b, “The vacuum in relativistic quantum field theory,” in Hull et al. 1994 (vol. 2), pp. 88-89.
• –––, 1999, “Quantum field theory and the philosopher,” in Cao 1999, pp. 34-40.
• –––, 2002, “The interpretation of gauge symmetry,” in Kuhlmann et al. 2002, pp. 281-301.
• Reeh, H. and S. Schlieder, 1961, “Bemerkungen zur Unitäräquivalenz von Lorentzinvarianten Feldern,” Nuovo Cimento, 22: 1051-1068.
• Rickles, D., 2008, “Quantum gravity: A primer for philosophers”, in The Ashgate Companion to Contemporary Philosophy of Physics, Dean Rickles (ed.), Aldershot: Ashgate, 262–382.
• Roberts, B. W., 2011, “Group structural realism”, The British Journal for the Philosophy Science, 62: 47?69.
• Roberts, J. E., 1990, “Lectures on algebraic quantum field theory,” in Kastler 1990, pp. 1-112.
• Ruetsche, L., 2002, “Interpreting quantum field theory”, Philosophy of Science, 69: 348–378.
• –––, 2003, “A matter of degree: Putting unitary equivalence to work,” Philosophy of Science, 70/5: 1329-1342.
• –––, 2006, “Johnny's so long at the ferromagnet”, Philosophy of Science, 73: 473–486.
• –––, 2011, “Why be normal?”, Studies in History and Philosophy of Modern Physics, 42: 107–115.
• Ryder, L. H., 1996, Quantum Field Theory, 2nd edition, Cambridge: Cambridge University Press.
• Saunders, S., 1995, “A dissolution of the problem of locality,” in Hull, M. F. D., Forbes, M., and Burian, R. M., eds., 1995, Proceedings of the Biennial Meeting of the Philosophy of Science Association: PSA 1994, East Lansing, MI: Philosophy of Science Association, vol. 2, pp. 88-98.
• Saunders, S. and H. R. Brown (eds.), 1991, The Philosophy of Vacuum, Oxford: Clarendon Press.
• Schweber, S. S., 1994, QED and the Men Who Made It,” Princeton: Princeton University Press.
• Segal, I. E., 1947, “Postulates for general quantum mechanics,” Annals of Mathematics, 48/4: 930-948.
• Seibt, J., 2002, “The matrix of ontological thinking: Heuristic preliminaries for an ontology of QFT,” in Kuhlmann et al. 2002, pp. 53-97.
• Streater, R. F. and A. S. Wightman, 1964, PCT, Spin and Statistics, and all that, New York: Benjamin.
• Teller, P., 1995, An Interpretive Introduction to Quantum Field Theory, Princeton: Princeton University Press.
• Unruh, W. G., 1976, “Notes on black hole evaporation,” Physical Review D, 14: 870-92.
• Unruh, W. G. and R. M. Wald, 1984, “What happens when an accelerating observer detects a Rindler particle?” Physical Review D, 29: 1047-1056.
• Wallace, D., 2006, “In defence of naiveté: The conceptual status of Lagrangian quantum field theory”, Synthese, 151: 33–80.
• –––, 2011, “Taking particle physics seriously: A critique of the algebraic approach to quantum field theory”, Studies in History and Philosophy of Modern Physics, 42: 116–125.
• Wayne, Andrew, 2002, “A naive view of the quantum field”, in Kuhlmann et al. 2002, 127–133.
• –––, 2008, “A trope-bundle ontology for field theory”, in The Ontology of Spacetime II, Dennis Dieks (ed.), Amsterdam: Elsevier, 1–15.
• Weinberg, S., 1995, The Quantum Theory of Fields – Foundations (Volume 1), Cambridge: Cambridge University Press.
• –––, 1996, The Quantum Theory of Fields – Modern Applications (Volume 2), Cambridge: Cambridge University Press.
• Weingard, R., 2001, “A philosopher looks at string theory,” in Callender & Huggett 2001, pp. 138-151.
• Weyl, H., 1952, Symmetry, Princeton: Princeton University Press.
• Wightman, A. S., 1956, “Quantum field theory in terms of vacuum expectation values”, Physical Review, 101: 860–66.
• Wigner, E. P., 1939, “On unitary representations of the inhomoneneous Lorentz group,” Annals of Mathematics, 40: 149-204.
Other Internet Resources
Copyright © 2012 by
Meinard Kuhlmann <meik@uni-bremen.de>
Please Read How You Can Help Keep the Encyclopedia Free |
a54d743faf112c3f | Singularity structure of Møller - Plesset perturbation theory
D. Z. Goodson and A. V. Sergeev
Møller-Plesset perturbation theory expresses the energy as a function E(z) of a perturbation parameter, z. This function contains singular points in the complex z-plane that affect the convergence of the perturbation series. A review is given of what is known in advance about the singularity structure of E(z) from functional analysis of the Schrödinger equation, and of techniques for empirically analyzing the singularity structure using large-order perturbation series. The physical significance of the singularities is discussed. They fall into two classes, which behave differently in response to changes in basis set or molecular geometry. One class consists of complex-conjugate square-root branch points that connect the ground state to a low-lying excited state. The other class consists of a critical point on the negative real $z$-axis, corresponding to an autoionization phenomenon. These two kinds of singularities are characterized and contrasted using quadratic summation approximants. A new classification scheme for Møller-Plesset perturbation series is proposed, based on the relative positions in the z-plane of the two classes of singularities. Possible applications of this singularity analysis to practical problems in quantum chemistry are described.
Text of the paper: PDF format, TeX file.
Results of work at UMassD
Designed by A. Sergeev. |
ae59e3c01f760827 | next up previous contents
Next: Some Analytically Soluble Problems Up: quantrev Previous: Linear Vector Spaces in Contents
Postulates of Quantum Mechanics
In this section, we will present six postulates of quantum mechanics. Again, we follow the presentation of McQuarrie [1], with the exception of postulate 6, which McQuarrie does not include. A few of the postulates have already been discussed in section 3.
Postulate 1. The state of a quantum mechanical system is completely specified by a function $\Psi({\bf r}, t)$ that depends on the coordinates of the particle(s) and on time. This function, called the wave function or state function, has the important property that $\Psi^{*}({\bf r}, t)
\Psi({\bf r}, t) d\tau$ is the probability that the particle lies in the volume element $d\tau$ located at ${\bf r}$ at time $t$.
The wavefunction must satisfy certain mathematical conditions because of this probabilistic interpretation. For the case of a single particle, the probability of finding it somewhere is 1, so that we have the normalization condition
\int_{-\infty}^{\infty} \Psi^{*}({\bf r}, t) \Psi({\bf r}, t) d\tau = 1
\end{displaymath} (110)
It is customary to also normalize many-particle wavefunctions to 1.2 The wavefunction must also be single-valued, continuous, and finite.
Postulate 2. To every observable in classical mechanics there corresponds a linear, Hermitian operator in quantum mechanics.
This postulate comes about because of the considerations raised in section 3.1.5: if we require that the expectation value of an operator $\hat{A}$ is real, then $\hat{A}$ must be a Hermitian operator. Some common operators occuring in quantum mechanics are collected in Table 1.
Table 1: Physical observables and their corresponding quantum operators (single particle)
Observable Observable Operator Operator
Name Symbol Symbol Operation
Position ${\bf r}$ $\hat{\bf r}$ Multiply by ${\bf r}$
Momentum ${\bf p}$ $\hat{\bf p}$ $-i \hbar \left(
\hat{i} \frac{\partial}{\partial x} +
\hat{j} \frac{\partial}{\partial y} +
\hat{k} \frac{\partial}{\partial z} \right)$
Kinetic energy $T$ $\hat{T}$ $- \frac{\hbar^2}{2m} \left(
Potential energy $V({\bf r})$ $\hat{V}({\bf r})$ Multiply by $V({\bf r})$
Total energy $E$ $\hat{H}$ $- \frac{\hbar^2}{2m} \left(
V({\bf r})$
Angular momentum $l_x$ $\hat{l}_x$ $ -i \hbar \left(
y \frac{\partial}{\partial z} - z \frac{\partial}{\partial y} \right)$
$l_y$ $\hat{l}_y$ $ -i \hbar \left(
z \frac{\partial}{\partial x} - x \frac{\partial}{\partial z} \right)$
$l_z$ $\hat{l}_z$ $ -i \hbar \left(
x \frac{\partial}{\partial y} - y \frac{\partial}{\partial x} \right)$
Postulate 3. In any measurement of the observable associated with operator $\hat{A}$, the only values that will ever be observed are the eigenvalues $a$, which satisfy the eigenvalue equation
\hat{A} \Psi = a \Psi
\end{displaymath} (111)
This postulate captures the central point of quantum mechanics--the values of dynamical variables can be quantized (although it is still possible to have a continuum of eigenvalues in the case of unbound states). If the system is in an eigenstate of $\hat{A}$ with eigenvalue $a$, then any measurement of the quantity $A$ will yield $a$.
Although measurements must always yield an eigenvalue, the state does not have to be an eigenstate of $\hat{A}$ initially. An arbitrary state can be expanded in the complete set of eigenvectors of $\hat{A}$ ( $\hat{A}
\Psi_i = a_i \Psi_i)$ as
\Psi = \sum_i^{n} c_i \Psi_i
\end{displaymath} (112)
where $n$ may go to infinity. In this case we only know that the measurement of $A$ will yield one of the values $a_i$, but we don't know which one. However, we do know the probability that eigenvalue $a_i$ will occur--it is the absolute value squared of the coefficient, $\vert c_i\vert^2$ (cf. section 3.1.4), leading to the fourth postulate below.
An important second half of the third postulate is that, after measurement of $\Psi$ yields some eigenvalue $a_i$, the wavefunction immediately ``collapses'' into the corresponding eigenstate $\Psi_i$ (in the case that $a_i$ is degenerate, then $\Psi$ becomes the projection of $\Psi$ onto the degenerate subspace). Thus, measurement affects the state of the system. This fact is used in many elaborate experimental tests of quantum mechanics.
Postulate 4. If a system is in a state described by a normalized wave function $\Psi$, then the average value of the observable corresponding to $\hat{A}$ is given by
<A> = \int_{-\infty}^{\infty} \Psi^{*} \hat{A} \Psi d\tau
\end{displaymath} (113)
Postulate 5. The wavefunction or state function of a system evolves in time according to the time-dependent Schrödinger equation
\hat{H} \Psi({\bf r}, t) = i \hbar \frac{\partial \Psi}{\partial t}
\end{displaymath} (114)
The central equation of quantum mechanics must be accepted as a postulate, as discussed in section 2.2.
Postulate 6. The total wavefunction must be antisymmetric with respect to the interchange of all coordinates of one fermion with those of another. Electronic spin must be included in this set of coordinates.
The Pauli exclusion principle is a direct result of this antisymmetry principle. We will later see that Slater determinants provide a convenient means of enforcing this property on electronic wavefunctions.
next up previous contents
David Sherrill 2006-08-15 |
6e25fe71235ef001 | Analytic Properties of Feynman Diagrams in Quantum Field
Format: Paperback
Language: English
Format: PDF / Kindle / ePub
Size: 7.91 MB
Downloadable formats: PDF
A fibre with a sharp change between the refractive index of the core fibre and the refractive index of the cladding is called a step index fibre. At that point though, it still wasn't proven, although the fact that everything is made up of ENERGY, was. It doesn't leave a "pattern" of any kind, just one little blotch. The quantum potential formulation of the de Broglie-Bohm theory is still fairly widely used.
Pages: 168
Publisher: Pergamon Pr; 1st edition (June 1986)
ISBN: 0080165443
Classical and Quantum Gravity: Theory, Analysis, and Applications (Physics Research and Technology)
Using the energy constant for light, it is now possible to complete de Broglie’s calculations and determine the rest mass of a single quantum of light , cited: Symmetries and Semiclassical download online download online. He presumed that the light wasn't really a continuous wave as everyone assumed, but perhaps could exist with only specific amounts, or "quanta," of energy. Planck didn't really believe this was true about light, in fact he later referred to this math gimmick as "an act of desperation." The wave particle duality exists to ensure that life would continue. If it didn't exist, black holes would eat up stars and eventually become 100% dark energy in the universe. The wave particle duality overcomes time and space as a conscious cosmic function of pro-reality and life. Instead, it is an oscillation of 0 and 1. Reality and non reality are in a dance moving with a probability of existence Letters on Wave Mechanics: download pdf Letters on Wave Mechanics:. It is one of the strange, but fundamental, concepts in modern physics that light has both a wave and particle state (but not at the same time), called wave-particle dualism. Perhaps the foremost scientists of the 20th century was Niels Bohr, the first to apply Planck's quantum idea to problems in atomic physics. In the early 1900's, Bohr proposed a quantum mechanical description of the atom to replace the early model of Rutherford Selected Papers of Abdus Salam download pdf Is SAP true, in which case why prefer physical explanations to it, or is it false, in which case why ever apply it? It is precisely MW’s unfalsifiability that bothers some leading physicists such as Allen Guth (the inflationary universe theory), George Smoot (led the COBE effort: experimental verification of the inflationary universe) and Brian Greene (superstring theorist) Nonlinear Waves, Solitons and Chaos
Capillary action: rise of liquid in narrow tube due to surface tension. Carnot efficiency: ideal efficiency of heat engine or refrigerator working between two constant temperatures Gravitation and Spacetime (Second Edition) This is coincidentally equal to the speed of light in a vacuum, c = 3 × 108 m s−1. Furthermore, a measurement of the speed of a particular light beam yields the same answer regardless of the speed of the light source or the speed at which the measuring instrument is moving Euclidean Quantum Gravity on Manifolds with Boundary (Fundamental Theories of Physics) Euclidean Quantum Gravity on Manifolds. The reader may be aware from quantum theory of Schrodinger’s equation, which describes the probability of a particle (or system of particles) being in each possible state that it could be in. In Penrose’s example, the equation would be simple in that it would include just two probabilities: one giving the chance of finding the electron spinning ‘up’, and the other giving the probability of the electron spinning ‘down’ , source: Heavy Quarkonium Production Phenomenology and Automation of One-Loop Scattering Amplitude Computations (Springer Theses) Heavy Quarkonium Production.
Gyros, Clocks, Interferometers...: Testing Relativistic Gravity in Space (Lecture Notes in Physics)
Almost All About Waves
Principles and Applications of Wavelet Transform
Time-Harmonic Electromagnetic Fields (McGraw-Hill Texts in Electrical Engineering)
It is now time to define this concept more precisely. A quantum mechanical wave function is said to be invariant under some transformation if the transformed wave function is observationally indistinguishable from the original Scattering Theory If the universe is infinite, and there is infinitely more matter beyond the visible universe, that gravity would be balanced, and there would be no reason to postulate a cosmological constant at all. If that is true, then cosmological redshift may have something to do with the dissipation of energy as light waves move through the aether Lectures on Electromagnetic Theory: A Short Course for Engineers It has always been known that making observations affects a phenomenon, but the point is that the effect cannot be disregarded or minimized or decreased arbitrarily by rearranging the apparatus. When we look for a certain phenomenon we cannot help but disturb it in a certain minimum way, and the disturbance is necessary for the consistency of the viewpoint , e.g. Optics in Instruments: Applications in Biology and Medicine (ISTE) Whatever you think about and believe to be true regardless if those beliefs are based on "real truth" or "perceived truth" are what determines how your life will unfold. Quantum Physics has shown us that there exists no such thing as "untruth" only physical experiences in each area of our life which are formed based on our individual "perceptions" of truth , e.g. Field and Wave Electromagnetics (Addison-Wesley series in electrical engineering) read here. The two sound very much alike, but they are different Topological and Geometrical Methods in Field Theory, Turku, Finland, 26 May-1 June 1991 This resonance work energy was in addition to the thermal energy already inherent in the system as a result of its temperature. The amount of resonance work energy at the microscale is the resonance work variable, “rA”. In the solvent system example, individual elements in the system irradiated with resonant EM waves possessed greater energy than the elements in the thermal system ref.: Wave Propagation and download online
Probabilistic Methods in Quantum Field Theory and Quantum Gravity (NATO Science Series B: Physics)
Theory of Solitons in Inhomogeneous Media
Physics of Waves (Fundamentals of Physics)
Mechanics and Wave Motion
Radiation and Quantum Physics (Oxford physics series, 3)
Integrable Quantum Field Theories (Nato Science Series B:)
PCT, Spin & Statistics, and All That
Approximations and Numerical Methods for the Solution of Maxwell's Equations (The Institute of Mathematics and its Applications Conference Series, New Series)
Beyond Conventional Quantization
Strings, Branes and Gravity
Theory of Electromagnetic Waves: A Coordinate-Free Approach (Mcgraw Hill Series in Electrical and Computer Engineering)
Distributed Feedback Laser Diodes: Principles and Physical Modelling
Waves in Layered Media, 2nd Edition (Applied Mathematics and Mechanics, Vol. 16)
Quantum Field Theory and String Theory (Nato Science Series B:)
Physics of Solitons
Advances in Topological Quantum Field Theory: Proceedings of the NATO Adavanced Research Workshop on New Techniques in Topological Quantum Field ... 22 - 26 August 2001 (Nato Science Series II:)
Solitary Waves in Dispersive Complex Media: 149 (Springer Series in Solid-State Sciences)
Introduction To Nearshore Hydrodynamics (Advanced Series on Ocean Engineering (Paperback))
Influencing variables (temperature and surface electrodes, surface electrodes, distance between them, nature and concentration of the solution). Ionic conductivity of a solution, σ - Linear chain, branched or cyclic saturated and unsaturated Lecture Notes on read online From Schrödinger equation can be derived the fact that the average position varies according to the average momentum. This coincides with the classical setting of classical mechanics! Even though I can prove it mathematically, I have no understanding of the fundamental reason why Schrödinger equation links average position and average momentum Gauge Theory on Compact Surfaces (Memoirs of the American Mathematical Society) read here. And both would be solutions by superposition. So that's the end of the theorem because then these things are even or odd and have the same energy. So the solutions can be chosen to be even or odd under x. So if you've proven this, you've got it already. For bound states in one dimension, the solutions not anymore the word chosen Vibrations and Waves read online. IIT JEE 1980 - 2009 Transverse wave – Here, the elements of the disturbed media of the travelling wave, move perpendicular to the direction of the wave’s propagation. A particle at the crest / trough has zero velocity. The distance between two consecutive crests / troughs is equal to the wavelength of the wave. Therefore, the distance between a consecutive pair of crest / trough is half of the wave’s wavelength , source: Collected Papers on Wave Mechanics Collected Papers on Wave Mechanics. Frequency refers to the addition of time. Wave motion transfers energy from one point to another, which displace particles of the transmission medium–that is, with little or no associated mass transport. Waves consist, instead, of oscillations or vibrations (of a physical quantity), around almost fixed locations. Mechanical waves propagate through a medium, and the substance of this medium is deformed , source: The Quantum Theory of Fields, Volume 3: Supersymmetry by Steven Weinberg B01_0207 This is the Pauli exclusion principle. All particles with half-integer spin, including electrons, behave this way and are called fermions. For particles with integer spin, including photons, the wave function does not change sign. Such particles are called bosons. Electrons in an atom arrange themselves in shells because they are fermions, but light from a laser emerges in a single superintense beamessentially a single quantum statebecause light is composed of bosons Gauge Field Theories (Frontiers in Physics) Gauge Field Theories (Frontiers in. Interpretations of quantum mechanics address questions such as what the relation is between the wave function, the underlying reality, and the results of experimental measurements. An important aspect is the relationship between the Schrödinger equation and wavefunction collapse , source: Wave Dynamics and Stability of read here I think the photos we used provided a good mix of the reality at Malibu when there's a good swell , e.g. Introduction to Mechanical Vibrations Quantum theory permits the quantitative understanding of molecules, of solids and liquids, and of conductors and semiconductors. It explains bizarre phenomena such as superconductivity and superfluidity, and exotic forms of matter such as the stuff of neutron stars and Bose-Einstein condensates, in which all the atoms in a gas behave like a single superatom , e.g. Ray and Wave Chaos in Ocean Acoustics - Chaos in Waveguides (CNC Series on Complexity, Nonlinearity, and Chaos) Ray and Wave Chaos in Ocean Acoustics -. Augustine’s classical philosophical argument that ‘the effect of the universe’s existence requires a suitable cause’ is unambiguously applicable here ref.: A Study Of Splashes read pdf |
c39a0f1d644e9415 | Dynamical billiards
From Wikipedia, the free encyclopedia
Jump to: navigation, search
The Bunimovich stadium is a chaotic dynamical billiard
A billiard is a dynamical system in which a particle alternates between motion in a straight line and specular reflections from a boundary. When the particle hits the boundary it reflects from it without loss of speed. Billiard dynamical systems are Hamiltonian idealizations of the game of billiards, but where the region contained by the boundary can have shapes other than rectangular and even be multidimensional. Dynamical billiards may also be studied on non-Euclidean geometries; indeed, the very first studies of billiards established their ergodic motion on surfaces of constant negative curvature. The study of billiards which are kept out of a region, rather than being kept in a region, is known as outer billiard theory.
The motion of the particle in the billiard is a straight line, with constant energy, between reflections with the boundary (a geodesic if the Riemannian metric of the billiard table is not flat). All reflections are specular: the angle of incidence just before the collision is equal to the angle of reflection just after the collision. The sequence of reflections is described by the billiard map that completely characterizes the motion of the particle.
Billiards capture all the complexity of Hamiltonian systems, from integrability to chaotic motion, without the difficulties of integrating the equations of motion to determine its Poincaré map. Birkhoff showed that a billiard system with an elliptic table is integrable.
Equations of motion[edit]
The Hamiltonian for a particle of mass m moving freely without friction on a surface is:
where is a potential designed to be zero inside the region in which the particle can move, and infinity otherwise:
This form of the potential guarantees a specular reflection on the boundary. The kinetic term guarantees that the particle moves in a straight line, without any change in energy. If the particle is to move on a non-Euclidean manifold, then the Hamiltonian is replaced by:
where is the metric tensor at point . Because of the very simple structure of this Hamiltonian, the equations of motion for the particle, the Hamilton–Jacobi equations, are nothing other than the geodesic equations on the manifold: the particle moves along geodesics.
Notable billiards and billiard classes[edit]
Hadamard's billiards[edit]
Hadamard's billiards concern the motion of a free point particle on a surface of constant negative curvature, in particular, the simplest compact Riemann surface with negative curvature, a surface of genus 2 (a two-holed donut). The model is exactly solvable, and is given by the geodesic flow on the surface. It is the earliest example of deterministic chaos ever studied, having been introduced by Jacques Hadamard in 1898.
Artin's billiard[edit]
Main article: Artin billiard
Artin's billiard considers the free motion of a point particle on a surface of constant negative curvature, in particular, the simplest non-compact Riemann surface, a surface with one cusp. It is notable for being exactly solvable, and yet not only ergodic but also strongly mixing. It is an example of an Anosov system. This system was first studied by Emil Artin in 1924.
Dispersing and Semi-Dispersing billiards[edit]
Let M be complete smooth Riemannian manifold without boundary, maximal sectional curvature of which is not greater than K and with the injectivity radius . Consider a collection of n geodesically convex subsets (walls) , , such that their boundaries are smooth submanifolds of codimension one. Let , where denotes the interior of the set . The set will be called the billiard table. Consider now a particle that moves inside the set B with unit speed along a geodesic until it reaches one of the sets Bi (such an event is called a collision) where it reflects according to the law “the angle of incidence is equal to the angle of reflection” (if it reaches one of the sets , , the trajectory is not defined after that moment). Such dynamical system is called semi-dispersing billiard. If the walls are strictly convex, then the billiard is called dispersing. The naming is motivated by observation that a locally parallel beam of trajectories disperse after a collision with strictly convex part of a wall, but remain locally parallel after a collision with a flat section of a wall.
Dispersing boundary plays the same role for billiards as negative curvature does for geodesic flows causing the exponential instability of the dynamics. It is precisely this dispersing mechanism that gives dispersing billiards their strongest chaotic properties, as it was established by Yakov G. Sinai.[1] Namely, the billiards are ergodic, mixing, Bernoulli, having a positive Kolmogorov-Sinai entropy and an exponential decay of correlations.
Chaotic properties of general semi-dispersing billiards are not understood that well, however, those of one important type of semi-dispersing billiards, hard ball gas were studied in some details since 1975 (see next section).
General results of Dmitry Burago and Serge Ferleger [2] on the uniform estimation on the number of collisions in non-degenerate semi-dispersing billiards allow to establish finiteness of its topological entropy and no more than exponential growth of periodic trajectories.[3] In contrast, degenerate semi-dispersing billiards may have infinite topological entropy.[4]
Hard ball system[edit]
Lorentz gas[edit]
A trajectory in the Lorentz gas
The table of the Lorentz gas is a square with a disk removed from its center; the table is flat, having no curvature. The billiard arises from studying the behavior of two interacting disks bouncing inside a square, reflecting off the boundaries of the square and off each other. By eliminating the center of mass as a configuration variable, the dynamics of two interacting disks reduces to the dynamics in the Sinai billiard.
The billiard was introduced by Yakov G. Sinai as an example of an interacting Hamiltonian system that displays physical thermodynamic properties: all of its possible trajectories are ergodic and it has a positive Lyapunov exponent.
Sinai's great achievement with this model was to show that the classical Boltzmann–Gibbs ensemble for an ideal gas is essentially the maximally chaotic Hadamard billiards.
Bunimovich stadium[edit]
The table called the Bunimovich stadium is a rectangle capped by semicircles. Until it was introduced by Leonid Bunimovich, billiards with positive Lyapunov exponents were thought to need convex scatters, such as the disk in the Sinai billiard, to produce the exponential divergence of orbits. Bunimovich showed that by considering the orbits beyond the focusing point of a concave region it was possible to obtain exponential divergence.
Generalized billiards[edit]
Generalized billiards (GB) describe a motion of a mass point (a particle) inside a closed domain with the piece-wise smooth boundary . On the boundary the velocity of point is transformed as the particle underwent the action of generalized billiard law. GB were introduced by Lev D. Pustyl'nikov in the general case,[5] and, in the case when is a parallelepiped[6] in connection with the justification of the second law of thermodynamics. From the physical point of view, GB describe a gas consisting of finitely many particles moving in a vessel, while the walls of the vessel heat up or cool down. The essence of the generalization is the following. As the particle hits the boundary , its velocity transforms with the help of a given function , defined on the direct product (where is the real line, is a point of the boundary and is time), according to the following law. Suppose that the trajectory of the particle, which moves with the velocity , intersects at the point at time . Then at time the particle acquires the velocity , as if it underwent an elastic push from the infinitely-heavy plane , which is tangent to at the point , and at time moves along the normal to at with the velocity . We emphasize that the position of the boundary itself is fixed, while its action upon the particle is defined through the function .
We take the positive direction of motion of the plane to be towards the interior of . Thus if the derivative , then the particle accelerates after the impact.
If the velocity , acquired by the particle as the result of the above reflection law, is directed to the interior of the domain , then the particle will leave the boundary and continue moving in until the next collision with . If the velocity is directed towards the outside of , then the particle remains on at the point until at some time the interaction with the boundary will force the particle to leave it.
If the function does not depend on time ; i.e., , the generalized billiard coincides with the classical one.
This generalized reflection law is very natural. First, it reflects an obvious fact that the walls of the vessel with gas are motionless. Second the action of the wall on the particle is still the classical elastic push. In the essence, we consider infinitesimally moving boundaries with given velocities.
It is considered the reflection from the boundary both in the framework of classical mechanics (Newtonian case) and the theory of relativity (relativistic case).
Main results: in the Newtonian case the energy of particle is bounded, the Gibbs entropy is a constant,[6][7][8] (in Notes) and in relativistic case the energy of particle, the Gibbs entropy, the entropy with respect to the phase volume grow to infinity,[6][8] (in Notes), references to generalized billiards.
Quantum chaos[edit]
The quantum version of the billiards is readily studied in several ways. The classical Hamiltonian for the billiards, given above, is replaced by the stationary-state Schrödinger equation or, more precisely,
where is the Laplacian. The potential that is infinite outside the region but zero inside it translates to the Dirichlet boundary conditions:
As usual, the wavefunctions are taken to be orthonormal:
Curiously, the free-field Schrödinger equation is the same as the Helmholtz equation,
This implies that two and three-dimensional quantum billiards can be modelled by the classical resonance modes of a radar cavity of a given shape, thus opening a door to experimental verification. (The study of radar cavity modes must be limited to the transverse magnetic (TM) modes, as these are the ones obeying the Dirichlet boundary conditions).
The semi-classical limit corresponds to which can be seen to be equivalent to , the mass increasing so that it behaves classically.
As a general statement, one may say that whenever the classical equations of motion are integrable (e.g. rectangular or circular billiard tables), then the quantum-mechanical version of the billiards is completely solvable. When the classical system is chaotic, then the quantum system is generally not exactly solvable, and presents numerous difficulties in its quantization and evaluation. The general study of chaotic quantum systems is known as quantum chaos.
A particularly striking example of scarring on an elliptical table is given by the observation of the so-called quantum mirage.
The most practical application of theory of quantum billiards is related with double-clad fibers. In such a fiber laser, the small core with low numerical aperture confines the signal, and the wide cladding confines the multi-mode pump. In the paraxial approximation, the complex field of pump in the cladding behaves like a wave function in the quantum billiard. The modes of the cladding with scarring may avoid the core, and symmetrical configurations enhance this effect. The chaotic fibers[9] provide good coupling; in the first approximation, such a fiber can be described with the same equations as an idealized billiard. The coupling is especially poor in fibers with circular symmetry while the spiral-shaped fiber—with the core close to the chunk of the spiral—shows good coupling properties. The small spiral deformation forces all the scars to be coupled with the core.[10] In microwave ovens the stadium-like shape of the cavity is selected so that the microwave spread uniformly in the entire region of the cavity and the food will get heated uniformly.[citation needed]
See also[edit]
1. ^ http://www.mathunion.org/ICM/ICM1990.1/Main/icm1990.1.0249.0260.ocr.pdf
2. ^ Burago, D.; Ferleger, S.; Kononenko, A. (1 January 1998). "Uniform Estimates on the Number of Collisions in Semi-Dispersing Billiards". Annals of Mathematics. 147 (3): 695–708. doi:10.2307/120962. JSTOR 120962 – via JSTOR.
3. ^ Burago, D.; Ferleger, S. (26 May 1997). "Topological Entropy Of Semi-Dispersing Billiards". Ergodic Theory and Dynamical Systems. 18 (4): 791. doi:10.1017/S0143385798108246.
4. ^ Burago, D. (1 February 2006). "Semi-dispersing billiards of infinite topological entropy". Ergodic Theory and Dynamical Systems. 26 (1): 45–52. doi:10.1017/S0143385704001002 – via Cambridge Journals Online.
5. ^ Pustyl'nikov, L. D. (1999). "The law of entropy increase and generalized billiards". Russian Mathematical Surveys. 54 (3): 650–651. Bibcode:1999RuMaS..54..650P. doi:10.1070/rm1999v054n03abeh000168.
6. ^ a b c Pustyl'nikov, L. D. (1995). "Poincaré models, rogorous justification of the second law of thermodynamics from mechanics, and the Fermi acceleration mechanism". Russian Mathematical Surveys. 50 (1): 145–189. Bibcode:1995RuMaS..50..145P. doi:10.1070/rm1995v050n01abeh001663.
7. ^ Pustyl'nikov, L. D. (2005). "Generalized Newtonian periodic billiards in a ball". UMN. 60 (2): 171–172. English translation in Russian Mathematical Surveys, 60(2), pp. 365-366 (2005).
8. ^ a b Deryabin, Mikhail V.; Pustyl'nikov, Lev D. (2007). "Nonequilibrium Gas and Generalized Billiards". Journal of Statistical Physics. 126 (1): 117–132. Bibcode:2007JSP...126..117D. doi:10.1007/s10955-006-9250-4.
10. ^ Kouznetsov, D.; Moloney, J.V. (2004). "Boundary behavior of modes of Dirichlet Laplacian". Journal of Modern Optics. 51 (13): 1955–1962. Bibcode:2004JMOp...51.1955K. doi:10.1080/09500340408232504.
11. ^ B. D. Lubachevsky and F. H. Stillinger, Geometric properties of random disk packings, J. Statistical Physics 60 (1990), 561-583 http://www.princeton.edu/~fhs/geodisk/geodisk.pdf
Sinai's billiards[edit]
• Sinai, Ya. G. (1963). "[On the foundations of the ergodic hypothesis for a dynamical system of statistical mechanics]". Doklady Akademii Nauk SSSR (in Russian). 153 (6): 1261–1264. (in English, Sov. Math Dokl. 4 (1963) pp. 1818–1822).
• Ya. G. Sinai, "Dynamical Systems with Elastic Reflections", Russian Mathematical Surveys, 25, (1970) pp. 137–191.
• V. I. Arnold and A. Avez, Théorie ergodique des systèms dynamiques, (1967), Gauthier-Villars, Paris. (English edition: Benjamin-Cummings, Reading, Mass. 1968). (Provides discussion and references for Sinai's billiards.)
• D. Heitmann, J.P. Kotthaus, "The Spectroscopy of Quantum Dot Arrays", Physics Today (1993) pp. 56–63. (Provides a review of experimental tests of quantum versions of Sinai's billiards realized as nano-scale (mesoscopic) structures on silicon wafers.)
• S. Sridhar and W. T. Lu, "Sinai Billiards, Ruelle Zeta-functions and Ruelle Resonances: Microwave Experiments", (2002) Journal of Statistical Physics, Vol. 108 Nos. 5/6, pp. 755–766.
• Linas Vepstas, Sinai's Billiards, (2001). (Provides ray-traced images of Sinai's billiards in three-dimensional space. These images provide a graphic, intuitive demonstration of the strong ergodicity of the system.)
• N. Chernov and R. Markarian, "Chaotic Billiards", 2006, Mathematical survey and monographs nº 127, AMS.
Strange billiards[edit]
• T. Schürmann and I. Hoffmann, The entropy of strange billiards inside n-simplexes. J. Phys. A28, page 5033ff, 1995. PDF-Document
Bunimovich stadium[edit]
Generalized billiards[edit]
• M. V. Deryabin and L. D. Pustyl'nikov, "Generalized relativistic billiards", Reg. and Chaotic Dyn. 8(3), pp. 283–296 (2003).
• M. V. Deryabin and L. D. Pustyl'nikov, "On Generalized Relativistic Billiards in External Force Fields", Letters in Mathematical Physics, 63(3), pp. 195–207 (2003).
• M. V. Deryabin and L. D. Pustyl'nikov, "Exponential attractors in generalized relativistic billiards", Comm. Math. Phys. 248(3), pp. 527–552 (2004).
External links[edit] |
f4439085e8c27867 | adiabatic approximation
Show Summary Details
Quick Reference
An approximation used in quantum mechanics when the time dependence of parameters, such as the internuclear distance between atoms in a molecule, is slowly varying. This approximation means that the solution of the Schrödinger equation at one time goes continuously over to the solution at a later time. It was formulated by Max Born and the Soviet physicist Vladimir Alexandrovich Fock (1898–1974) in 1928. The Born-Oppenheimer approximation is an example of the adiabatic approximation.
Subjects: Chemistry — Physics.
Reference entries
|
b88ddaca92a29a3f | Sign up ×
I have a differential equation of the form $$a y'' + b y/x = E y$$ (The origin is a 1D Schrödinger equation for a potential of the form $-1/x$). I am only interested in the ground state energy, i.e. the lowest order solution.
Is there a good, systematic way to tackle this? I used a lot of hand waving:
I said that for $x \rightarrow \infty$, the potential term is negligible and the equation is a simple homogeneous 2nd order ODE with constant coefficients, which has solution $e^{-kx}$ for some $k$. So as an overall ansatz I choose $$f(x)e^{-kx}$$, which yields $$a (f'' - 2k f' + k^2 f) + b f/x = E f$$.
I then argue -- that is where the hand-waving occurs -- that the ground state would have a polynomial of the lowest possible order for $f$. A constant (order $0$) is not possible, since then nothing cancels the $1/x$ in the equation, so I try the ansatz $f(x) = x$. With that, I can indeed solve the equation and obtain conditions for $k$ and $E$:
$$-2ka + b = 0$$ $$ak^2 = E$$
This allows me to solve for $k$ and $E$.
But is there a better, more rigorous way?
share|cite|improve this question
There's always the Frobenius route... which you can use to derive the solutions in terms of confluent hypergeometric functions. – J. M. Aug 7 '11 at 17:10
Since I'm a physicist and not a mathematician, would you briefly outline that route? – Lagerbaer Aug 7 '11 at 17:12
This should be a quick review... it's also in Arfken and Weber. (FWIW, I ain't a mathematician either... :) ) – J. M. Aug 7 '11 at 17:26
Ah, okay. So what that method does is writing $f(x)$ as a power series in $x$, which generates recursive equations for the coefficients. If I set a cut-off for the degree of the polynomial, this should then reproduce my result. – Lagerbaer Aug 7 '11 at 19:59
1 Answer 1
up vote 1 down vote accepted
Let's assume that $a=1$ for simplicity. You can then, either use a CAS to solve this differential equation, or notice that it is a differential equation for a confluent hypergeometric functions $_1F_1(x)$ and $U(x)$.
Specifically the general solution to equation $y'' + \frac{b}{x} y = \mathcal{E}^2 y$ is
$$ y(x) = x e^{-x \mathcal{E}} \left( c_1 {}_1F_1(1 - \frac{b}{2\mathcal{E}}, 2, 2 x \mathcal{E}) + c_2 U( 1 - \frac{b}{2\mathcal{E}}, 2, 2 x \mathcal{E} ) \right) $$
Now, you could look up the asymptotic behavior of each independent solution (here and here) and choose indeterminates and the energy to satisfy needed boundary conditions.
You will find that $c_1$ must vanish due to decay at infinity, while $c_2$ is arbitrary. Behavior at the origin demands that $1 - \frac{b}{2\mathcal{E}}$ be a non-positive integer, giving you the spectrum. In that case the Tricomi function would degenerate into a polynomial.
share|cite|improve this answer
Your Answer
|
c0bdad411aa12cb9 | Sunday, October 12, 2014
Mind control
Here's a pre-edited version of my piece for the Observer today, with a little bit more stuff still in it and some links. This was a great topic to research, and a bit disconcerting at times too.
Be careful what you wish for. That’s what Joel, played by Jim Carrey, discovers in Charlie Kaufmann’s 2004 film Eternal Sunshine of the Spotless Mind, when he asks a memory-erasure company Lacuna Inc. to excise the recollections of a painful breakup from his mind. While the procedure is happening, Joel realizes that he doesn’t want every happy memory of the relationship to vanish, and seeks desperately to hold on to a few fragments.
The movie offers a metaphor for how we are defined by our memories, how poignant is both their recall and their loss, and how unreliable they can be. So what if Lacuna’s process is implausible? Just enjoy the allegory.
Except that selective memory erasure isn’t implausible at all. It’s already happening.
Researchers and clinicians are now using drugs to suppress the emotional impact of traumatic memories. They have been able to implant false memories in flies and mice, so that innocuous environments or smells seem to be “remembered” as threatening. They are showing that memory is not like an old celluloid film, fixed but fading; it is constantly being changed and updated, and can be edited and falsified with alarming ease.
“I see a world where we can reactivate any kind of memory we like, or erase unwanted memories”, says neuroscientist Steve Ramirez of the Massachusetts Institute of Technology. “I even see a world where editing memories is something of a reality. We’re living in a time where it’s possible to pluck questions from the tree of science fiction and ground them in experimental reality.” So be careful what you wish for.
But while it’s easy to weave capabilities like this into dystopian narratives, most of which the movies have already supplied – the authoritarian memory-manipulation of Total Recall, the mind-reading police state of Minority Report, the dream espionage of Inception – research on the manipulation of memory could offer tremendous benefits. Already, people suffering from post-traumatic stress disorder (PTSD), such as soldiers or victims of violent crime, have found relief from the pain of their dark memories through drugs that suppress the emotional associations. And the more we understand about how memories are stored and recalled, the closer we get to treatments for neurodegenerative conditions such as Alzheimer’s and other forms of dementia.
So there are good motivations for exploring the plasticity of memory – how it can be altered or erased. And while there are valid concerns about potential abuses, they aren’t so very different from those that any biomedical advance accrues. What seems more fundamentally unsettling, but also astonishing, about this work is what it tells us about us: how we construct our identity from our experience, and how our recollections of that experience can deceive us. The research, says Ramirez, has taught him “how unstable our identity can be.”
Best forgotten
Your whole being depends on memory in ways you probably take for granted. You see a tree, and recognize it as a tree, and know it is called “tree” and that it is a plant that grows. You know your language, your name, your loved ones. Few things are more devastating, to the individual and those close to them, than the loss of these everyday facts. As the memories fade, the person seems to fade with them. Christopher Nolan’s film Memento echoes the case of Henry Molaison, who, after a brain operation for epilepsy in the 1950s, lost the ability to record short-term memories. Each day his carers had to introduce themselves to him anew.
Molaison’s surgery removed a part of his brain called the hippocampus, giving a clue that this region is involved in short-term memory. Yet he remembered events and facts learnt long ago, and could be taught new ones, indicating that long-term memory is stored somewhere else. Using computer analogies for the brain is risky, but it’s reasonable here to compare our short-term memory with a computer’s ephemeral working memory or RAM, and the long-term memory with the hard drive that holds information more durably. While short-term memory is associated with the hippocampus, long-term memory is more distributed throughout the cortex. Some information is stored long-term, such as facts and events we experience repeatedly or that have an emotional association; other items vanish within hours. If you look up the phone number of a plumber, you’ll probably have forgotten it by tomorrow, but you may remember the phone number of your family home from childhood.
What exactly do we remember? Recall isn’t total – you might retain the key aspects of a significant event but not what day of the week it was, or what you were wearing, or exactly what was said. Your memories are a mixed bag: facts, feelings, sights, smells. Ramirez points out that, while Eternal Sunshine implies that all these features of a memory are bundled up and stored in specific neurons in a single location in the brain, in fact it’s now clear that different aspects are stored in different locations. The “facts”, sometimes called episodic memory, are filed in one place, the feelings in another (generally in a brain region called the amygdala). All the same, those components of the memory do each have specific addresses in the vast network of our billions of neurons. What’s more, these fragments remain linked and can be recalled together, so that the event we reconstruct in our heads is seamless, if incomplete. “Memory feels very cohesive, but in reality it’s a reconstructive process”, says Ramirez.
Given all this filtering and parceling out, it’s not surprising that memory is imperfect. “The fidelity of memory is very poor”, says psychologist Alain Brunet of McGill University in Montreal. “We think we remember exactly what happens, but research demonstrates that this is a fallacy.” It’s our need for a coherent narrative that misleads us: the brain elaborates and fills in gaps, and we can’t easily distinguish the “truth” from the invention. You don’t need fancy technologies to mess with memory – just telling someone they experienced something they didn’t, or showing them digitally manipulated photos, can be enough to seed a false conviction. That, much more than intentional falsehood, is why eye-witness accounts may be so unreliable and contradictory.
It gets worse. One of the most extraordinary findings of modern neuroscience, reported in 2000 by neurobiologist Joseph LeDoux and his colleagues at New York University, is that each time you remember something, you have to rebuild the memory again. LeDoux’s team reported that when rats were conditioned to associate a particular sound with mild electric shocks, so that they showed a “freezing” fear response when they heard the sound subsequently, this association could be broken by infusing the animals’ amygdala with a drug called anisomycin. The sound then no longer provoked fear – but only if the drug was administered within an hour or so of the memory being evoked. Anisomycin disrupts biochemical processes that create proteins, and the researchers figured that this protein manufacture was essential for restoring a memory after it has arisen. This is called reconsolidation: it starts a few minutes after recall, and takes a few hours to complete.
So those security questions asking you for the name of your first pet are even more bothersome than you thought, because each time you have to call up the answer (sorry if I just made you do it again), your brain then has to write the memory back into long-term storage. A computer analogy is again helpful. When we work on a file, the computer makes a copy of the stored version and we work on that – if the power is cut, we still have the original. But as Brunet explains, “When we remember something, we bring up the original file.” If we don’t write it back into the memory, it’s gone.
This rewriting process can, like repeated photocopying, degrade the memory a little. But LeDoux’s work showed that it also offers a window for manipulating the memory. When we call it up, we have the opportunity to change it. LeDoux found that a drug called propranolol can weaken the emotional impact of a memory without affecting the episodic content. This means that the effect of painful recollections causing PTSD can be softened. Propranolol is already known to be safe in humans: it is a beta blocker used to treat hypertension, and (tellingly) also to combat anxiety, because it blocks the action of the stress hormone epinephrine in the amygdala. A team at Harvard Medical School has recently discovered that xenon, the inert gas used as an anaesthetic, can also weaken the reconsolidation of fear memories in rats. An advantage of xenon over propranolol is that it gets in and out of the brain very quickly, taking about three minutes each way. If it works well for humans, says Edward Meloni of the Harvard team, “we envisage that patients could self-administer xenon immediately after experiencing a spontaneous intrusive traumatic memory, such as awakening from a nightmare.” The timing of the drug relative to reactivation of the trauma memory may, he says, be critical for blocking the reconsolidation process.
These techniques are now finding clinical use. Brunet uses propranolol to treat people with PTSD, including soldiers returned from active combat, rape victims and people who have suffered car crashes. “It’s amazingly simple,” he says. They give the patients a pill containing propranolol, and then about an hour later “we evoke the memory by having patients write it down and then read it out.” That’s often not easy for them, he says – but they manage it. The patients are then asked to continue reading the script regularly over the next several weeks. Gradually they find that its emotional impact fades, even though the facts are recalled clearly.
“After three or four weeks”, says Brunet, “our patients say things like ‘I feel like I’m smiling inside, because I feel like I’m reading someone else’s script – I’m no longer personally gripped by it.’” They might feel empathy with the descriptions of the terrible things that happened to this person – but that person no longer feels like them. No “talking cure” could do that so quickly and effectively, while conventional drug therapies only suppress the symptoms. “Psychiatry hasn’t cured a single patient in sixty years”, Brunet says.
These cases are extreme, but aren’t even difficult memories (perhaps especially those) part of what makes us who we are? Should we really want to get rid of them? Brunet is confident about giving these treatments to patients who are struggling with memories so awful that life becomes a torment. “We haven’t had a single person say ‘I miss those memories’”, he says. After all, there’s nothing unnatural about forgetting. “We are in part the sum of our memories, and it’s important to keep them”, Brunet says. “But forgetting is part of the human makeup too. We’re built to forget.”
Yet it’s not exactly forgetting. While propranolol and xenon can modify a memory by dampening its emotional impact, the memory remains: PTSD patients still recall “what happened”, and even the emotions are only reduced, not eliminated. We don’t yet really understand what it means to truly forget something. Is it ever really gone or just impossible to recall? And what happens when we learn to overcome fearful memories – say, letting go of a childhood fear of dogs as we figure that they’re mostly quite friendly? “Forgetting is fairly ill-defined”, says neuroscientist Scott Waddell at the University of Oxford. “Is there some interfering process that out-competes the original memory, or does the original memory disappear altogether?” Some research on flies suggests that forgetting isn’t just a matter of decay but an active process in which the old memory is taken apart. Animal experiments have also revealed the spontaneous re-emergence of memories after they were apparently eliminated by re-training, suggesting that memories don’t vanish but are just pushed aside. “It’s really not clear what is going on”, Waddell admits.
Looking into a fly’s head
That’s not so surprising, though, because it’s not fully understood how memory works in the first place. Waddell is trying to figure that out – by training fruit flies and literally looking into their brains. What makes flies so useful is that it’s easy to breed genetically modified strains, so that the role of specific genes in brain activity can be studied by manipulating or silencing them. And the fruit fly is big and complex enough to show sophisticated behavior, such as learning to associate a particular odour with a reward like sugar, while being simple enough to comprehend – it has around 100,000 neurons, compared to our many billions.
What’s more, a fruit fly’s brain is transparent enough to look right through it under the microscope, so that one can watch neural processing while the fly is alive. By attaching fluorescent molecules to particular neurons, Waddell can identify the neural circuitry linked to a particular memory. In his lab in Oxford he showed me an image of a real fly’s brain: a haze of bluish-coloured neurons, with bright green spots and filaments that are, in effect, a snapshot of a memory. The memory might be along the lines of “Ah, that smell – the last time I followed it, it led to something tasty.”
How do you find the relevant neurons among thousands of others? The key is that when neurons get active to form a memory, they advertise their state of busyness. They produce specific proteins, which can be tagged with other light-emitting proteins by genetic engineering of the respective genes. One approach is to inject benign viruses that stitch the light-emission genes right next to the gene for the protein you want to tag; another is to engineer particular cells to produce a foreign protein to which the fluorescent tags will bind. When these neurons get to work forming a memory, they light up. Ramirez compares it to the way lights in the windows of an office block at night betray the location of workers inside.
This ability to identify and target individual memories has enabled researchers like Waddell and Ramirez to manipulate them experimentally in, well, mind-boggling ways. Rather than just watching memories form by fluorescent tagging, they can use tags that act as light-activated switches to turn gene activity on or off with laser light directed down an optical fibre into the brain. This technique, called optogenetics, is driving a revolution in neuroscience, Ramirez says, because it gives researchers highly selective control over neural activity – enabling them in effect to stimulate or suppress particular thoughts and memories.
Waddell’s lab is not a good place to bring a banana for lunch. The fly store is packed with shelves of glass bottles, each full of flies feasting on a lump of sugar at the bottom. Every bottle is carefully labeled to identify the genetic strain of the insects it contains: which genes have been modified. But surely they get out from time to time, I wonder – and as if on cue, a fly buzzes past. Is that a problem? “They don’t survive for long on the outside,” Waddell reassures me.
Having spent the summer cursing the plague of flies gathering around the compost bin in the kitchen, I’m given fresh respect for these creatures when I inspect one under the microscope and see the bejeweled splendor of its red eyes. It’s only sleeping: you can anaesthetize fruit flies with a puff of carbon dioxide. That’s important for mapping neurons to memories in the microscope, because there’s not much going on in the mind of a dead fly.
These brain maps are now pretty comprehensive. We know, for example, which subset of neurons (about 2,000 in all) is involved in learning to recognize odours, and which neurons can give those smells good or bad associations. And thanks to optogenetics, researchers have been able to switch on some of these “aversive” neurons while flies smell a particular odour, so that they avoid it even though they have actually experienced nothing bad (such as shock treatment) in its presence – in other words, you might say, to stimulate a fictitious false memory. For a fly, it’s not obvious that we can call this “fear”, Waddell says, but “it’s certainly something they don’t like”. In the same way, by using molecular switches that are flipped with heat rather than light, Waddell and his colleagues were able to give flies good vibes about a particular smell. Flies display these preferences by choosing to go in particular directions when they are placed in little plastic mazes, some of them masterfully engineered with little gear-operated gates courtesy of the lab’s 3D printer.
Ramirez, working in a team at MIT led by Susumu Tonegawa, has practiced similar deceptions on mice. In an experiment in 2012 they created a fear memory in a mouse by putting it in a chamber where it experienced mild electric shocks to the feet. While this memory was being laid down, the researchers used optogenetic methods to make the corresponding neurons, located in the hippocampus, switchable with light. Then they put the mouse in a different chamber, where it seemed perfectly at ease. But when they reactivated the fear memory with light, the mouse froze: suddenly it had bad feelings about this place.
That’s not exactly implanting a false memory, however, but just reactivating a true one. To genuinely falsify a recollection, the researchers devised a more elaborate experiment. First, they placed a mouse in a chamber and labeled the neurons that recorded the memory of that place with optogenetic switches. Then the mouse was put in a different chamber and given mild shocks – but while these were delivered, the memory of the first chamber was triggered using light. When the mouse was then put back in the first chamber it froze. Its memory insisted, now without any artificial prompting, that the first chamber was a nasty place, even though nothing untoward had ever happened there. It is not too much to say that a false reality had been directly written into the mouse’s brain.
You must remember this
The problem with memory is often not so much that we totally forget something or recall it wrongly, but that we simply can’t find it even though we know it’s in there somewhere. What triggers memory recall? Why does a fly only seem to recall a food-related odour when it is hungry? Why do we feel fear only if we’re in actual danger, and not all the time? Indeed, it is the breakdown of these normal cues that produces PTSD, where the fear response gets triggered in inappropriate situations.
A good memory is largely about mastering this triggering process. Participants in memory competitions that involve memorizing long sequences of arbitrary numbers are advised to “hook” the information onto easily recalled images. A patient named Solomon Shereshevsky, studied in the early twentieth century by the neuropsychologist Alexander Luria, exploited his condition of synaesthesia – the crosstalk between different sensory experiences such as sound and colour – to tag information with colours, images, sounds or tastes so that he seemed able to remember everything he heard or read. Cases like this show that there is nothing implausible about Jorge Luis Borges’ fictional character Funes the Memorious, who forgets not the slightest detail of his life. We don’t forget because we run out of brain space, even if it sometimes feels like that.
Rather than constructing a complex system of mnemonics, perhaps it is possible simply to boost the strength of the memory as it is imprinted. “We know that emotionally arousing situations are more likely to be remembered than mundane ones”, LeDoux has explained. “A big part of the reason is that in significant situations chemicals called neuromodulators are released, and they enhance the memory storage process.” So memory sticks when the brain is aroused: emotional associations will do it, but so might exercise, or certain drugs. And because of reconsolidation, it seems possible to enhance memory after it has already been laid down. LeDoux has found that a chemical called isoproterenol has the opposite effect from propranolol on reconsolidation of memory in rats, making fear memories even stronger as they are rewritten into long-term storage in the amygdala. If it works for humans too, he speculates that the drug might help people who have “sluggish” memories.
Couldn’t we all do with a bit of that, though? Ramirez regards chemical memory enhancement as perfectly feasible in principle, and in fact there is already some evidence that caffeine can enhance long-term memory. But then what is considered fair play? No one quibbles about students going into an exam buoyed up by an espresso, but where do we draw the line?
Mind control
It’s hard to come up with extrapolations of these discoveries that are too far-fetched to be ruled out. You can tick off the movies one by one. The memory erasure of Eternal Sunshine is happening right now to some degree. And although so far we know only how to implant a false memory if it has actually been experienced in another context, as our understanding of the molecular and cellular encoding of memory improves Ramirez thinks it might be feasible to construct memories “from the ground up”, as in Total Recall or the implanted childhood recollections of the replicant Rachael in Blade Runner. As Rachael so poignantly found out, that’s the way to fake a whole identity.
If we know which neurons are associated with a particular memory, we can look into a brain and know what a person is thinking about, just by seeing which neurons are active: we can mind-read, as in Minority Report. “With sufficiently good technology you could do that”, Ramirez affirms. “It’s just a problem of technical limitations.” By the same token, we might reconstruct or intervene in dreams, as in Inception (Ramirez and colleagues called their false-memory experiment Project Inception). Decoding the thought processes of dreams is “a very trendy area, and one people are quite excited about”, says Waddell.
How about chips implanted in the brain to control neural activity, Matrix-style? Theodore Berger of the University of Southern California has implanted microchips in rats’ brains that can duplicate the role of the hippocampus in forming long-term memories, recording the neural signals involved and then playing them back. His most recent research shows that the same technique of mimicking neural signals seems to work in rhesus monkeys. The US Defense Advanced Research Projects Agency (DARPA) has two such memory-prosthesis projects afoot. One, called SUBNETS, aims to develop wireless implant devices that could treat PTSD and other combat-related disorders. The other, called RAM (Restoring Active Memories), seeks to restore memories lost through brain injury that are needed for specialized motor skills, such as how to drive a car or operate machinery. The details are under wraps, however, and it’s not clear how feasible it will be to record and replay specific memories. LeDoux professes that he can’t imagine how it could work, given that long-term memories aren’t stored in a single location. To stimulate all the right sites, says Waddell, “you’d have to make sure that your implantation was extremely specific – and I can’t see that happening.”
Ramirez says that it’s precisely because the future possibilities are so remarkable, and perhaps so unsettling, that “we’re starting this conversation today so that down the line we have the appropriate infrastructure.” Are we wise enough to know what we want to forget, to remember, or to think we remember? Do we risk blanking out formative, instructive and precious experiences, or finding ourselves one day being told, as Deckard tells Rachael in Blade Runner, “those aren’t your memories – they’re someone else’s”?
“The problems are not with the current research, but with the question of what we might be able to do in 10-15 years,” says Brunet. It’s one thing to bring in legislation to restrict abuses, just as we do for other biomedical technologies. But the hardest arguments might be about not what we prohibit but what we allow. Should individuals be allowed to edit their own memories or have false ones implanted? Ramirez is upbeat, but insists that the ethical choices are not for scientists alone to thrash out. “We all have some really big decisions ahead of us,” he says.
Thursday, October 09, 2014
Do we tell the right stories about evolution?
A tale of many electrons
In what I hope might be a timely occasion with Nobel-fever in the air, here is my leader for the latest issue of Nature Materials. This past decision was a nice one for physics, condensed matter and materials – although curiously it was a chemistry prize.
Density functional theory, invented half a century ago, now supplies one of the most convenient and popular shortcuts for dealing with systems of many electrons. It was born in a fertile period when theoretical physics stretched from abstruse quantum field theory to practical electrical engineering.
It’s often pointed out that quantum theory is not just a source of counter-intuitive mystery but also an extraordinarily effective intellectual foundation for engineering. It supplies the theoretical basis for the transistor and superconductor, for understanding molecular interactions relevant from mineralogy to biology, and for describing the basic properties of all matter, from superhard alloys to high-energy plasmas. But popular accounts of quantum physics rarely pay more than lip service to this utilitarian virtue – there is little discussion of what it took to turn the ideas of Bohr, Heisenberg and Schrödinger into a theory that works at an everyday level.
One of the milestones in that endeavour occurred 50 years ago, when Pierre Hohenberg and Walter Kohn published a paper [1] that laid the foundations of density functional theory (DFT). This provided a tool for transforming the fiendishly complicated Schrödinger equation of a many-body system such as the atomic lattice of a solid into a mathematically tractable problem that enables the prediction of properties such as structure and electrical conductivity. The milieu in which this advance was formulated was rich and fertile, and from the distance of five decades it is hard not to idealize it as a golden age in which scientists could still see through the walls that now threaten to isolate disciplines. Kohn, exiled from his native Austria as a young Jewish boy during the Nazi era and educated in Canada, was located at the heart of this nexus. Schooled in quantum physics by Julian Schwinger at Harvard amidst peers including Philip Anderson, Rolf Landauer and Joaquin Luttinger, he was also familiar with the challenges of tangible materials systems such as semiconductors and alloys. In the mid-1950s Kohn worked as a consultant at Bell Labs, where the work of John Bardeen, Walter Brattain and William Shockley on transistors a few years earlier had generated a focus on the solid-state theory of semiconductors. And his ground-breaking paper with Hohenberg came from research on alloys at the Ecole Normale Supérieure in Paris, hosted by Philippe Nozières.
Now that DFT is so familiar a technique, used not only to understand electronic structures of molecules and materials but also as a semi-classical approach for studying the atomic structures of fluids, it is easy to forget what a bold hypothesis its inception required. In principle one may write the electron density n(r) of an N-electron system as the integral over space of the N-electron wavefunction, and then to use this to calculate the total energy of the system as a functional of n(r) and the potential energy v(r) of each electron interacting with all the fixed nuclei. (A functional here is a “function of a function” – the energy is a function of the function v(r), say.) Then one could do the calculation by invoking some approximation for the N-electron wavefunction. But Kohn inverted the idea: what if you didn’t start from the complicated N-body wavefunction, but just from the spatially varying electron density n(r)? That’s to say, maybe the external potential v(r), and thus the total energy (for the ground state of the system), depend only on the equilibrium n(r)? Then, that density function is all you needed to know. As Andrew Zangwill puts it in a recent commentary on Kohn’s career [2], “This was a deep question. Walter realized he wasn’t doing alloy theory any more.”
Kohn figured out a proof of this remarkable conjecture, but it seemed so simple that he couldn’t believe it hadn’t been noticed before. So he asked Hohenberg, a post-doc in Nozières’ lab, to help. Together the pair formulated a rigorous proof of the conjecture for the case of an inhomogeneous electron gas; since their 1964 paper, several other proofs have been found. That paper was formal and understated to the point of desiccation, and one needed to pay it close attention to see how remarkable the result was. The initial response was muted, and Hohenberg moved subsequently into other areas, such as hydrodynamics, phase transitions and pattern formation.
Kohn, however, went on to develop the idea into a practical method for calculating the electronic ground states of molecules and solids, working in particular with Hong Kong-born postdoc Lu-Jeu Sham. Their crucial paper3 was much more explicit about the potential of this approach as an approximation for calculating real materials properties of solids, such as cohesive energies and elastic constants, from quantum principles. It is now one of the most highly cited papers in all of physics, but was an example of a “sleeper”: still the community took some time to wake up to what was on offer. Not until the work of John Pople in the early 1990s did chemists begin to appreciate that DFT could offer a simple and convenient way to calculate electronic structures. It was that work which led to the 1998 Nobel prize in chemistry for Pople and Kohn – incongruous for someone so immersed in physics.
Zangwill argues that DFT defies the common belief that important theories reflect the Zeitgeist: it was an idea that was not in the air at all in the 1960s, and, says Zangwill, “might be unknown today if Kohn had not created it in the mid-1960s.” Clearly that’s impossible to prove. But there’s no mistaking the debt that materials and molecular sciences owe to Kohn’s insight, and so if Zangwill is right, all the more reason to ask if we still create the right sort of environments for such fertile ideas to germinate.
1. Hohenberg, P. & Kohn, W. Phys. Rev. 136, B864-871 (1964).
2. Zangwill, A., (2014).
3. Kohn, W. & Sham, L. J. Phys. Rev. 140, A1133-1138 (1965).
Wednesday, October 08, 2014
The moment of uncertainty
As part of a feature section in the October issue of La Recherche on uncertainty, I interviewed Robert Crease, historian and philosopher of science at Stony Brook University, New York, on the cultural impact of Heisenberg’s principle. It turned out that Robert had just written a book looking at this very issue – in fact, at the cultural reception of quantum theory in general. It’s called The Quantum Moment, is coauthored by Alfred Scharff Goldhaber, and is a great read – I have written a mini-review for the next (November) issue of Prospect. Here’s the interview, which otherwise appears only in French in La Recherche. Since Robert has such a great way with words, it was one of the easiest I’ve ever done.
What led Heisenberg to formulate the uncertainty principle? Was it something that fell out of the formalism in mathematical terms?
That’s a rather dramatic story. The uncertainty principle emerged in exchange of letters between Heisenberg and Pauli, and fell out of the work that Heisenberg had done on quantum theory the previous year, called matrix mechanics. In autumn 1926, he and Pauli were corresponding about how to understand its implications. Heisenberg insisted that the only way to understand it involved junking classical concepts such as position and momentum in the quantum world. In February 1927 he visited Niels Bohr in Copenhagen. Bohr usually helped Heisenberg to think, but this time the visit didn’t have the usual effect. They grew frustrated, and Bohr abandoned Heisenberg to go skiing. One night, walking by himself in the park behind Bohr’s institute, Heisenberg had an insight. He wrote to Pauli: “One will always find that all thought experiments have this property: when a quantity p is pinned down to within an accuracy characterized by the average error p, then... q can only be given at the same time to within an accuracy characterized by the average error q1 ≈ h/p1.” That’s the uncertainty principle. But like many equations, including E = mc2 and Maxwell’s equations, its first appearance is not in its now-famous form. Anyway, Heisenberg sent off a paper on his idea that was published in May.
How did Heisenberg interpret it in physical terms?
He didn’t, really; at the time he kept claiming that the uncertainty principle couldn’t be interpreted in physical terms, and simply reflected the fact that the subatomic world could not be visualized. Newtonian mechanics is visualizable: each thing in it occupies a particular place at a particular time. Heisenberg thought the attempt to construct a visualizable solution for quantum mechanics might lead to trouble, and so he advised paying attention only to the mathematics. Michael Frayn captures this side of Heisenberg well in his play Copenhagen. When the Bohr character charges that Heisenberg doesn't pay attention to the sense of what he’s doing so long as the mathematics works out, the Heisenberg character indignantly responds, "Mathematics is sense. That's what sense is".
Was Heisenberg disturbed by the implications of what he was doing?
No. Both he and Bohr were excited about what they had discovered. From the very beginning they realized that it had profound philosophical implications, and were thrilled to be able to explore them. Almost immediately both began thinking and writing about the epistemological implications of the uncertainty principle.
Was anyone besides Heisenberg and Bohr troubled?
The reaction was mixed. Arthur Eddington, an astronomer and science communicator, was thrilled, saying that the epistemological implications of the uncertainty principle heralded a new unification of science, religion, and the arts. The Harvard physicist Percy Bridgman was deeply disturbed, writing that “the bottom has dropped clean out” of the world. He was terrified about its impact on the public. Once the implications sink in, he wrote, it would “let loose a veritable intellectual spree of licentious and debauched thinking.”
Did physicists all share the same view of the epistemological implications of quantum mechanics?
No, they came up with several different ways to interpret it. As the science historian Don Howard has shown, the notion that the physics community of the day shared a common view, one they called the “Copenhagen interpretation,” is a myth promoted in the 1950s by Heisenberg for his own selfish reasons.
How much did the public pay attention to quantum theory before the uncertainty principle?
Not much. Newspapers and magazines treated it as something of interest because it excited physicists, but as far too complicated to explain to the public. Even philosophers didn’t see quantum physics as posing particularly interesting or significant philosophical problems. The uncertainty principle’s appearance in 1927 changed that. Suddenly, quantum mechanics was not just another scientific theory – it showed that the quantum world works very differently from the everyday world.
How did the uncertainty principle get communicated to a broader public?
It took about a year. In August 1927, Heisenberg, who was not yet a celebrity, gave a talk at a meeting of the British Association for the Advancement of Science, but it sailed way over the heads of journalists. The New York Times’s science reporter said trying to explain it to the public was like “trying to tell an Eskimo what the French language is like without talking French.” Then came a piece of luck. Eddington devoted a section to the uncertainty principle in his book The Nature of the Physical World, published in 1928. He was a terrific explainer, and his imagery and language were very influential.
How did the public react?
Immediately and enthusiastically. A few days after October 29, 1929, the New York Times, tongue-in-cheek, invoked the uncertainty principle as the explanation for the stock market crash.
And today?
Heisenberg and his principle still feature in popular culture. In fact, thanks to the uncertainty principle, I think I’d argue that Heisenberg has made an even greater impact on popular culture than Einstein. In the American television drama series Breaking Bad, 'Heisenberg' is the pseudonym of the protagonist, a high school chemistry teacher who manufactures and sells the illegal drug crystal methamphetamine. The religious poet Christian Wiman, in his recent book about facing cancer, writes that "to feel enduring love like a stroke of pure luck" amid "the havoc of chance" makes God "the ultimate Uncertainty Principle." In The Ascent of Man, the Polish-British scientist Jacob Bronowski calls the uncertainty principle the Principle of Tolerance. There’s even an entire genre of uncertainty principle jokes. A police officer pulls Heisenberg over and says, "Did you know that you were going 90 miles an hour?" Heisenberg says, "Thanks. Now I'm lost."
Has the uncertainty principle been used for serious philosophical purposes?
Yes. Already in 1929, John Dewey wrote about it to promote his ideas about pragmatism, and in particular his thoughts about the untenability of what he called the “spectator theory of knowledge.” The literary critic George Steiner has used the uncertainty principle to describe the process of literary criticism – how it involves transforming the “object” – that is, text – interpreted, and delivers it differently to the generation that follows. More recently, the Slovene philosopher Slavoj Žižek has devoted attention to the philosophical implications of the uncertainty principle.
Some popular culture uses of the uncertainty principle are off the wall. How do you tell meaningful uses from the bogus ones?
It’s not easy. Popular culture often uses scientific terms in ways that are pretentious, erroneous, wacky, or unverifiable. It’s nonsense to apply the uncertainty principle to medicines or self-help issues, for instance. But how is that different from Steiner using it to describe the process of literary criticism?
Outside of physics, has our knowledge that uncertainty is a feature of the subatomic world, and the uses that it has been put by writers and philosophers, helped to change our worldview in any way?
I think so. The contemporary world does not always feel smooth, continuous, and law-governed, like the Newtonian World. Our world instead often feels jittery, discontinuous, and irrational. That has sometimes prompted writers to appeal to quantum imagery and language to describe it. John Updike’s characters, for instance, sometimes appeal to the uncertainty principle, while Updike himself did so in speaking of the contemporary world as full of “gaps, inconsistencies, warps, and bubbles in the surface of circumstance.” Updike and other writers and poets have found this imagery metaphorically apt.
The historians Betty Dobbs and Margaret Jacob have remarked that the Newtonian Moment provided “the material and mental universe – industrial and scientific – in which most Westerners and some non-Westerners now live, one aptly described as modernity.” But that universe is changing. Quantum theory showed that at a more fundamental level the world is not Newtonian at all, but governed by notions such as chance, probability, and uncertainty.
Robert Crease’s book (with Alfred S. Goldhaber) The Quantum Moment: How Planck, Bohr, Einstein, and Heisenberg Taught Us to Love Uncertainty will be published by Norton in October 2014.
Uncertain about uncertainty
This is the English version of the cover article (in French) of the latest issue of La Recherche (October). It’s accompanied by an interview that I conducted with Robert Crease about the cultural impact of the uncertainty principle, which I’ll post next.
If there’s one thing most people know about quantum physics, it’s that it is uncertain. There’s a fuzziness about the quantum world that prevents us from knowing everything about it with absolute detail and clarity. Almost 90 years ago, the German physicist Werner Heisenberg pointed this out in his famous Uncertainty Principle. Yet over the few years there has been heated debate among physicists about just what Heisenberg meant, and whether he was correct. The latest experiments seem to indicate that one version of the Uncertainty Principle presented by Heisenberg might be quite wrong, and that we can get a sharper picture of quantum reality than he thought.
In 1927 Heisenberg argued that we can’t measure all the attributes of a quantum particle at the same time and as accurately as we like [1]. In particular, the more we try to pin down a particle’s exact location, the less accurately we can measure its speed, and vice versa. There’s a precise limit to this certainty, Heisenberg said. If the uncertainty is position is denoted Δx, and the uncertainty in momentum (mass times velocity) is Δp, then their product ΔxΔp can be no smaller than ½h, where h [read this as h bar] is the fundamental constant called Planck’s constant, which sets the scale of the ‘granularity’ of the quantum world – the size of the ‘chunks’ into which energy is divided.
Where does this uncertainty come from? Heisenberg’s reasoning was mathematical, but he felt he needed to give some intuitive explanation too. For something as small and delicate as a quantum particle, he suggested, it is virtually impossible to make a measurement without disturbing and altering what we’re trying to measure. It we “look” at an electron by bouncing a photon of light off it in a microscope, that collision will change the path of the electron. The more we try to reduce the intrinsic inaccuracy or “error” of the measurement, say by using a brighter beam of photons, the more we create a disturbance. According to Heisenberg, error (Δe) and disturbance (Δd) are also related by an uncertainty principle in which ΔeΔd can’t be smaller than ½h.
The American physicist Earle Hesse Kennard showed very soon after Heisenberg’s original publication that in fact his thought experiment is superfluous to the issue of uncertainty in quantum theory. The restriction on precise knowledge of both speed and position is an intrinsic property of quantum particles, not a consequence of the limitations of experiments. All the same, might Heisenberg’s “experimental” version of the Uncertainty Principle – his relationship between error and disturbance – still be true?
“When we explain the Uncertainty Principle, especially to non-physicists,” says physicist Aephraim Steinberg of the University of Toronto in Canada, “we tend to describe the Heisenberg microscope thought experiment.” But he says that, while everyone agrees that measurements disturb systems, many physicists no longer think that Heisenberg’s equation relating Δe and Δd describes that process adequately.
Japanese physicist Masanao Ozawa of Nagoya University was one of the first to question Heisenberg. In 2003 he argued that it should be possible to defeat the apparent limit on error and disturbance [2]. Ozawa was motivated by a debate that began in the 1980s on the accuracy of measurements of gravity waves, the ripples in spacetime predicted by Einstein’s theory of general relativity and expected to be produced by violent astrophysical events such as those involving black holes. No one has yet detected a gravity wave, but the techniques proposed to do so entail measuring the very small distortions in space that will occur when such a wave passes by. These disturbances are so tiny – fractions of the size of atoms – that at first glance the Uncertainty Principle would seem to determine if they are feasible at all. In other words, the accuracy demanded in some modern experiments like this means that this question of how measurement disturbs the system has real, practical ramifications.
In 1983 Horace Yuen of Northwestern University in Illinois suggested that, if gravity-wave measurement were done in a way that barely disturbed the detection system at all, the apparently fundamental limit on accuracy dictated by Heisenberg’s error-disturbance relation could be beaten. Others disputed that idea, but Ozawa defended it. This led him to reconsider the general question of how experimental error is related to the degree of disturbance it involves, and in his 2003 paper he proposed a new relationship between these two quantities in which two other terms were added to the equation. In other words, ΔeΔd + A + Bh/2, so that ΔeΔd itself could be smaller than h/2 without violating the limit..
Last year, Cyril Branciard of the University of Queensland in Australia (now at the CNRS Institut Néel at Grenoble) tightened up Ozawa’s new uncertainty equation [3]. “I asked whether all values of Δe and Δd that satisfy his relation are allowed, or whether there could be some values that are nevertheless still forbidden by quantum theory”, Branciard explains. “I showed that there are actually more values that are forbidden. In other words, Ozawa's relation is ‘too weak’.”
But Ozawa’s relationship had by then already been shown to give an adequate account of uncertainty for most purposes, since in 2012 it was put to the test experimentally by two teams [4,5]. Steinberg and his coworkers in Toronto figured out how to measure the quantities in Ozawa’s equation for photons of infrared laser light travelling along optical fibres and being sensed by detectors. They used a way of detecting the photons that perturbed their state as little as possible, and found that indeed they could exceed the relationship between precision and disturbance proposed by Heisenberg but not that of Ozawa. Meanwhile, Ozawa himself teamed up with a team at the Vienna University of Technology led by Yuji Hasegawa, who made measurements on the quantum properties of a beam of neutrons passing through a series of detectors. They too found that the measurements could violate the Heisenberg limit but not Ozawa’s.
Very recent experiments have confirmed that conclusion with still greater accuracy, verifying Branciard’s relationships too [6,7]. Branciard himself was a collaborator on one of those studies, and he says that “experimentally we could get very close indeed to the bounds imposed by my relations.”
Doesn’t this prove that Heisenberg was wrong about how error is connected to disturbance in experimental measurements? Not necessarily. Last year, a team of European researchers claimed to have a theoretical proof that in fact this version of Heisenberg’s Uncertainty Principle is correct after all [8]. They argued that Ozawa’s theory, and the experiments testing it, were using the wrong definitions of error. So they might be correct in their own terms, but weren’t really saying anything about Heisenberg’s error-disturbance principle. As team member Paul Busch of the University of York in England puts it, “Ozawa effectively proposed a wrong relationship between his own definitions of error and disturbance, wrongly ascribed it to Heisenberg, then showed how to fix it.”
So Heisenberg was correct after all in the limits he set on the tradeoff, argues Busch: “if the error is kept small, the disturbance must be large.”
Who is right? It seems to depend on exactly how you pose the question. What, after all, does measurement error mean? If you make a single measurement, there will be some random error that reflects the limits on the accuracy of your technique. But that’s why experimentalists typically make many measurements on the same system, so that you average out some of the randomness. Yet surely, some argue, the whole spirit of Heisenberg’s original argument was about making measurements of different properties on a particular, single quantum object, not averages for a whole bunch of such objects?
It now seems that Heisenberg’s limit on how small the combined uncertainty can be for error and disturbance holds true if you think about averages of many measurements, but that Ozawa’s smaller limit applies if you think about particular quantum states. In the first case you’re effectively measuring something like the “disturbing power” of a specific instrument; in the second case you’re quantifying how much we can know about an individual state. So whether Heisenberg was right or not depends on what you think he meant (and perhaps on whether you think he even recognized the difference).
As Steinberg explains, Busch and colleagues “are really asking how much a particular measuring apparatus is capable of disturbing a system, and they show that they get an equation that looks like the familiar Heisenberg form. We think it is also interesting to ask, as Ozawa did, how much the measuring apparatus disturbs one particular system. Then the less restrictive Ozawa-Branciard relations apply.”
Branciard agrees with Steinberg that this isn’t a question of who’s right and who’s wrong, but just a matter of how you make your definitions. “The two approaches simply address different questions. They each argue that the problem they address was probably the one Heisenberg had in mind. But Heisenberg was simply not clear enough on what he had in mind, and it is always dangerous to put words in someone else's mouth. I believe both questions are interesting and worth studying.”
There’s a broader moral to be drawn, for the debate has highlighted how quantum theory is no longer perceived to reveal an intrinsic fuzziness in the microscopic world. Rather, what the theory can tell you depends on what exactly you want to know and how you intend to find out about it. It suggests that “quantum uncertainty” isn’t some kind of resolution limit, like the point at which objects in a microscope look blurry, but is to some degree chosen by the experimenter. This fits well with the emerging view of quantum theory as, at root, a theory about information and how to access it. In fact, recent theoretical work by Ozawa and his collaborators turns the error-disturbance relationship into a question about the cost of gaining information about one property of a quantum system on the other properties of that system [9]. It’s a little like saying that you begin with a box that you know is red and think weighs one kilogram – but if you want to check that weight exactly, you weaken the link to redness, so that you can’t any longer be sure that the box you’re weighing is a red one. The weight and the colour start to become independent pieces of information about the box.
If this seems hard to intuit, that’s just a reflection of how interpretations of quantum theory are starting to change. It appears to be telling us that what we can know about the world depends on how we ask. To that extent, then, we choose what kind of a world we observe.
The issue isn’t just academic, since an approach to quantum theory in which quantum states are considered to encode information is now starting to produce useful technologies, such as quantum cryptography and the first prototype quantum computers. “Deriving uncertainty relations for error-disturbance or for joint measurement scenarios using information-theoretical definitions of errors and disturbance has a great potential to be useful for proving the security of cryptographic protocols, or other information-processing applications”, says Branciard. “This is a very interesting and timely line of research.”
3. C. Branciard, Proc. Natl. Acad. Sci. U.S.A. 110, 6742 (2013).
4. J. Erhart, S. Sponar, G. Sulyok, G. Badurek, M. Ozawa & Y. Hasegawa, Nat. Phys. 8, 185 (2012).
5. L. A. Rozema, A. Darabi, D. H. Mahler, A. Hayat, Y. Soudagar & A. M. Steinberg, Phys. Rev. Lett. 109, 100404 (2012).
6. F. Kandea, S.-Y. Baek, M. Ozawa & K. Edamatsu, Phys. Rev Lett. 112, 020402 (2014).
7. M. Ringbauer, D. N. Biggerstaff, M. A. Broome, A. Fedrizzi, C. Branciard & A. G. White, Phys. Rev. Lett. 112, 020401 (2014).
9. F. Buscemi, M. J. W. Hall, M. Ozawa & M. W. Wilde, Phys. Rev. Lett. 112, 050401 (2014).
Tuesday, October 07, 2014
Waiting for the green (and blue) light
This was intended as a "first response" to the Nobel announcement this morning, destined for the Prospect blog. But as it can take a little while for things to appear there, here it is anyway while the news is still ringing in the air. I'm delighted by the choice.
Did you notice when traffic lights began to change colour? The green “go” light once was once a yellowish pea green, but today it has a turquoise hue. And whereas the lights would switch with a brief moment of fading up and down, now they blink on and off in an instant.
I will be consigning myself to the farthest reaches of geekdom by admitting this, but I used to feel a surge of excitement whenever, a decade or so ago, I noticed these new-style traffic lights. That’s because I knew I was witnessing the birth of a new age of light technology. Even if traffic lights didn’t press your buttons, the chances are that you felt the impact of the same innovations in other ways, most notably when the definition of your DVD player got a boost from the introduction of Blu-Ray technology, which happened about a decade ago. What made the difference was the development of a material that could be electrically stimulated into emitting bright blue light: the key component of blue light-emitting diodes (LEDs), used in traffic lights and other full-colour signage displays, and of lasers, which read the information on Blu-Ray DVDs.
It’s for such reasons that this year’s Nobel laureates in physics have genuinely changed the world. Japanese scientists Isamu Akasaki, Hiroshi Amano and Shuji Nakamura only perfected the art of making blue-light-emitting semiconductor devices in the 1990s, and as someone who watched that happen I still feel astonished at how quickly this research progressed from basic lab work to a huge commercial technology. By adding blue (and greenish-blue) to the spectrum of available colours, these Japanese researchers have transformed LED displays from little glowing dots that simply told you if the power was on or off to full-colour screens in which the old red-green-blue system of colour televisions, previously produced by firing electron beams at phosphor materials on the screen, can now be achieved instead with compact, low-power and ultra-bright electronics.
It’s because LEDs need much less power than conventional incandescent light bulbs that the invention of blue LEDs is ultimately so important. Sure, they also switch faster, last longer and break less easily than old-style bulbs – you’ll see fewer out-of-service traffic lights these days – but the low power requirements (partly because far less energy is wasted as heat) mean that LED light sources are also good for the environment. Now that they can produce blue light too, it’s possible to make white-light sources from a red-green-blue combination that can act as regular lighting sources for domestic and office use. What’s more, that spectral mixture can be tuned to simulate all kinds of lighting conditions, mimicking daylight, moonlight, candle-light or an ideal spectrum for plant growth in greenhouses. The recent Making Colour exhibition at the National Gallery in London featured a state-of-the-art LED lighting system to show how different the hues of a painting can seem under different lighting conditions.
As with so many technological innovations, the key was finding the right material. Light-emitting diodes are made from semiconductors that convert electrical current into light. Silicon is no good at doing this, which is why it has been necessary to search out other semiconductors that are relatively inexpensive and compatible with the silicon circuitry on which all microelectronics is based. For red and yellow-green light that didn’t prove so hard: semiconductors such as gallium arsenide and gallium aluminium arsenide have been used since the 1960s for making LEDs and semiconductor lasers for optical telecommunications. But getting blue light from a semiconductor proved much more elusive. From the available candidates around the early 1990s, both Akasaki and Amano at Nagoya University and Nakamura at the chemicals company Nichia put their faith in a material called gallium nitride. It seemed clear that this stuff could be made to emit light at blue wavelengths, but the challenge was to grow crystals of sufficient quality to do that efficiently – if there were impurities or flaws in the crystal, it wouldn’t work well enough. Challenges of this kind are typically an incremental business rather than a question of some sudden breakthrough: you have to keep plugging away and refining your techniques, improving the performance of your system little by little.
Nakamura’s case is particularly appealing because Nichia was a small, family-run company on the island of Shikoku, generally considered a rural backwater – not the kind of place you would expect to beat the giants of Silicon Valley in a race for such a lucrative goal. It was his conviction that gallium nitride really was the best material for the job that kept him going.
The Nobel committee has come up trumps here – it’s a choice that rewards genuinely innovative and important work, which no one will grumble about, and which in retrospect seems obvious. And it’s a reminder that physics is everywhere, not just in CERN and deep space. |
32ce436be3ad95cf | Improving Students' Understanding of Quantum Mechanics Documents
Main Document
Improving Students' Understanding of Quantum Mechanics
written by Chandralekha Singh, Mario Belloni, and Wolfgang Christian
Richard Feynman once famously stated that nobody understands quantum mechanics. He was, of course, referring to the many strange, unintuitive foundational aspects of quantum theory such as its inherent indeterminism and state reduction during measurement according to the Copenhagen interpretation. But despite its underlying fundamental mysteries, the theory has remained a cornerstone of modern physics. Most physicists, as students, are introduced to quantum mechanics in a modern-physics course, take quantum mechanics as advanced undergraduates, and then take it again in their first year of graduate school. One might think that after all this instruction, students would have become certified quantum mechanics, able to solve the Schrödinger equation, manipulate Dirac bras and kets, calculate expectation values, and, most importantly, interpret their results in terms of real or thought experiments. That sort of functional understanding of quantum mechanics is quite distinct from the foundational issues alluded to by Feynman.
Published August 1, 2006
Last Modified June 22, 2008 |
b54d45cfeee6f75f | From Wikipedia, the free encyclopedia
Jump to: navigation, search
This solution of the vibrating drum problem is, at any point in time, an eigenfunction of the Laplace operator on a disk.
In mathematics, an eigenfunction of a linear operator, A, defined on some function space, is any non-zero function f in that space that returns from the operator exactly as is, except for a multiplicative scaling factor. More precisely, one has
A f = \lambda f
for some scalar, λ, the corresponding eigenvalue. The solution of the differential eigenvalue problem also depends on any boundary conditions required of f. In each case there are only certain eigenvalues λ = λn (n = 1, 2, 3, ...) that admit a corresponding solution for f = fn (with each fn belonging to the eigenvalue λn) when combined with the boundary conditions. Eigenfunctions are used to analyze A.
For example, fk (x) = ekx is an eigenfunction for the differential operator
A = \frac{d^2}{dx^2} - \frac{d}{dx}
for any value of k, with corresponding eigenvalue λ = k2k. If boundary conditions are applied to this system (e.g., f = 0 at two physical locations in space), then only certain values of k = kn satisfy the boundary conditions, generating corresponding discrete eigenvalues \lambda_n=k_n^2-k_n.
Specifically, in the study of signals and systems, the eigenfunction of a system is the signal f (t) which when input into the system, produces a response y(t) = λ f (t) with the complex constant λ.[1]
Derivative operator[edit]
A widely used class of linear operators acting on function spaces are the differential operators on function spaces. As an example, on the space C of infinitely differentiable real functions of a real argument t, the process of differentiation is a linear operator since
\frac{d}{dt}(af+bg) = a \frac{df}{dt} + b \frac{dg}{dt}, \qquad f,g \in C^{\infty}, \quad a,b \in \mathbf{R}.
The eigenvalue equation for a linear differential operator D in C is then a differential equation
D f = \lambda f
The functions that satisfy this equation are commonly called eigenfunctions. For the derivative operator d/dt, an eigenfunction is a function that, when differentiated, yields a constant times the original function. That is,
\frac{d}{dt} f(t) = \lambda f(t)
for all t. This equation can be solved for any value of λ. The solution is an exponential function
f(t) = Ae^{\lambda t}.
The derivative operator is defined also for complex-valued functions of a complex argument. In the complex version of the space C, the eigenvalue equation has a solution for any complex constant λ. The spectrum of the operator d/dt is therefore the whole complex plane. This is an example of a continuous spectrum.
Vibrating strings[edit]
The shape of a standing wave in a string fixed at its boundaries is an example of an eigenfunction of a differential operator. The admissible eigenvalues are governed by the length of the string and determine the frequency of oscillation.
Let h(x, t) denote the sideways displacement of a stressed elastic chord, such as the vibrating strings of a string instrument, as a function of the position x along the string and of time t. From the laws of mechanics, applied to infinitesimal portions of the string, one can deduce that the function h satisfies the partial differential equation
\frac{\partial^2 h}{\partial t^2} = c^2\frac{\partial^2 h}{\partial x^2},
which is called the (one-dimensional) wave equation. Here c is a constant that depends on the tension and mass of the string.
This problem is amenable to the method of separation of variables. If we assume that h(x, t) can be written as the product of the form X(x)T(t), we can form a pair of ordinary differential equations:
\frac{d^2}{dx^2}X=-\frac{\omega^2}{c^2}X \qquad \frac{d^2}{dt^2}T=-\omega^2 T.
Each of these is an eigenvalue equation, for eigenvalues -\tfrac{\omega^2}{c^2} and ω2, respectively. For any values of ω and c, the equations are satisfied by the functions
X(x) = \sin \left(\frac{\omega x}{c} + \varphi \right),
T(t) = \sin(\omega t + \psi),
where φ and ψ are arbitrary real constants. If we impose boundary conditions (that the ends of the string are fixed with X(x) = 0 at x = 0 and x = L, for example) we can constrain the eigenvalues. For those boundary conditions, we find sin(φ) = 0, and so the phase angle φ = 0 and
\sin\left(\frac{\omega L}{c}\right) = 0.
Thus, the constant ω is constrained to take one of the values ωn = ncπ/L, where n is any integer. Thus, the clamped string supports a family of standing waves of the form
h(x,t) = \sin \left (\frac{n\pi x}{L} \right )\sin(\omega_n t).
From the point of view of our musical instrument, the frequency ωn is the frequency of the n-th harmonic, which is called the (n − 1)-th overtone.
Quantum mechanics[edit]
Eigenfunctions play an important role in many branches of physics. An important example is quantum mechanics, where the Schrödinger equation
H\psi = E \psi,
has solutions of the form
\psi(t) = \sum_k e^{-\frac{i E_k t}{\hbar}} \varphi_k,
where φk are eigenfunctions of the operator H with eigenvalues Ek. The fact that only certain eigenvalues Ek with associated eigenfunctions φk satisfy Schrödinger's equation leads to a natural basis for quantum mechanics and the periodic table of the elements, with each Ek an allowable energy state of the system. The success of this equation in explaining the spectral characteristics of hydrogen is considered one of the greatest triumphs of 20th century physics.
Since the Hamiltonian operator H is a Hermitian Operator, its eigenfunctions are orthogonal functions. This is not necessarily the case for eigenfunctions of other operators (such as the example A mentioned above). Orthogonal functions fi (i = 1, 2, ...) have the property that
0 = \int \overline{f_i} f_j
where fi is the complex conjugate of fi.
whenever ij, in which case the set { fi | iI} is said to be orthogonal. Also, it is linearly independent.
1. ^ Bernd Girod, Rudolf Rabenstein, Alexander Stenger, Signals and systems, 2nd ed., Wiley, 2001, ISBN 0-471-98800-6 p. 49
See also[edit] |
2cab76e2af7da67a | This article is part of the series Advanced Materials Nanocharacterization.
Open Access Nano Express
Scaling properties of ballistic nano-transistors
Ulrich Wulf*, Marcus Krahlisch and Hans Richter
Author affiliations
BTU Cottbus, Fakultät 1, Postfach 101344, 03013 Cottbus, Germany
For all author emails, please log on.
Citation and License
Nanoscale Research Letters 2011, 6:365 doi:10.1186/1556-276X-6-365
Received:5 November 2010
Accepted:28 April 2011
Published:28 April 2011
© 2011 Wulf et al; licensee Springer.
Recently, we have suggested a scale-invariant model for a nano-transistor. In agreement with experiments a close-to-linear thresh-old trace was found in the calculated ID - VD-traces separating the regimes of classically allowed transport and tunneling transport. In this conference contribution, the relevant physical quantities in our model and its range of applicability are discussed in more detail. Extending the temperature range of our studies it is shown that a close-to-linear thresh-old trace results at room temperatures as well. In qualitative agreement with the experiments the ID - VG-traces for small drain voltages show thermally activated transport below the threshold gate voltage. In contrast, at large drain voltages the gate-voltage dependence is weaker. As can be expected in our relatively simple model, the theoretical drain current is larger than the experimental one by a little less than a decade.
In the past years, channel lengths of field-effect transistors in integrated circuits were reduced to arrive at currently about 40 nm [1]. Smaller conventional transistors have been built [2-9] with gate lengths down to 10 nm and below. As well-known with decreasing channel length the desired long-channel behavior of a transistor is degraded by short-channel effects [10-12]. One major source of these short-channel effects is the multi-dimensional nature of the electro-static field which causes a reduction of the gate voltage control over the electron channel. A second source is the advent of quantum transport. The most obvious quantum short-channel effect is the formation of a source-drain tunneling regime below threshold gate voltage. Here, the ID - VD-traces show a positive bending as opposed to the negative bending resulting for classically allowed transport [13,14]. The source-drain tunneling and the classically allowed transport regime are separated by a close-to linear threshold trace (LTT). Such a behavior is found in numerous MOSFETs with channel lengths in the range of a few tens of nanometers (see, for example, [2-9]).
Starting from a three-dimensional formulation of the transport problem it is possible to construct a one-dimensional effective model [14] which allows to derive scale-invariant expressions for the drain current [15,16]. Here, the quantity arises as a natural scaling length for quantum transport where εF is the Fermi energy in the source contact and m* is the effective mass of the charge carriers. The quantum short-channel effects were studied as a function of the dimensionless characteristic length l = L/λ of the transistor channel, where L is its physical length.
In this conference contribution, we discuss the physics of the major quantities in our scale-invariant model which are the chemical potential, the supply function, and the scale-invariant current transmission. We specify its range of applicability: generally, for a channel length up to a few tens of nanometers a LTT is definable up to room temperature. For higher temperatures, a LTT can only be found below a channel length of 10 nm. An inspection of the ID - VG-traces yields in qualitative agreement with experiments that at low drain voltages transport becomes thermally activated below the threshold gate voltage while it does not for large drain voltages. Though our model reproduces interesting qualitative features of the experiments it fails to provide a quantitative description: the theoretical values are larger than the experimental ones by a little less than a decade. Such a finding is expected for our simple model.
Tsu-Esaki formula for the drain current
In Refs. [13,14], the transport problem in a nano-FET was reduced to a one-dimensional effective problem invoking a "single-mode abrupt transition" approximation. Here, the electrons move along the transport direction in an effective potential given
(see Figure 1b). The energy zero in Equation 1 coincides with the position of the conduction band minimum in the highly n-doped source contact. As shown in [14]
thumbnailFigure 1. Generic n-channel nano-field effect transistor. (a) Schematic representation. (b) One-dimensional effective potential Veff.
where Ek = 1 is the bottom of the lowest two-dimensional subband resulting in the z-confinement potential of the electron channel at zero drain voltage (see Figure 4b of Ref. [13]). The parameter W is the width of the transistor. Finally, VD = eUD is the drain potential at drain voltage UD which is assumed to fall off linearly.
Experimentally, one measures in a wide transistor the current density J, which is the current per width of the transistor that we express as
Here is the number of equivalent conduction band minima ('valleys') in the electron channel and I0 = 2F/h. In Refs. [15,16] a scale-invariant expression
was derived. Here, m = μ/εF is the normalized chemical potential in the source contact, vD = VD/εF is the normalized drain voltage, and vG = VG/εF is the normalized gate voltage. As illustrated in Figure 1(b) the gate voltage is defined as the energy difference μ - V0 = VG, i.e., for VG > 0 the transistor operates in the ON-state regime of classically allowed transport and for VG < 0 in the source-drain tunneling regime. The control variable VG is used to eliminate the unknown variable V0. For the chemical potential in the source contact one finds (see next section)
where u = kBT/εF is the normalized thermal energy. Equation 4 has the form of a Tsu-Esaki formula with the normalized supply function
Here, F-1/2 is the Fermi-Dirac integral of order -1/2 and is the inverse function of F1/2. The effective current transmission depends on which is the normalized energy of the electron motion in the y-z-plane while is their energy in the x-direction. In the next sections, we will discuss the occurring quantities in detail.
Chemical potential in source- and drain-contact
For a wide enough transistor and a sufficient junction depth a (see Figure 1) the electrons in the contacts can be treated as a three-dimensional non-interacting electron gas. Furthermore, we assume that all donor impurities of density Ni are ionized. From charge neutrality it is then obtained that the electron density n0 is independent of the temperature and given by
Here me is the effective mass and NV is the valley-degeneracy factor in the contacts, respectively. In the zero temperature limit a Sommerfeld expansion of the Fermi-Dirac integral leads to
Equating 7 and 8 results in
which is identical with (5) and plotted in Figure 2. As well-known, with increasing temperature the chemical potential falls off because the high-energy tail of the Fermi-distribution reaches up to ever higher energies.
thumbnailFigure 2. Normalized chemical potential vs. thermal energy according to Equation 9 in green solid line and parabolic approximation in red dash-dotted line.
Supply function
As shown in Ref. [14] the supply function for a wide transistor can be written as
This expression can be interpreted as the partition function (loosely speaking the "number of occupied states") in the grand canonic ensemble of a non-interacting homogeneous three-dimensional electron gas in the subsystem of electrons with a given lateral wave vector (ky, kz) yielding the energy in the y-z-direction. Formally equivalent it can be interpreted as the full partition function in the grand canonic ensemble of a one-dimensional electron gas at the chemical potential μ - ε. Performing the limit the Riemann sum in the variable can be replaced by the Fermi-Dirac integral F-1/2. It results that
with the normalized transistor width w = W/λ. For the scaling of the supply function in Equation 11 we define (see Ref. [14])
where and we use the identity V0= εF = m - vG. For the source contact we write
leading to the first factor in the square bracket of the Tsu-Esaki equation 4. In the drain contact, the chemical potential is lower by the factor VD. Replacing μ μ - VD yields
Below we will show that for transistor operation the low temperature limit is relevant (see Figure 2). Here, one may apply in leading order (resulting from a Sommerfeld expansion) and F-1/2(-x → ∞) → exp (x). Since V0 > 0 the factor vG - m is negative and we obtain from (12)
From Figure 3 it is seen that for ε below the chemical potential the supply function is well described by the square-root dependence in the limit. If ε lies above the chemical chemical one obtains the limit which is a small exponential tail due to thermal activation.
thumbnailFigure 3. Supply function in the source contact (see Equation 6) for u = 0.1 and vG = 0 (black line), low-temperature limit according to Equation 15 for α < 0 (red dashed line) and α > 0 (green dashed line). Because of the small temperature m(u) ~ 1 so that occurs at .
Current transmission
The effective current transmission in Equation 16 is given y
It is calculated from the scattering solutions of the scaled one-dimensional Schrödinger equation
with β = 2m*V0L2/ħ2 = l2(m - vG), and ŷ = y/L. The scaled effective potential is given by , , and ,where . As usual, the scattering functions emitted from the source contact obey the asymptotic conditions and
with and .
As can be seen from Figure 4, around the current transmission changes from around zero to around one. For weak barriers there is a relatively large current transmission below one leading to drain leakage currents. For strong barriers this remnant transmission vanishes and we can approximate the current transmission by an ideal one.
thumbnailFigure 4. Scaled effective model. (a) Scaled effective potential. (b) Effective current transmission at u = 0.1, vD = 0.5, and vG = 0 ( = 0.504 and m = 0.992). The considered characteristic lengths are l = 4 (red, weak barrier, β = 15.87) and l = 25 (green, strong barrier, β = 619.8). The ideal limit (Equation 19) in blue line.
To a large extent the Fowler Nordheim oscillations in the numerical transmission average out performing the integration in Equation 4.
Parameters in experimental nano-FETs
Heavily doped contacts
In the heavily doped contacts the electrons can be approximated as a three-dimensional non-interacting Fermi gas. Then from (8) the Fermi energy above the bottom of the conduction band is given by
For n++-doped Si contacts the valley-degeneracy is NV = 6 and the effective mass is taken as . Here m1 = 0.19m0 and m2 = 0.98m0 are the effective masses corresponding to the principle axes of the constant energy ellipsoids. In our later numerical calculations we set εF = 0.35 eV assuming a level of source-doping as high as Ni = n0 = 1021 cm-3.
Electron channel
In the electron channel a strong lateral subband quantization exists As well-known [17] at low temperatures only the two constant energy ellipsoids with the heavy mass m2 perpendicular to the (100)-interface are occupied leading to a valley degeneracy of gv = 2. The in-plane effective mass is therefore the light mass m* = m1 entering the relation
Here εF = 0.35 eV was assumed. One then has in Equation 3 I0 = ~ 27μA and with λ ~ 1 nm as well as = 2 one obtains J0 = 5.4 × 104 μA/μm.
Drain characteristics
Typical drain characteristics are plotted in Figure 5 for a low temperature (u = 0.01) and at room temperature (u = 0.1). It is seen that for both the temperatures a LTT can be identified. We define the LTT as the j - vD trace which can be best fitted with a linear regression j = σthvD in the given interval 0 ≤ vD ≤ 2. The best fit is determined by the minimum relative mean square deviation. The gate voltage associated with the LTT is denoted with . It turns out that at room temperature lies slightly above zero and at low temperatures slightly below (see Figure 5c). In general, the temperature dependence of the drain current is small. The most significant temperature effect is the enhancement of the resonant Fowler-Nordheim oscillations found at negative vG at low temperatures. From Figure 5d, it can be taken that the slope of the LTT σth decreases with increasing l and increasing temperature. For "hot" transistors (u = 0.2) a LTT can only be defined up to l ~ 10.
thumbnailFigure 5. Calculated drain characteristics for l = 10, vG starting from 0.5 with decrements of 0.1 (solid lines) at the temperature (a) u = 0.1 and (b) u = 0.01. In green dashed lines the LTT. For u = 0.1 the LTT occurs at a gate voltage of = -0.05 and for u = 0.01 at = 0.05. (c) , and (d) σth versus characteristic length for u = 0.01 (black), u = 0.1 (red), and u = 0.2 (green).
Threshold characteristics
The threshold characteristics at room temperature are plotted in Figure 6 for a "small" drain voltage (vD = 0.1) and a "large" drain voltage (vD = 2.0). For the largest considered characteristic length l = 60 it is seen that below zero gate voltage the drain current is thermally activated for both considered drain voltages. A comparison with the results for l = 25 and l = 10 yields that for the small drain voltage the ID - VG trace is only weakly effected by the change in the barrier strength. In contrast, at the high drain voltage the drain current below vG = 0 grows strongly with decreasing barrier strength. The drain current does not reach the thermal activation regime any more, it falls of much smoother with increasing negative vG. As can be gathered from Figure 8 this effect is seen in experiments as well. We attribute it to the weakening of the tunneling barrier with increasing vD. To confirm this point the threshold characteristics for a still weaker barrier strength (l = 3) is considered. No thermal activation is found in this case even for the small drain voltage.
thumbnailFigure 6. Calculated threshold characteristics at u = 0.1 (a) for l = 60 and (b) l = 25, and (c) l = 3. The dashed straight lines in blue are guides to the eye exhibiting a slope corresponding to thermal activation.
We discuss our numerical results on the background of experimental characteristics for a 10 nm gate length transistor [4,5] reproduced in Figure 7. As demonstrated in Sect. "Parameters in experimental nano-FETs" one obtains from Equation 21 a characteristic length of λ ~ 1 nm under reasonable assumptions. For the experimental 10 nm gate length, we thus obtain l = L/λ = 10. Furthermore, Equation 20 yields the value of εF = 0.35 eV. The conversion of the experimental drain voltage V into the theoretical parameter vD is given by
thumbnailFigure 7. Drain characteristics in experiment and theory. (a) Experimental drain characteristics for a nano-transistor with L = 10 nm [4,5]. Our assumption for the LTTis marked with a green dashed line leading to a threshold gate voltage of = 0.15V. (b) Theoretical drain characteristics for l = 10 and u = 0.1 (see Fig. 5a) with the green dashed threshold characteristic at = -0.05.
The maximum experimental drain voltage of 0.75 V then sets the scale for vD ranging from zero to vD = 0.75 eV/0.35 eV ~ 2. For the conversion experimental gate voltage VG to the theoretical parameter vG we make linear ansatz as
where is the experimental threshold gate voltage (see Figure 8a). The constant β is chosen so that converts into . In our example, it is shown from Figure 8a = 0.15 V and from Figure 8b = -0.05, so that β = -0.2 eV. To match the experimental drain characteristic to the theoretical one we first convert the highest experimental value for VG into the corresponding theoretical one. Inserting in (23) VG = 0.75 V yields vG ~ 0.5. Second, we adjust the experimental and the theoretical drain current-scales so that in Figure 7 the curves for the experimental current at VG = 0.7 and the theoretical curve at vG = 0.5 agree. It then turns out that the other corresponding experimental and theoretical traces agree as well. This agreement carries over to the range of negative gate voltages with thermally activated transport. This can be gathered from the ID - VG traces in Figure 8. We note that the constant of proportionality in Equation 23 given by 1 eV is more then εF which one would expect from the theoretical definition vG = VG/εF. Here, we emphasize that the experimental value of e VG corresponds to the change of the potential at the transistor gate while the parameter vG describes the position of the bottom of the lowest two-dimensional subband in the electron channel. The linear ansatz in Equation 23 and especially the constant of proportionality 1 eV can thus only be justified in a self-consistent calculation of the subband levels as has been provided, e.g., by Stern[18].
thumbnailFigure 8. Threshold characteristics in experiment and theory. (a) Experimental threshold characteristics for the nano-transistor in Fig. 7a. (b) Theoretical threshold characteristics for l = 10 and u = 0.1 with the blue dashed lines corresponding to thermal activation.
The experimental and the theoretical drain characteristics in Figure 7 look structurally very similar. For a quantitative comparison we recall from Sect. "Parameters in experimental nano-FETs" the value of J0 = 5.4 × 104μA/μm. Then the maximum value j = 0.15 in Figure 7b corresponds to a theoretical current per width of 8 × 103μA/μm. To compare with the experimental current per width we assume that in the y-axis labels in Figures 7a and 8a it should read μA/μm instead of A/μm. The former unit is the usual one in the literature on comparable nanotransistors (see Refs. [2-9]) and with this correction the order of magnitude of the drain current per width agrees with that of the comparable transistors. It is found that the theoretical results are larger than the experimental ones by about a factor of ten. Such a failure has to be expected given the simplicity of our model. First, for an improvement it is necessary to proceed from potentials resulting in a self-consistent calculation. Second, our representation of the transistor by an effectively one-dimensional system probably underestimates the backscattering caused by the relatively abrupt transition between contacts and electron channel. Third, the drain current in a real transistor is reduced by impurity interaction, in particular, by inelastic scattering. As a final remark we note that in transistors with a gate length in the micrometer scale short-channel effects may occur which are structurally similar to the ones discussed in this article (see Sect. 8.4 of [10]). Therefore, a quantitatively more reliable quantum calculation would be desirable allowing to distinguish between the short-channel effects on micrometer scale and quantum short-channel effects.
After a detailed discussion of the physical quantities in our scale-invariant model we show that a LTT is present not only in the low temperature limit but also at room temperatures. In qualitative agreement with the experiments the ID - VG-traces exhibit below the threshold voltage thermally activated transport at small drain voltages. At large drain voltages the gate-voltage dependence of the traces is much weaker. It is found that the theoretical drain current is larger than the experimental one by a little less than a decade. Such a finding is expected for our simple model.
LTT: linear threshold trace.
Competing interests
The authors declare that they have no competing interests.
Authors' contributions
UW worked out the theroretical model, carried out numerical calculations and drafted the manuscript. MK carried out numerical calculations and drafted the manuscript. HR drafted the manuscript. All authors read and approved the final manuscript.
1. Auth C, Buehler H, Cappellani A, Choi H-h, Ding G, Han W, Joshi S, McIntyre B, Prince M, Ranade P, Sandford J, Thomas C: 45 nm High-k+Metal Gate Strain-Enhanced Transistors.
Intel Technol J 2008, 12:77-85. OpenURL
2. Yu B, Wang H, Joshi A, Xiang Q, Ibok E, Lin M-R: 15 nm Gate Length Planar CMOS Transistor.
IEDM Tech Dig 2001, 937. OpenURL
3. Doris B, Ieong M, Kanarsky T, Zhang Y, Roy RA, Dokumaci O, Ren Z, Jamin F-F, Shi L, Natzle W, Huang H-J, Mezzapelle J, Mocuta A, Womack S, Gribelyuk M, Jones EC, Miller RJ, Wong HSP, Haensch W: Extreme Scaling with Ultra-Thin Si Channel MOSFETs.
IEDM Tech Dig 2002, 267. OpenURL
4. Doyle B, Arghavani R, Barlage D, Datta S, Doczy M, Kavalieros J, Murthy A, Chau R: Transistor Elements for 30 nm Physical Gate Lengths.
Intel Technol J 2002, 6:42. OpenURL
5. Chau R, Doyle B, Doczy M, Datta S, Hareland S, Jin B, Kavalieros J, Metz M: Silicon Nano-Transistors and Breaking the 10 nm Physical Gate Length Barrier.
61st Device Research Conference 2003; Salt Lake City, Utah (invited talk) OpenURL
6. Tyagi S, Auth C, Bai P, Curello G, Deshpande H, Gannavaram S, Golonzka O, Heussner R, James R, Kenyon C, Lee S-H, Lindert N, Miu M, Nagisetty R, Natarajan S, Parker C, Sebastian J, Sell B, Sivakumar S, St Amur A, Tone K: An advanced low power, high performance, strained channel 65 nm technology.
IEDM Tech Dig 2005, 1070. OpenURL
7. Natarajan S, Armstrong M, Bost M, Brain R, Brazier M, Chang C-H, Chikarmane V, Childs M, Deshpande H, Dev K, Ding G, Ghani T, Golonzka O, Han W, He J, Heussner R, James R, Jin I, Kenyon C, Klopcic S, Lee S-H, Liu M, Lodha S, McFadden B, Murthy A, Neiberg L, Neirynck J, Packan P, Pae S, Parker C, Pelto C, Pipes L, Sebastian J, Seiple J, Sell B, Sivakumar S, Song B, Tone K, Troeger T, Weber C, Yang M, Yeoh A, Zhang K: A 32 nm Logic Technology Featuring 2nd-Generation High-k + Metal-Gate Transistors, Enhanced Channel Strain and 0.171 μm2 SRAM Cell Size in a 291 Mb Array.
IEDM Tech Dig 2008, 1. OpenURL
8. Fukutome H, Hosaka K, Kawamura K, Ohta H, Uchino Y, Akiyama S, Aoyama T: Sub-30-nm FUSI CMOS Transistors Fabricated by Simple Method Without Additional CMP Process.
IEEE Electron Dev Lett 2008, 29:765. OpenURL
9. Bedell SW, Majumdar A, Ott JA, Arnold J, Fogel K, Koester SJ, Sadana DK: Mobility Scaling in Short-Channel Length Strained Ge-on-Insulator P-MOSFETs.
IEEE Electron Dev Lett 2008, 29:811. OpenURL
10. Sze SM: Physics of Semiconductor Devices. New York: Wiley; 1981. OpenURL
11. Thompson S, Packan P, Bohr M: MOS Scaling: Transistor Challenges for the 21st Century.
Intel Technol J 1998, Q3:1. OpenURL
12. Brennan KF: Introduction to Semiconductor Devices. Cambridge: Cambridge University Press; 2005. OpenURL
13. Nemnes GA, Wulf U, Racec PN: Nano-transistors in the LandauerBüttiker formalism.
J Appl Phys 2004, 96:596. Publisher Full Text OpenURL
14. Nemnes GA, Wulf U, Racec PN: Nonlinear I-V characteristics of nanotransistors in the Landauer-Büttiker formalism.
J Appl Phys 2005, 98:84308. Publisher Full Text OpenURL
15. Wulf U, Richter H: Scaling in quantum transport in silicon nanotransistors.
Solid State Phenomena 2010, 156-158:517. OpenURL
16. Wulf U, Richter H: Scale-invariant drain current in nano-FETs.
J Nano Res 2010, 10:49. OpenURL
17. Ando T, Fowler AB, Stern F: Electronic properties of two-dimensional systems.
Rev Mod Phys 1982, 54:437. Publisher Full Text OpenURL
18. Stern F: Self-Consistent Results for n-Type Si Inversion Layers.
Phys Rev B 1972, 5:4891. Publisher Full Text OpenURL |
943b7f870cb502ae | Take the 2-minute tour ×
I'm still in high school, and while I can't complain about the quality of my teachers (all of them have done at least a bachelor, some a masters) I usually am cautious to believe what they say straight away. Since I'm interested quite a bit in physics, I know more about it than other subjects and I spot things I disagree with more often, and this is the most recent thing:
While discussing photons, my teacher made a couple of statements which might be true but sound foreign to me:
• He said that under certain conditions, photons have mass. I didn't think this was true at all. I think he said this to avoid confusion regarding $E=mc^2$, however, in my opinion it only adds to the confusion since objects with mass can't travel with the speed of light, and light does have a tendency to travel with the speed of light.. I myself understand how photons can have a momentum while having no mass because I lurk this site, but my classmates don't.
• He said photons don't actually exist, but are handy to envision. This dazzled my mind. Even more so since he followed this statement by explaining the photo-electric effect, which to me seems like a proof of the existence of photons as the quantum of light. He might have done this to avoid confusion regarding the wave-particle duality.
This all seems very odd to me and I hope some of you can clarify.
share|improve this question
This question may be useful physics.stackexchange.com/q/34067 even though is closed. – Jorge Feb 7 '13 at 19:28
Photons do have a mass inside a superconductor. Which is why, inside a superconductor, the electromagnetic force becomes short-range. Perhaps that's what your teacher meant. – Dmitry Brant Feb 7 '13 at 20:07
@DmitryBrant I know this is to some extent just a matter of semantics, but I personally feel it's somewhat misleading to call the effective mass of a photon inside of a superconductor its mass. – joshphysics Feb 7 '13 at 21:10
Also Ylyk, you might be interested to read the section in en.wikipedia.org/wiki/Photon#Experimental_checks_on_photon_mass that talks about experimental checks on photon mass. – joshphysics Feb 7 '13 at 21:12
I would definitely ignore anything that your teacher or classmates have to say about physics that you can not find in a good text book. Until you reach topics of quantum gravity and quantum information theory, these things are well understood within the physics community (although not by all its members). I would also ignore most media releases until you can sift the good from the bad. A good list of freely available books can be found athttp://physics.stackexchange.com/questions/6157/list-of-freely-available-physics-books – user11547 Feb 8 '13 at 11:45
5 Answers 5
up vote 8 down vote accepted
1. Photons are massless. This should not cause confusion with $E=mc^2$ because the expression for the relativistic energy of the photon is $E = h \nu$, there $\nu$ is the photon's frequency and $h$ is Planck's constant. You can also understand the relativistic energy of the photon by noting that $E = pc$ where $p=\hbar k$ is the magnitude of its momentum, and photons possess momentum, as you point out. $k$ is the photon's wavevector and $\hbar = h/2\pi$ is the reduced Plank's constant. To tie this all together, we have the formula $E=\sqrt{m^{2}c^{4}+p^{2}c^{2}}$.
2. Photons exist! As you point out, the existence of photons has physical consequences that can be measured. Perhaps, as you also mention, your teacher is trying to insist that you resist thinking of photons purely as particles (which is probably not such a bad idea), but the statement "photons don't exist," in my opinion, should be considered just as fallacious as "the keyboard on which I'm typing doesn't exist.
You're being prudent by taking everything your teacher says with a large grain of salt. In my experience in physics (heck in life as a whole), it's good to take everything that anyone ever says with a grain of salt (including my response for that matter).
share|improve this answer
It depends on the definition of mass, obviously. Photons have zero rest mass. But physicists working in relativity also use "mass" as a synonim of "energy". – Bzazz Feb 7 '13 at 22:48
@Bzazz Nuclear and particle physicists use the work "mass" to mean only the rest mass. Every time. While there is no difficulty in defining the "relativistic mass" the concept obscures and confuses rather than clarifying. – dmckee Feb 7 '13 at 22:57
Yes, that's why I specified physicists working on relativity. In all the courses of GR I had, you put c=1 and talk of mass and energy equivalently. Of course, one seldom talks about the mass of a photon, because, we are more familiar with speaking of photon energy. But if we want to look for "the mass of the photon" as in the question, this is what comes to my mind first. Remember that relativistic mass is effectively mass, in the sense that also experiences gravity. – Bzazz Feb 7 '13 at 23:05
@Bzazz I think this sort of statement is the type of corruption of physics that we should be concerned about. Although any object with energy can create curvature, Photons are bosonic and do not directly interact with the higgs boson, and are therefore massless. See some good answer at physics.stackexchange.com/questions/23161/… – user11547 Feb 8 '13 at 10:57
I definitely agree with @HalSwyers on this one. – joshphysics Feb 8 '13 at 16:22
He is doing a good job trying to communicate the weirdness of Photons, but a poor one at being consistant. It's difficult to communicate how strange they are, without using the relationships you described.
First of all photons have no mass. At rest. The concept of describing a photon at rest is a bit weird even as there is no such thing.
Once your talking about photons in motion it is tempting to say they have mass, but most physicists these days don't speak of it this way. Rather its best to say it has ENERGY. While people like to say things can have relativistic mass, this is incorrect, mass is a constant: the mass a thing has never changes no matter how fast you travel: rather the energy to accelerate it increases (inertia).
So a photon has no mass: but it does have momentum. This is the fascinating thing about light. And leads to the fact that momentum is not always related to mass as you mentioned.
Examine $E=mc^2$ in more detail and the correct equation actually is $E^2 = m^2 c^4 + p^2 c^2$. As photons have momentum related to its energy/frequency this is fine.
The best way to think of light is as photons. Light is a photon, that is the fact, BUT light waves are only a model. The appearance of wave phenomenon is because quantum mechanically light interacts probabilistically, and this nature allows it to display wave like properties.
The science of Quantum Electro-Dynamics examines this strange behavior.
For now think of light as a photon and a wave, it allows every day behavior to be modeled well and is perfectly fine.
But in reality light is a photon particle (this is why the photo-electric effect works) that when described using Schrodinger's Equations (a description of probabilites) can be transformed into a description of a wave of Electro-Magnetism.
share|improve this answer
There are two reasons why a photon can't be described by the Schrödinger equation: A single particle theory for a photon is non-sense and Schrödinger's equation is non-relativistic. – Jorge Feb 7 '13 at 20:08
For modeling interactions between photons to show that they display wave characteristics, Shrodinger's equation works as a good approximation to demonstrate his equations and Maxwells are equivalent. Nontheless like I mentioned QED manages it better: see Feynman's books for good explanations without equations. – Eric_ Feb 7 '13 at 20:41
Eric, Schrödinger's equation is explicitly built on the Newtonian relationship between kinetic energy, momentum and mass (the rest mass for those that insist). It really, really doesn't do to use it with photons. If you must do non-field-theoretical QM with light, use the Klein-Gordon equation. However, you've got the right relationship in your post: $E^2 = m^2c^4 + p^2c^2$ is the correct answer. Don't detract from it by using the wrong wave equation. – dmckee Feb 7 '13 at 22:45
In any case, welcome to Physics.SE. WE have the MathJax rendering engine active on the site which lets you write latex-alike math inside pairs of $'s (for inline) or $$ (for block typesetting). I've done this post for you. – dmckee Feb 7 '13 at 22:47
I have sat in a college class and watched step by step as a Schrodinger equation is transformed into a wave equation with a form identical to maxwells. The point is not that it explains light, but that probabilistic quantum mechanics explains how a wave nature can appear for ANY and ALL particle phenomenon. There is wave particle duality in everything, not just light. – Eric_ Feb 8 '13 at 0:00
Regarding your two points:
He said that under certain conditions, photons have mass.
Massless particles move at the speed of light $c$ in vacuum. By his statement your teacher may have been alluding to the fact that photons travel at speeds slower than $c$ when they travel through media, like glass for example. However, I would rather phrase this as something like "in the transmission of a photon through a medium, the photon's transit time through that medium is such that it travels as if it had a mass." The travel of photons through media is a rather complex affair, which I don't fully understand, involving interactions with charged particles in the medium and quasiparticle states, and I'm not sure to what extent the incident photon even retains its identity whilst in the medium (maybe that's another question).
There is no question that photons exist. Particle physicists deal with "hard" (i.e high energy) photons, which behave like particles - scatter off other charged particles etc. Good evidence that "soft" (low energy) photons exist comes from antibunching experiments.
share|improve this answer
1) I would refer your high school teacher to some of the good answers found in this physics stack exchange question. The answer with the highest votes is really good and is referring to the electroweak theory which governs the electroweak section of the Standard Model.
General Relativity respects special relativity, and therefore respects that the invariant mass of the photon is zero, since the photon is described by a null-like vector in its rest frame.
In particle physics, as discussed in the links above, the current understanding is that mass is a measure of the relative interaction strength of a particle with the Higgs field as mediated by the Higgs boson. The photon does not directly interact with the Higgs boson, and therefore has no mass.
2) As far as visualizing the photon, I would venture the easiest way is to think in terms of classical EM theory (which is a gauge theory btw) where we consider the orthogonal oscillating electric and magnetic fields as representing the photon.
share|improve this answer
$E = mc^2$ is a popular formula but only valid in a special case when "the total additive momentum is zero for the system under consideration" $^1$
The more general "energy-momentum relation" is: $E^2 = (mc^2)^2 + (pc)^2$
(Also, here's a neat little video for your classmates ;) -> http://www.youtube.com/watch?v=NnMIhxWRGNw )
You can read more about this and the correct four-vector notation under:
$^1$ http://en.wikipedia.org/wiki/Mass%E2%80%93energy_equivalence
Even today in the time of quantum optics, there are still quite a lot of papers, that try to grasp what a "photon" is, e.g.:
1. What is a photon - David Finkelstein
2. Light reconsidered - Arthur Zajonc
3. The concept of the photon - revisited - Ashok Muthukrishan, Marlan O. Scully, M.Suhail Zubairy
and many more all together in this nice review: http://www.sheffield.ac.uk/polopoly_fs/1.14183!/file/photon.pdf
Or in the words of Roy Glauber: "A photon is what a photodetector detects"
share|improve this answer
protected by Qmechanic Feb 8 '13 at 17:42
Would you like to answer one of these unanswered questions instead?
|
b4001d984afda062 | Skip to main content
Chemistry LibreTexts
• Page ID
• Lecture 1: The Rise of Quantum Mechanics
Classical mechanics is unable to explain certain phenomena observed in nature including the emission of blackbody radiators that is sensitive to the temperature of the radiator. This distribution follow's Planck distribution which can be used to derive two other experimental Laws (Wien's and Stefan-Boltzmann laws). A key finding is that the energy given off by a blackbody was not continuous, but given off at certain specific wavelengths, in regular increments.
• Lecture 2: The Rise of Quantum Mechanics
Classical mechanics is unable to explain certain phenomena observed in nature. The photoelectron effect has several experimental observations that break with classical predictions. Einstein proposed a solution that light is quantized given with each quantum of light is called a photon. And the energy is proportional to its frequency. This was an impressive argument in that it said light is not always a wave, but can be a particle. This duality also applies to matter.
• Lecture 3: The Rise of Quantum Mechanics
Hydrogen atom emission spectra consist of "lines" rather than a continuum expected of classical mechanics. These lines were separated into different classes. Rydberg showed that a simple single equation can predict the energies of these transitions by introducing two integers of unknown origin. While the photoelectron effect demonstrated that light can be wave-like and particle-like (e.g., "photon"), de Broglie demonstrated that matter also exhibits wave-like and particle-like behavior.
• Lecture 4: Bohr atom and Heisenberg Uncertainty
The Bohr atom was the first successful description of a quantum atom from basic principles (either as a particle or as a wave, both were discussed). From a particle perspective, stable orbits are predicted from the result of opposing forces (Coloumb's force vs. centripetal force). From a wave perspective, stable "standing waves" are predicted . The Bohr atom predicts quantized energies. Heisenberg's Uncertainly principle argues that trajectories do not exist in quantum mechanics.
• Lecture 5: Classical Wave Equations and Solutions
Schrödinger Equation is a wave equation that is used to describe quantum mechanical system and is akin to Newtonian mechanics in classical mechanics. The Schrödinger Equation is an eigenvalue/eigenvector problem. To use it we have to recognize that observables are associated with linear operators that "operate" on the wavefunction.
• Lecture 6: Schrödinger Equation
The Schrödinger Equation has solutions called wavefunctions. The time-dependent Schrödinger Equation results in time-dependent wavefunctions with both spatial aspect and a temporal aspects. The time-independent Schrödinger Equation results in time-independent wavefunctions with only a spatial aspect. Which one we use dependents if their is an explicit time-dependence in the Hamiltonian. It is important to recognize that wavefunctions ALWAYS have a temporal part (we typically ignore though).
• Lecture 7: Operators, Free Particles and the Quantum Superposition Principle
Wavefunctions have a probabilistic interpretation, more specifically, the wavefunction squared (or to be more exact, the Ψ∗Ψ is a probability density). To get a probability, we have to integrate Ψ∗Ψ over an interval. The probabilistic interpretation means Ψ∗Ψ must be finite, nonnegative and not infinite. and that the wavefunctions must be normalized. We the introduced the particle in the box, which is "easy" to solve the Schrödinger Equation to get oscillatory wavefunctions.
• Lecture 8: Topical Overview of PIB and Postulates QM
This lecture focused on gaining an intuition of wavefunctions with an emphasis on the particle in the box. Specifically, we considered the four principal properties of continuous distributions and applied it to the particle in the box. We want to develop an intuition behind how the energy and wavefunctions change in PIB when mass is increased, when box length is increased and when quantum number n is increased. We ended the discussion discussing that eigenstates of an operator are orthogonal.
• Lecture 9: More on PIB and Orthonormality
We continued the discussion of the PIB and the intuition we want from the model system. We revised the time-dependent solutions to the model system (which is always there). We emphasized not only that the total wavefunction must be oscillating in time (although we often ignore that in this class), it has both a real and imaginary component (we will revisit that again later on). We discussion symmetry of functions and integration over odd integrands and ended on the topic of orthonormality.
• Lecture 10: Expectation values, 2D-PIB and Heisenberg Uncertainty Principle
We extend the 1D particle in a box to the 2-D and 3D cases. From this we identified a few interesting phenomena including multiple quantum numbers and degeneracy where multiple wavefunctions share the identical energy. We were able to provide a quantitative backing in using the Heisenberg Uncertainty principle from wavefuctions in terms of the standard deviations and we ended the lecture on the five postulates of quantum mechanics.
• Lecture 11: Vibrations
Three aspects were addressed: (1) Introduction of the commutator which is meant to evaluate is two operators commute. Not every pair of operators will commute meaning the order of operations matter. (2) Redefine the Heisenberg Uncertainty Principle now within the context of commutators to identify if any two quantum measurements can be simultaneously evaluated. (3) We introduction of vibrations, including the harmonic oscillator potential were qualitatively shown (via Java application).
• Lecture 12: Vibrational Spectroscopy of Diatomic Molecules
We first introduce bra-key notation as a means to simplify the manipulation of integrals. We introduced a qualitative discussion of IR spectroscopy and then focused on "selection rules" for what vibrations are "IR-active." The two criteria we got discussed were (1) the vibration requires a changing dipole moment and (2) that \(\Delta v = \pm 1\) required for the transition (within harmonic oscillators). These selection rules can be derived from the concept of a transition moment and symmetry.
• Lecture 13: Harmonic Oscillators and Rotation of Diatomic Molecules
Symmetry (and direct product tables for odd/even functions) were discussed and showed Harmonic Oscillator wavefunctions alternated between even and odd due to Hermite polynomial component, which affects the transition moment integral so only transitions in the IR between adjacent wavefunctions will be allowed (i.e., no harmonics). This is an approximation and the Taylor expansion of an arbitrary potential shows that anharmonic terms must be used. We introduced the Morse oscillator & rotations.
• Lecture 14: Chalk talk review of Oscillators
Projector difficulties resulted in a chalk talk/discussion involving quantum harmonic oscillators, harmonic oscillators eigenstates, anhamonicity, Morse potential etc.for class instead of intended presentation.
• Lecture 15: 3D Rotations and Microwave Spectroscopy
We continue our discussion of the solutions to the 3D rigid rotor: The wavefunctions (the spherical harmonics), the energies (and degeneracies) and the TWO quantum numbers (\(J\) and \(m_J\)) and their ranges. We discussed that the components of the angular momentum operator are subject to the Heisenberg uncertainty principle and cannot be know to infinite precision simultaneously, however the magnitude of angular momentum and any component can be. This results in the vectoral representation.
• Lecture 16: Linear Momentum and Electronic Spectroscopy
The potential, Hamiltonian and Schrödinger equation for the Hydrogen atom is introduced. The solution of which involves radial and angular components. The latter is just the spherical harmonics derived from the rigid rotor systems. The radial component is a function of four terms: a normalization constant, associated Laguerre polynomial, a nodal function, and an exponential decay. We also discussed that the energy is a function of only one quantum number and that there is a degeneracy to address
• Lecture 17: Hydrogen-like Solutions
While there are three quantum numbers in the solutions to the corresponding Schrodinger equation, that the energy only is a function of n . We continued our discussion of the radial component of the wavefunctions as a product of four terms that crudely results in an exponentially decaying amplitude as a function of distance from the nucleus scaled by a pair of polynomials. We discussed the volume and shell element in spherical space and introduce the radial distribution function.
• Lecture 18: Orbital Angular Momentum, Spectroscopy and Multi-Electron Atoms
Angular moment of an electron is described by the \(l\) quantum number. The \(m_l\) quantum number designates the orientation of that angular moment wrt the z-axis. The degeneracy can be partial broken by an applied magnetic fields. There is not always do a one-to-one correspondence between quantum numbers and orbitals. Basic electronic spectroscopy was reviewed and specifically selection rules. The impossible to solve He system was discussed requiring approximations; a poor one was introduced.
• Lecture 19: Variational Method, Effective Charge, and Matrix Representation
Three aspects were addressed: (1) We continued discussing the complications of electron-electron repulsions and showed ignoring it is really pretty poor. (2) We can qualitatively address them by introducing an effective charge within a shielding and penetration perspective. (3) We motivated variational method by arguing the energy of a trial wavefunction will be lowest when it most likely resembles the true wavefunction (the same for the corresponding energies).
• Lecture 20: Variational Method Approximation and Linear Varational Method
The variational method approach requires postulating a trial wavefunction and calculating the energy of that function as a function of the parameters of that trail wavefunction. Then we can minimize the energy as a function of these parameters and the closer the wavefunction "looks" like the true wavefunction, the closer the trail energy matches the true energy. Several example trial wavefunctions for the He atom are discussed. We introduce the matrix representation of Quantum mechanics.
• Lecture 21: Linear Variational Theory
This lecture reviews the basic steps in variational method, the linear variational method and the linear variation method with functions that have parameters that can float (e.g., a linear combination of Gaussians with variable widths in ab initio chemistry calculations). The latter two will be more applicable in the discussions of molecules using atomic orbitals as the basis set (th LCAO approximation). The final approximation, perturbation theory is introduced, but not used in an example.
• Lecture 22: Perturbation Theory
The basic steps perturbation theory is discussed including its application to the energy and wavefunctions. A reminder of the orbital approximation was discussed (where an N-electron wavefunction can be described as N 1-electron orbitals that resemble the hydrogen atom wavefunctions). A consequence of the orbital approximation is the ability to construct electron configurations which are filled by the aufbau principle. However, the aufbau principle is only a guideline and not a hardfast rule.
• Lecture 23: Electron Spin, Indistinguishability and Slater Determinants
This lecture address two unique aspects of electrons: spin and indistinguishability and how they couple into describing multi-electron wavefunctions. The spin results in an angular momentum that follows the same properties of orbital angular moment including commutators and uncertainty effect. The Slater determinant wavefunction is introduced as a way to consistently address both properties.
• Lecture 24: Coupling Angular Momenta and Atomic Term Symbols
Last lecture address how the different orbital angular momenta of multi-electron atoms couple to break degeneracies predicted from the "Ignorance is Bliss" approximation (i.e., the hydrogen atom). Total angular momenta are introduced along with multiplicity. Atomic term symbols are discussed along with all three of Hund's rules to identify the most stable combination of angular momenta for a specific electron configuration.
• Lecture 25: Molecules and Molecular Orbital Theory
The application of term symbols to describe atomic spectroscopy is demonstrated. The corresponding selection rules are discussed. The Born-Approximation is introduced to help solve the N-bodies Schrödinger equation of molecules. This introduces the concept of a potential energy curve (surface). The LCAO is introduced as a mechanism to solve for Molecular Orbitals (MOs).
• Lecture 26: Populating Molecular Orbitals: σ and π Orbitals
• Lecture 27: Molecular Orbitals and Diatomics
Bond order, bond length and bond energy is emphasized for H2 species. Simple MO theory does not predicted He dimers. Bond order too
• Lecture Extra: Hartree vs. Hartree-Fock, SCF, and Koopman's Theorem
The consequence of indistinguishability in electronic structure calculations. The Hartree and Hartree-Fock (HF)caclulations were introduced within the Self-Consistent-Field (SCF) approach (similar to numerical evaluation of minima). The Hartree method treats electrons via only as an average repulsion energy and the HF approach using Slater determinant wavefunctions introduces an exchange energy term. Ionization energy and electron affinities are discussed within the context of Koopman's theorem.
• Lecture Extra II: Molecular Orbitals with higher Energy Atomic Orbitals
The MOs of first row diatomics is discussed including both π and σ MOs. The MO diagram is presented. Bond order, bond length, and bond energies are emphasized. The flip over of pi/sigma MO is demonstrated and the paramagnetism of oxygen is a natural conclusion of MO theory.
Thumbnail: Michael Faraday delivering a Christmas lecture at the Royal Institution. ca. 1856. Image used with permission (Public Domain). |
b1d7b7b038656c53 | lördag 16 april 2011
One Mind vs Many Minds in Physics
Elimination of the One Mind of Louis XVI witnessed by Many Minds: Birth of modernity.
In Dr Faustus of Modern Physics I describe how modern physics was born in the early 20th century from a deal with the Devil replacing the fundamental principles of classical physics of
• objective reality of space and time
• cause-effect: determinism: causality
• logical consistency
by the new fundamental principles of modern physics of
• relativity: subjective reality of space and time under universal invariance
• statistics: atomistic games of roulette
• duality: both wave and particle at the same time.
The deal was motivated by the following problems which appeared impossible to solve using classical continuum physics and asked for solution to maintain scientific credibility:
• 2nd law of thermodynamics (irreversibility in formally reversible systems)
• observer independent speed of light (Michelson-Morley experiment)
• blackbody radiation (ultraviolet catastrophy)
• photoelectric effect (inexplicable frequency dependence).
In Dr Faustus of Modern Physics I open a door to different resolutions of these pressing problems with less severe side effects than the relativity-statistics-duality of modern physics. I describe this new approach as
• many-minds physics: many actors/observers: many gods: no master,
as opposed to
• one-mind physics: one universal actor/observer: one God: one master,
which is the current paradigm of modern physics as a combination of
• Einstein's relativity theory based on universal invariance
• quantum mechanics based on Schrödinger's multidimensional wave equation.
I present many-minds physics in the books
The basic difference between many-minds and one-mind physics can be understood as the difference between bottom-up and top-down control of a system, in political terms as the difference between democracy and autocracy/dictatorship, or between market economy and
socialistic economy.
In many-minds relativity each observer is tied to his coordinate system and the pertinent question concerns what agreement of observations by different observers is possible, without asking for universal agreement.
In many-minds quantum mechanics each electron/particle solves its own version of the Schrödinger equation and the multidimensional wave function asking for universal agreement does not appear.
In Computational Thermodynamics and Mathematical Physics of Blackbody Radiation I show that finite precision computation can replace atomistic games of roulette as explanation of irreversibility in formally reversible systems and the 2nd law with its direction of time.
Altogether I propose different resolutions of the problems which once troubled physics, resolutions which do not require basic principles of rationality and enlightenment to be abandoned. In many-minds physics, each actor/observer uses an individual classical perspective without any need of universality, like individual actors in a market economy.
PS Any similarity in the above picture with KTH-gate, is purely coincidental.
3 kommentarer:
1. No comments so far... Does this mean that nobody understands or has anyone been censored?
2. Well, maybe Anonyms will be censored in an open society just like hiding the face under a burka may be forbidden, in an open democratic society, or?
3. excellent submit, very informative. I ponder why the other experts of this
Also visit my blog ... plan |
a7f18625f6243542 | Non-Cartesian reference frames
Fig.: Latitude and longitude of Baghdad.
The Cartesian reference frame is particularly convenient as its three coordinates are equivalent: A translation by 5cm along x is the same as a translation by 5cm along y apart from the direction. However, any triple of linearly independent coordinates are equally well suited to describe the position of an object in space uniquely. Instead of lengths along the coordinate axes, angles can be used as coordinates.
The latitude-longitude system used to fix positions on the surface of the Earth is an example of this: Latitude is the angle between the equator plane and the rotation axis of the planet; longitude is the angle between the Greenwich meridian and the projection of the point into the equator plane. Because the distance from the Earth's centre is irrelevant (and approximately constant for purposes of surface navigation), two coordinates suffice. If we were to use a Cartesian frame, x y and z would all be different for each point on the surface. Therefore, the latitude-longitude system makes use of the symmetry of the problem.
Generally, it is a good idea to use a spherical reference frame (one length, two angles) for objects with spherical symmetry (such as atoms) and an axial frame (two lengths, one angle) for objects with axial symmetry (such as chemical bonds).
Spherical coordinates
Fig.: Transformations between the Cartesian and spherical frames.
The spherical reference frame usually used in physics differs slightly from the latitude-longitude frame: There are no longitudes west of Greenwich - instead the corresponding angular coordinate runs clockwise up to 360o. Note that the polar angle is defined as the angle with the z axis, not with the xy plane (as in the geographical latitude). The spherical coordinates are:
Coordinate transformation
To transform from spherical coordinates into Cartesian: x=r\cos\phi\sin\theta;y=r\sin\phi\sin\theta;z=r\cos\theta
...and back: r=\sqrt{x^2+y^2+z^2};\phi=\arctan{\frac{y}{x}};\theta=\arctan{\frac{\sqrt{x^2+y^2}}{z}}
The del operator in spherical coordinates
Since the three coordinates are not equivalent now, the del operator takes on a slightly more complicated form in spherical coordinates. To solve the Schrödinger equation, we need to apply its square to the trial wave function.
Del-squared applied to a function, f: \nabla^2f=\frac{\partial^2f}{\partial x^2}+\frac{\partial^2f}{\partial y^2}+\frac{\partial^2f}{\partial z^2}
The first derivative w.r.t. x expressed in spherical coordinates: \frac{\partial f}{\partial x}=\frac{\partial f}{\partial r}\frac{\partial r}{\partial x}+\frac{\partial f}{\partial\theta}\frac{\partial\theta}{\partial x}+\frac{\partial f}{\partial\phi}\frac{\partial\phi}{\partial x}
The first derivatives with respect to y and z are determined analogously, and then the second derivatives can be obtained. Try it yourself - it isn't difficult although maybe a bit tedious. It is quite gratifying, though, when you derive yourself where the factors in front of and between the differentials arise in the final formula. Here is what you should get:
\nabla^2f=\frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2\frac{\partial f}{\partial r}\right)+\frac{1}{r^2\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial f}{\partial\theta}\right)+\frac{1}{r^2\sin^2\theta}\frac{\partial^2f}{\partial\phi^2}}
Note that, unlike in the Cartesian frame, the three terms are not independent from each other: The polar term contains the radius as a parameter, and the azimuth term contains both the radius and the polar angle as parameters.
Armed with the del operator in spherical coordinates, we can now set up and solve the Schrödinger equation for the hydrogen atom. |
6edbcab5c4e6a5ad | Functional Analysis and Its Applications
, Volume 17, Issue 3, pp 193–200 | Cite as
Asymptotic expansion of the spectral function for second-order elliptic operators in Rn
• G. S. Popov
• M. A. Shubin
Functional Analysis Asymptotic Expansion Spectral Function Elliptic Operator
Unable to display preview. Download preview PDF.
Unable to display preview. Download preview PDF.
Literature Cited
1. 1.
V. S. Buslaev, "On the asymptotic behavior of spectral characteristics of exterior problems for the Schrödinger operator," Izv. Akad. Nauk SSSR, Ser. Mat.,39, No. 1, 148–235 (1975).Google Scholar
2. 2.
A. A. Arsen'ev, "Asymptotic behavior of the spectral function of the Schrödinger equation," Zh. Vychisl. Mat. Mat. Fiz.,7, No. 6, 507–518 (1967).Google Scholar
3. 3.
B. R. Vainberg, "On the short-wave asymptotic of solutions to stationary problems and the asymptotic for t → ∞ of solutions to nonstationary problems," Usp. Mat. Nauk,30, No. 2, 1–55 (1975).Google Scholar
4. 4.
V. S. Buslaev, "Scattered plane waves, spectral asymptotics, and trace formulas for exterior problems," Dokl. Akad. Nauk SSSR,197, No. 5, 999–1002 (1971).Google Scholar
5. 5.
A. Majda and J. Ralston, "An analogue of Weyl's formula for unbounded domains. I, II, III," Duke Math. J.,45, No. 1, 183–196 (1978);45, No. 3, 513–536;46, No. 4, 725–731 (1979).Google Scholar
6. 6.
V. Petkov and G. Popov, "Asymptotic behavıor of the scattering phase for nontrapping obstacles," Ann. Inst. Fourier,32, No. 3, 111–150 (1982).Google Scholar
7. 7.
V. Ya. Ivrii and M. A. Shubin, "On the asymptotic behavior spectral shift function," Dokl. Akad. Nauk SSSR,263, No. 2, 283–284 (1982).Google Scholar
8. 8.
L. Hörmander, "The spectral function of an elliptic operator," Acta Math.,121, Nos. 3–4, 193–218 (1968).Google Scholar
9. 9.
J. Rauch, "Asymptotic behaviour of solutions to hyperbolic differential equations with zero speeds," Commun. Pure Appl. Math.,31, No. 4, 431–480 (1978).Google Scholar
10. 10.
J. Duistermaat and L. Hörmander, "Fourier integral operators. II," Acta Math.,128, Nos. 3–4, 183–269 (1972).Google Scholar
11. 11.
S. Kuroda, "Scattering theory for differential operators. I. Operator theory. II. Self-adjoint elliptic operators," J. Math. Soc. Jpn.,25, No. 1, 75–104; No. 4, 222–234 (1973).Google Scholar
12. 12.
M. Reed and B. Simon, Methods of Modern Mathematical Physics, Vol. 1, Functional Analysis, Academic Press, New York (1972).Google Scholar
13. 13.
H. McKean and I. Singer, "Curvature and the eigenvalues of the Laplacian," J. Diff. Geom.,1, No. 4, 43–69 (1967).Google Scholar
14. 14.
J. Duistermaat and V. Guillemin, "The spectrum of positive elliptic operators and periodic bicharacteristics," Invent. Math.,29, No. 1, 39–79 (1975).Google Scholar
15. 15.
V. Guillemin and S. Sternberg, Geometric Asymptotics, Math. Surveys, No. 14, Amer. Math. Soc., Providence, Rhode Island (1977).Google Scholar
16. 16.
17. 17.
V. M. Babich, "On the short-wave asymptotic of the solution to the point-source problem in a nonhomogeneous medium," Zh. Vychisl. Mat. Mat. Fiz.,5, No. 5, 949–951 (1965).Google Scholar
18. 18.
V. V. Kucherenko. "Quasiclassical asymptotics of the point-source function for the stationary Schrödinger equation," Theor. Mat. Fiz.,1, No. 3, 384–406 (1969).Google Scholar
19. 19.
V. V. Kucherenko, "Some properties of the short-wave asymptotic of the fundamental solution of the equation [Δ + k2n(x)]u = 0," in: Trudy MIÉM, Asymptotic Methods and Difference Schemes, Vol. 25, Moscow (1972).Google Scholar
20. 20.
V. M. Babich and Yu. O. Rapoport, "Short-time asymptotic of the fundamental solution to the Cauchy problem for a second-order parabolic equation," in: Problems of Mathematical Physics, No. 7, Leningrad State Univ. (1974), pp. 21–38.Google Scholar
21. 21.
V. M. Babich, "Hadamard's method and the asymptotic of the spectral function of a secondorder differential operator," Mat. Zametki,28, No. 5, 689–694 (1980).Google Scholar
22. 22.
G. S. Popov and M. A. Shubin, "Complete asymptotic expansion of the spectral function for second-order elliptic operators in Rn," Usp. Mat. Nauk,38, No. 1, 187–188 (1983).Google Scholar
Copyright information
© Plenum Publishing Corporation 1984
Authors and Affiliations
• G. S. Popov
• M. A. Shubin
There are no affiliations available
Personalised recommendations |
c65df020773b0109 | PhD Paul Tiwald
Local Electronic Excitations in Extended Systems: A Quantum-Chemistry Approach
Real solids and surfaces are not "perfect". Crystals inevitably contain various defects and surfaces are subject to interactions with ambient particles leading, for example, to oxidation and adsorption. Since such effects are present in everyday devices and applications a deep understanding of the underlying physics is of great importance. In this thesis we study the properties of two very localized imperfections: the F-type color center in alkali-halide crystals and the charge transfer during scattering of an ion from an insulator surface. Both effects have been studied for a long time but a detailed theoretical understanding on the ab-initio level seems to be missing.
This thesis provides state-of-theart ab-initio calculations and addresses open questions. In particular, we present an ab-initio study of the physics underlying the so-called Mollwo-Ivey relation. This relation connects the F-center absorption energies with the crystal lattice constants and has not been fully understood so far. Second, we present the first ab-initio results on the charge-transfer probability during scattering of a proton from a lithium-fluoride surface. This study is based on a non-adiabatic molecular dynamics approach that provides microscopic insight into the charge-transfer process. Both the light absorption by the color center and the charge transfer represent local electronic excitations: the F-type color center consists of an electron strongly localized at an anionic vacancy and the electron transferred is strongly localized in close vicinity of the proton. This localization allows for application of the so-called embedded cluster approach in which the extended system is approximated by an embedded finite-sized active cluster. To study the properties of the active clusters we apply high-level quantum chemistry methods solving the electronic Schrödinger equation. |
0033ceeb5f13e93a | Why skin a cat at all?
Schrödinger’s cat walks into a bar. And doesn’t.
There may be more than one way to skin a cat; but why are we skinning cats at all? The case of Schrödinger’s cat has been the source of wonder, intelligent discussion and excellent t-shirts for some time, however there is a slight risk with the popularity the underlying mechanics of the thought is missed. To anyone familiar with the situation, or indeed quantum mechanics, it won’t come as any surprise that there are different interpretations of the paradoxical situation proposed by Schrödinger, many of which are worth understanding before forming a view on the subatomic kingdom. Before we look at how to skin the cat, we first establish why we are being so cruel in the first place.
The ideas of quantum mechanics are disturbing, leading some of the world’s greatest minds to be extremely spooked. The big problem? It’s an abstract theorem constructed in the realm of mathematics, where classical descriptions of the world have to be left aside. All you have ever experienced is a classical description of the world and indeed all humans before you have ever experienced is a classical description, so it shouldn’t be too surprising that this is a little uncomfortable. Yet despite this complexity, humans have managed to harness the power of the quantum realm to the point where it is estimated a third of the US gross national product is the direct result of quantum mechanics (Tegmark & Wheeler).
This post, along with some future posts are intended to serve as a preamble to a rather more developed post; so if you don’t find the subject overly fascinating in its own right (which you should!) I hope the ideas will weave together later into an acceptable marriage.
All this feline chatter
Schrödinger is most famous for his derivation of the second order partial differential equation known as the wave function, denoted Ψ, which rightfully bagged him a Nobel prize in 1933. The wave function is a complex valued probability amplitude; which in essence is a mathematical expression of all the things something may be with probabilities assigned to them. In the quantum realm for example, we may be talking about an electron around a nucleus; the function is computed using all the possible degrees of freedom the electron have (states it can be in). You should never be concerned if you don’t know how to interpret the wave function at first – because nobody, including Schrödinger himself did when they first looked at it. Wave functions can be produced by including all points of position or momentum space (for example), and allow for the inclusion of discrete degrees of freedom (for example spin of +1/2 and -1/2). The wave function is very complex, but to understand Schrödinger’s cat all you need to appreciate is that the wave function is describing all the states a particle may occupy and the probability associated with being in each state; so if you like anything that can happen is included.
Schrödinger’s cat can be considered a paradox; what he is doing is demonstrating the issues that arise when one links the quantum realm and the macroscopic world that we are so familiar with. To set up the experiment imagine you have a sealed box in which you can extract no information (light is unable to penetrate, sound cannot escape). Within the box you have a radioactive substance, which decays randomly (i.e. puny humans have no predictive power) and a Geiger counter which measures radiation. Next to this you have a cat, a vial of poison and a hammer. If the Geiger counter should sound then the hammer goes, the vial breaks and the cat is dead. Intrinsically the fate of the cat is now the result of the quantum mechanical rules which govern the random decay.
Is the cat dead or is the cat alive? The short answer: you have no idea. To assume the cat is dead or alive is to assume you know something about the conditions within the box to make that prediction; by the construction of the experiment you do not. But if you cannot say that the cat is dead or alive what is it? Being dead or alive is surely a binary construct? We must consider the cat in superposition – which in normal terms is a suspended state of being dead or alive. All we know is that there is a probability of the cat being dead, and alive; it is simultaneously “stuck” in these states until we open the box and we determine the cats luck by conventional means. Sound like nonsense? That’s the point – a cat cannot genuinley be both dead and alive… can it?
The construction is designed to illustrate the weird world of quantum mechanics; you know it is often said that quantum mechanics is like playing dice. Well unfortunately the world is so damn strange that even that is an oversimplification; since when one plays dice the dice obey nice normal classical mechanics, we just don’t have the required knowledge to form predictive assumptions. In the world of quantum mechanics the classical world, we think, melts away. So the paradox provides an interesting way of illustrating the situation where we cannot know the state of an object without observing it and the seemingly paradoxical situations that can arise when we tie the information we have learned about the quantum realm to our more familiar macroscopic surroundings. Now you see why we must skin the cat; we need some answers.
The Copenhagen Interpretation
This is the leading interpretation of the thought experiment (and quantum mechanics in general) and is likely to be what you have been taught (potentially even represented as absolute truth) if you have studied the subject or done some reading. The interpretation was developed by some big names; Niels Bohr and Werner Heisenberg, but interestingly they didn’t fully agree on any approach and never actually laid down a formal description of their interpretation. As such the approach has developed over time with the help of many minds and you many not always see identical descriptions when comparing one text to another. The key point is that a physical system has no definite properties before being measured. The rest flows from here.
So the wave function represents a system with everything that can be known before any observation takes place; the set of all possibilities with all probabilities. The system itself may well contain incompatibility – for example the well known uncertainty principle that asserts one may not know about the position and the momentum of an object to within a certain threshold of accuracy. This function exists in this state, quite happily, until terrible humans come along and measure the thing. At the point of measuring the system, what we are actually doing is collapsing the wave function (into an eigenstate) for the observer. That is when we observe the system we collapse all of the possible outcomes and probabilities into one known outcome of which we can interpret as being classical. So we haven’t actually measured in the quantum realm at all, we have collapsed the “normal” behaviour with our observation and made it fit with our window on the universe. Proponents of this theory point to the fact that humans are classical and not quantum beings and we can only observe quantum systems by reducing them to mere classical situations – the inner workings or a quantum mechanical system are not observable to a human and never will be.
So in the case of our dear cat, what actually is happening is before the event the cat is in its state of being dead and alive – with a probability of 50:50 dropping out for each scenario from our wave function. When we actually observe the cat we collapse this situation into one fixed definite situation – we know the cat is dead or alive. Critics of this theory are often say that it isn’t actually an explanation at all – but rather a work around to the problem by simply stating you cannot observe it and if you do everything that existed before collapses. Unfortunately the Copenhagen Interpretation may well be true; so if the cat is dead I am afraid it’s your fault for looking.
The many worlds interpretation
If you are interested in a full post on this, you should check out Mekhi’s excellent post here.
This interpretation is probably one of my favourites; it’s just so fanciful. What it lacks in rigorous detail it certainly makes up for in imagination. The bedrock of this theory is that anything that can happen…. sort of will. In the quantum mechanical situation particles exist under the rules of probability; in the many worlds theorem everything exists. In parallel. It’s quite a mind-bender; the theory has both the mathematical construction of the Schrödinger equation; and a much less well defined correspondence between the quantum realm and our experiences.
To tackle this theory, understand the difference between a world and the Universe. Simply there is one Universe that contains all the worlds. Drilling down a little further, when we define any world, it becomes unique in the past and many in the future. So interestingly, this theory takes the idea that there is one I; I am unique however there are many Joe’s. Excuse me? I am all of the future Joe’s. All of them; and yet I am only one I, the unique defined present version of myself. So what we have is a situation that looks like this:
The wave function of the Universe is taken to be the wave functions of all the different worlds; which are not alternatives but rather all actual realities happening until they are defined. I’m sorry, but if you’re not in neurological heaven right now you are on the wrong page.
Aside from the fact that parallel universes are delightful, the Many Worlds Theory has some other big plus points. Firstly – you don’t need to collapse the wave function like you do in the Copenhagen Interpretation. The idea that you have a system that is totally independent from initial conditions and only influenced by probability is uncomfortable since it is so at odds with out experimental evidence (which under the Copenhagen Interpretation is the measurement problem). But secondly, the Many Worlds Theory resolves many (if not all) of the paradoxes of Quantum Mechanics – one of these being Schrödinger’s cat. In fact is was said Schrödinger himself was waring up to this theory.
Under the Many Worlds Theory, Schrödinger’s cat is in two parallel universes where the cat both lives and dies. It is only when the cat is observed that the situation is fixed in the present; with the many worlds of the future living on. Here is a illustration on the scenario.
This theory is gaining traction as a description of the Universe – but it has work to do. It violates some very old laws around nature that we either need to overcome or rewrite before we are converted (see Ockham’s razor if you are interested).
Information interpretation
Does reality contain information? Is information reality? I am rolling with the latter; it would appear that truth is nothing more than a set of propositions – information. It is actually quite a fun game to play, to pick anything then break it down into nothing more than a set of axioms. No matter how complex the system is (like for example a human) you can keep breaking it down into simpler and simpler systems until you get to an elementary system. This is just like a binary system; it is or it is not, which can be represented by a 1 or a 0. When we get to this point we have the smallest unit of information possible (sometimes called a quibit in quantum mechanics). The key idea here therefore is that an elementary system is just a bit of information.
But what about randomness? Well it is a little different; all you looking at is objective randomness which is the result of a lack of information. We are closing in on the cat. The elegance in this theory is that it answers the question of why the world is quantised at all. It is disturbing if you really muse on it. Well the answer is simply the quantisation of information; if reality is merely information, with an elementary system being binary then of course on the smallest of scales the quantised picture will arise. Delightful. The theory also goes on to address quantum entanglement, pointing to the fact that the phenomena arises where the elementary bit is information for more than one of the systems. All tangled up.
Now in the case of our dear cat, the issue is such that there is no information for asking if the cat is dead or alive; and therefore we don’t have the answer. When we look at the cat, information is created (in an objectively random way); this new information arises all the time in information theory and is either subsequently destroyed, or in our case stabilised which then becomes a measurement. In essence the problem with viewing the quantum world is that viewing and taking measurements extracts information from the system – and with information being the currency of reality, we leave ourselves with less in the bank.
Did the cat live?
How should I know. Schrödinger’s cat in all three interpretations is tied to an unknown fate until we observe it – but the situation before and after is viewed differently. Do not forget; this is about creating a thought experiment to highlight quantum mechanics. So whilst, for example, the information theory might seem like a pretentious way of saying I don’t have information before I look, (go figure), its beauty and elegance as a description of the universe on the smallest of scales should not be overlooked. I cannot resist the lure of the quantum realm; it quite literally makes up everything, the building blocks of everything and anything. There are many more interpretations you may wish to explore if you are interested; the three I highlight are the most popular.
65 responses to “Why skin a cat at all?
1. So, you’re saying if we skin the cat then it is absolutely dead?
This is my problem with quantum theory stuff as I know it. And I’ll admit, I have none of the background or the understanding of the equations that support it. It seems to me to be a purely idealistic exercise of pure science.
From what I understand, quantum mechanics seems to just be a way to calculate what will happen based on what we know in a statistical manner. In other words, it’s purely conjecture. The problem appears to be that we just don’t have the ability to measure cause and effect at the subatomic level to predict outcome at this time, so we calculate odds.
It’s never been explained to me in a way I understand how we’ve determined that this is an actual reality and not just statistical guess work with practical applications.
If we bury a box with a cat in it for five hundred years Schrodinger style and come back to look at it, the cat is dead. If we do it five thousand times, the cat is dead. I understand scientifically we can’t claim to know that without investigating. That’s science in its strictest, purest sense. But when we start to conjecture about multiple universes and alternate realities, we’ve left science behind.
Again, this is just a layman’s understanding. I am genuinely curious to understand this phenomenon. Not at all trying to rip what you wrote. It fascinates me.
Liked by 2 people
• Hello and firstly thank you for visiting and reading! You make some interesting points; interesting and correct. With the burial of the box; I would say yes you are correct but the difference in that situation is you know that it is statistically nearly impossible for a cat to live for that long – both the number of years and the lack of nourishment. So in a sense – you do have the information you need.
What I would say around quantum mechanics is there are certain elements of it that we absolutely know work; beyond any reasonable doubt. Codes, microscopes and lasers alike all work and rely on quantum mechanics.
The real forefront of quantum mechanics – the bit that causes most confusion is a work in progress. But using the Copenhagen Interpretation, you can kind of think about it as from the sense of a human it will always be random because by interfering with the system I am fixing it. It is spooky to say that something is only the way it is when no body is looking; but when you get down onto the smallest of scales, like we are on, to extract information you need to send in a probe – and the smallest probes we have, even a photon, causes havoc. So you can at least appreciate why it is very difficult to extract information from a QM system?
In a way – it actually is statistical guess work with practical applications!
Liked by 2 people
• Thank you, sir. Just trying to wrap my head around this thing to answer the question “What is it?” I’ve watched documentaries, etc, but don’t really seem to get anywhere other than it’s mind blowing and a really interesting concept.
Liked by 1 person
• Me and you both! I would recommend trying to follow some lecture series – Leonard Susskind’s theoretical minimum is a great resource but there is lots out there. But don’t ever let go of the fact that there are multiple interpretations as highlighted in my post. Some things are “truths” but many things are still uncertain
2. all you need to grasp is that the wave function describes all a particle may be the associated probability;
Sorry, I can understand the syntax of this sentence and so i can’t grasp what that “all you need to grasp is” — should that be ‘all a particle may be with? the associated probability?
Liked by 1 person
• Sorry, probably a bad sentence to get sloppy with my syntax right!
I have amended to:
Do let me know this makes things clearer?
Liked by 2 people
3. The phrase “to skin a cat” is American slang, and it does not refer to felines; it refers to catfish. Besides that, the post is interesting.
And reminds me of a joke.
Schrödinger, Heisenberg, and Ohm were going to a party. Heisenberg was driving Schrödinger’s car above the speed limit.
A policeman stops the car and says to Heisenberg: “Hey buddy do you know how hast you were going?” Heisenberg replied “No, but I can tell you exactly where I am.”
“Well, you were doing 75 kph in a 30 kph zone.”
“Great. Now I am lost.”
The policeman thinks this is suspicious and says: “I need to see what’s in the boot.” Heisenberg gives the policeman the keys, and the policeman proceeds to open the boot, looks inside, comes back to the passengers and says: “Do you know you have a dead cat back there?”
Schrödinger replies: “Thanks to you, now I do, you jerk.”
The policeman tells them to get out of the car because they are all too suspicious and they have to go to the police station.
Of course, Ohm resisted.
Liked by 2 people
• I did not know that Keith! That is very interesting indeed – I know little of catfish (and I am not sure I want to know much of them!).
I love the joke! Hadn’t heard that before either
Liked by 1 person
4. This is great … I guess it’s different levels of ‘perception’ or ‘seeing’ is it? …and the human mind …brilliant tho it is at organising and rationalising the world around us so that we can actually live in it …AND in the scientific world come up with tools and equations to take our understanding and development to the next level …there is a point where it is all turned on its head …and we haven’t yet got the tools and equations to explain it …on a quantum level it’s taking things down to such a minute scale that it is merely the tiniest of micro dots of ‘energy’ for want of a better word …which is in fact the most rudimentary yet complex ….and is this then the elusive ‘ God particle’ folks go on about? …Hmmmm and although the cat is dead to us as the human observer …it’s ‘energy’ …or ‘God particle’ or ‘Particles’ has just drifted off elsewhere …maybe to a parallel universe …maybe to a very different one …or maybe absorbed into the one we already know …not necassarily as a cat but possibly ANYTHING …a kind of recycling or reincarnation …and am probably not making ANY sense so had better pop off to bed ….but this stuff is facinating none the less
Liked by 1 person
• Why thank you very much I am glad you enjoyed. You are making sense – and actually whilst I don’t necessarily believe in reincarnation one of my late night thoughts is around how weird and strange it is that the very atoms that make you up may have one made up another person a very long time ago! Quantum mechanics is really a wonderful journey of discovery. If quantum mechanics and general relativity don’t change the way you look at the world then something is very wrong
Liked by 2 people
5. Essentially this, from a psychological point of view, is a problem of the nature of reality. Ever since I was a kid of about 12 and read about Einstein’s understanding about the nature of time as a dimension I became puzzled over what it meant to be alive. The space dimensions that we can see do not pop into reality only when you open the front door and step outside to see that the sky exists and there is a street outside that was there yesterday and can be observed again when you look at it. So, if Einstein’s understanding of time is real then the future and the past are really there and were there yesterday and will be there tomorrow and the sense that things change is an illusion of traveling from the past to the future. It’s all there even if you don’t see it. And because it’s there you have a chance to predict it and be somehow sure that tomorrow morning the street will not be full of tigers or be a huge chasm so that merely stepping outside with your eyes closed will be a death sentence. In other words, reality is like a book where the story has a beginning, a middle and an end and just because you’re on page 5 does not mean that page 7 is not there until you get past page 6. But then the basic question is what is this thing I call myself that is turning the pages and why can’t I get tired of this book and pick up another (or is there another?) When I requested answers about this feeling of moving from the past to the future of scientific sources the only answer I got was that it is an illusion. Frankly, that explains nothing that I can make sense of. The idea that there are moments of creation where, if I change my mind, I create entire universes does not fit well in my experiences. The idea that there are a variety of pathways through time and which direction I move into is somehow responsive to my personal choice is a bit more digestible but creates a good deal of mystery over whatever I am that can decide the pathways and move into a pathway others I know do not move the same way. Knowledge is, after all, a mental quality and the rest of the universe seems rather rigidly restricted to cause and effect and I do not see why I should be free of that since I am totally embedded in this universe. Doubtlessly there are seeming random effects such as atomic decay but is this truly random or is it merely that we do not know how cause and effect functions at atomic levels? We are aware that there is a statistic half-life duration of atomic decay but is that perennially indeterminate? The whole can of worms lays in understanding the nature of time and how we relate to it. I highly respect mathematic analysis but math is merely a language of precise description and is quite capable of constructing fantasies as with any language so I am still very puzzled.
Liked by 1 person
• Hello and thank you for your very interesting topic! I think you are touching on both the issue of determinism and free will along with the nature of time as a fourth dimension. Time is obviously special as our fourth dimension as it is currently the only one that we cannot traverse in any direction we choose; although that isn’t to say it isn’t a dimension at all – it is just to say that we can only perceive the forward motion. I am building up to a post on what it means to be conscious I think you will find interesting; I just didn’t want to include all this information in it so I thought I would post it as a separate post; I think it will at least tackle many of your questions, although some of them are of course so big if I had the answers I would not be working in finance. I think the important thing to appreciate is you won’t ever get a clean cut agreement out of scientists on some of these issues as it is the forefront of scientific progress open to interpretation. So if you were to sing from the MW hymn sheet then yes you do enter a new world (not universe) when you make a choice in life – fixing the past and leaving the future open. In the Copenhagen Interpretation however you do not. Around your point on randomness; I like to think of randomness as defined from a human perspective i.e. there are too many variables or we have too little information to model the thing – so from a human point of view it is (and perhaps always will be) impossible to make predictive assumptions – so it seems random.
6. It is not often I encounter fighting with the fundamental unknowns on a personal level and it is a welcome experience to discover another perceptive discussion in this very irritating insecurity. As I have mentioned elsewhere I have come to the conclusion that this “myself”, with which each of us identifies, is a kind of manufactured gadget useful to the rather unknown powerful entity within this living creature where I live. This ultra being needs a tool to navigate the great unknown which is exterior to the creature so it receives a variety of inputs from the several senses and out of the influx of impulses invents what we call reality which is a very limited version of whatever universe is perceived. This fiction is where I live and try to get the superbeing of my body some form of tolerable maintenance. Each living creature invents its own reality since each one has different requirements and different sets of sensors and different experiences to decide how to react. Whatever the hell is really out there in the real reality is far too complex, full of irrelevants to our existences, that is discarded as of no utility. So even this elemental construction in which we each exist is far too incomplete to solve the major basic puzzles. But we do what we can.
Liked by 1 person
• A very interesting and philosophical point. I do hope that reality is not too complex; but just seemingly too complex in the sense that a computer would have seemed to a human of 500 years ago. Indeed we do what we can!
• In several recent scientific reports research has revealed that our fellow species which are equipped with quite different sense apparatus than humans are quite adept at many skills formerly assumed were human alone. Even bumblebees seem to become emotional with pinhead sized brains in somewhat the same way that humans might in delight. I do not presume that these very different creatures are the equal of humans in working with complex abstractions but only express that our own sensibilities, whatever we might make of them, probably are unresponsive to rather fundamental goings on that remain unknown. The current experiences with dark energies and dark matter are probably only the very tip of the great unknown we have yet to discover.
Liked by 1 person
• How interesting! And yes I do agree that we are only on the tip of the unknown; we have come so far and yet we have so far to go. That is what makes now a very exciting time to be alive!
7. I don’t pretend to understand the math, but it seems to me that the whole theory depends on being described within mathematical terms, a sort of language, and any language has limits in how it describes things. I’m more familiar with programming languages, where instead of x/2 you’d have true/false. Unknowns are described simply as Null rather than a superposition of all possible x’s, and can be, in fact must be checked for as neither the logical path for true or false can be assumed. However, checking for Null does not define it. Seems like that’s the key difference, the math of quantum physics wants to require a definition even if it’s all possibilities, programming logic simply acknowledges something is unknown and makes allowances for it.
Liked by 2 people
• Thank you for your comment Dave and you actually raise a very interesting point; I agree with your logic. You may be familiar with Rodger Penrose; a very famous UK mathematician who believes that certain elements of quantum mechanics may actually be non-computable. The collapse of the wave function is one such phenomena that it is currently believed may bot be possible to ever actually programme. That said there are many parallels that can be drawn particularly with the information theory; but you are right in the sense that we cannot simply allow for an unknown in the same way
Liked by 1 person
8. Hi thanks for such an interesting read, I find your blog fascinating. For the sake of discussion I am going to go out on a limb here and say some things that I have absolutely no empirical evidence of just to put it out there.
Space is information, our bodies are subject to classical physics (due to our limited sensory perception), but our mind dwells in the quantum world, mind is all and no thing. There is no problem with viewing the quantum world when we learn to see it with our mind, within meditation. And in the non dual awareness of mind nothing is ever lost or gained, the bank has no balance = 0
I am not sure if I favour the many worlds theory or the Copenhagen interpretation, I must read so much more on both subjects, however, I disagree with elements from both as you have described them. Probably the biggest issue with both is that it seems to me that they do not seem to account for multiple observers (7 billion of us) Perhaps the newer MIW does a good job of this.
I agree with you whole heartedly resistance to the lure of the quantum realm is futile.
Liked by 2 people
• Hi thank you very much for your kind words – I am glad that you find the blog of interest! MIW does allow for different observers it is certainly worth exploring further – in fact so does the information based approach. In that sense the cat is its own observer. It is often the case that when things start to get subjective they stray away from the realm of science – although I whole heartedly agree that they are the “big questions” of the future, how we can marry up these ideas to come up with the right answer!
Liked by 1 person
• Great question Joe,
I can only suggest two things, cross disciplinary dialog and lots of it, and quantum physicists need to learn to meditate. Only then can we begin to experiment in areas where mind and information meet.
Liked by 2 people
• nope sorry, thinking does not count but good try. Meditation is not avoiding the thoughts, but learning to see the space between them. Seeing what happens as they arise, exist, and fall back to where they arose from. Not so unlike most\all phenomenea. It is said that only an intillectual understanding of the ultimate truths is not enough, one must learn to see it as well.
Liked by 3 people
• Well then I am indeed lost! I have to say I do not fully understand everything you say, but that does not stop it being very interesting!
• The space between the thoughts
Here is where the world of meditation gets a little freaky and the pop culture understanding of mindfulness and relaxation ends and something difficult to describe begins to become apparent. Normally our inner dialogue or train of thought goes on and on. For most of us our thoughts seem endless and super fast. And every once in a while we “like” a thought and it materializes for us. I digress. Imagine sitting at the crossing gate waiting for a cargo train to go by and noticing that between every wagon there is a space that we can see past and into what is beyond the train. Actually if we sat here and blinked our eyes st the correct frequency we would only see this “space” and not the train anymore. This is what one can train to see in meditation. This is the place/space from where everything arises from and returns to in mind.
Liked by 1 person
• I am not sure that is something I can ever wrap my brain around! I can try, but I am afraid I may be a train with no carriages. Nonetheless the concepts are very interesting to me indeed
Liked by 1 person
9. Well, the good news is that you don’t have to wrap your brain around anything. Much like in science, meditation is an experiment with based on the experience of meditators practicing for the last few thousand years and your own experience on the meditation cushion. Everyone has similar worries when they start and we all have the same potential to meditate there are no people who cannot. Now I wish that was the case when someone shows me all those formulas and functions….
… that is just a train wreck on paper to me. 🙂
10. Let me see if I’ve got this right…
To observe something, we have to be able to see it.
To see really small things, we shine a light on them.
some things are so small that if you do shine a light on them, the light photons themselves can knock around the thing that you are trying to observe.
This interference can cause the really small thing that we are trying to observe to do things that it might not have done if we hadn’t shone the light on it to observe it
we don’t know what it does unless we observe it, right? Or not.
That seems simple enough…
Also, that is a FANTASTIC cat!
Liked by 2 people
• Hello and welcome to the blog! Exactly that yes; taking any form of measurement involves interfering with the thing you are trying to measure. A bit like if every time you looked at the scales you got lighter and then when you looked away you went back to your normal weight. You wouldn’t actually know your weight, but you may be able to deduce from other phenomena that you are actually quite a bit heavier!
Liked by 2 people
11. Okay, this is how you know I’ve not been feeling well:
I appreciated that Schrödinger’s cat walked into a bar. And didn’t. I loved Keith’s joke…got that. The dialogue between you and quantumpreceptor: check, because his thoughts sidled up along my vague ones. Dave Ply: Null. I get that, “I don’t know” is also a valid conclusion.
But I swear, Joe, even though I usually get a glimmer, I was unable to get past “All this feline chatter,” meaning The Sub-Title, not the text following it! One of my other selves, in one of those many other worlds, must have gotten all the deep thinking instincts this morning.
I’ll catch up. Eventually 😀
Liked by 1 person
12. Pingback: Consciousness, perception and fuzzy reality | Rationalising The Universe·
13. Pingback: A peculiar perspective on Creation – it’s ‘implausible’! | Richard's Watch·
14. Pingback: Conscious thought and fuzzy reality·
15. Pingback: What happens behind my back? | Rationalising The Universe·
16. Pingback: What happens behind my back?·
17. Pingback: Saving the Present | Rationalising The Universe·
18. Pingback: Saving the Present·
19. I’m glad to see more people realizing that nothing is random. It is just our observation skills and lack of in depth experimentation on such a small world which leaves us puzzled to what actually goes on.
Liked by 1 person
Leave a Reply
WordPress.com Logo
Google photo
Twitter picture
Facebook photo
Connecting to %s |
251bc3744f568165 | Monday, October 19, 2009
Intelligent Design - The Anthropic Hypothesis
Isaiah 1:18
"Come now, and let us reason together,",,,
Anthropo - Greek origin from anthropos: man, meaning man, human, as anthropology
In 1610, the Italian scientist Galileo Galilee (1564-1642) verified Polish astronomer Nicolaus Copernicus's (1473-1543) heliocentric theory. The heliocentric theory was hotly debated at the time, for it proposed a revolutionary idea for the 1600's stating all the planets revolved around the sun. Many people of the era had simply presumed everything in the universe revolved around the earth (geocentric theory), since from their limited perspective everything did seem to be revolving around the earth. As well the geocentric theory seems to agree with the religious sensibilities of being made in God's image, though the Bible never actually directly states the earth is the 'center' of the universe.
Job 26:7
“He stretches the north over empty space; He hangs the earth on nothing”
Galileo had improved upon the recently invented telescope. With this improved telescope he observed many strange things about the solar system. This included the phases of Venus as she revolved around the sun and the fact Jupiter had her own satellites (moons) which revolved around her. Thus, Galileo wrote and spoke about what had become obvious to him; the planets do indeed revolve around the sun. It is now commonly believed that man was cast down from his special place in the grand scheme of things, for the Earth beneath his feet no longer appeared to be the 'center of the universe', and indeed the Earth is now commonly believed by many people to be reduced to nothing but an insignificant speck of dust in the vast ocean of space. Yet actually the earth became exalted in the eyes of many people of that era, with its supposed removal from the center of the universe, since centrality in the universe had a very different meaning in those days. A meaning that equated being at the center of the universe with being at the 'bottom' of the universe, or being in the 'cesspool' of the universe.
The Copernican Revolution - March 2010
Excerpt: Danielson(2001) made a compelling case that this portrayal is the opposite of what really happened, i.e., that before the Copernican Revolution, Earth was seen not as being at the center, but rather at the bottom, the cesspool where all filth and corruption fell and accumulated.
Yet contrary to what is popularly believed by many people today, of the earth being nothing but a insignificant speck of dust lost in a vast ocean of space, there is actually a strong case to be made for the earth being central in the universe once again.
In what I consider an absolutely fascinating discovery, 4-dimensional (4D) space-time was created in the Big Bang and continues to 'expand equally in all places':
Where is the centre of the universe?:
Thus from a 3-dimensional (3D) perspective, any particular 3D spot in the universe is to be considered just as 'center of the universe' as any other particular spot in the universe is to be considered 'center of the universe'. This centrality found for any 3D place in the universe is because the universe is a 4D expanding hypersphere, analogous in 3D to the surface of an expanding balloon. All points on the surface are moving away from each other, and every point is central, if that’s where you live.
4-Dimensional Space-Time Of General Relativity - video
So in a holistic sense, as facts revealed later in this paper will bear out, it may now be possible for the earth to, once again, be considered 'central to the universe'. This intriguing possibility, for the earth to once again be considered central, is clearly illustrated by the fact the Cosmic Microwave Background Radiation (CMBR), remaining from the creation of the universe, forms a sphere around the earth.
Earth As The Center Of The Universe - illustrated image
The Known Universe - Dec. 2009 - a very cool video (please note the centrality of the earth in the universe)
This centrality that we observe for ourselves in the universe also happens to give weight to the verses of the Bible that indirectly imply centrality for the earth in the universe:
Psalm 102:19
The LORD looked down from His sanctuary on high, from heaven He viewed the earth,
On top of this '4D expanding hypersphere geometry', and other considerations of Einstein's special theory of relativity that show that the speed of light stays the same, while all other movement in the universe, no matter how fast or slow, is relative to that 'unchanging' speed of light, the primary reason the CMBR forms a sphere around the earth is because the quantum wave collapse of photons to their "uncertain" 3D wave/particle state, is dependent on 'conscious observation' in quantum mechanics. Moreover, this wave collapse of photons, to their 'uncertain' 3D wave/particle state, is shown by experiment to be instantaneous, and is also shown to be without regard to distance. i.e. It is universal for each observer (A. Aspect). CMBR, coupled with quantum mechanics, ultimately indicates that 'quantum information' about all points in the universe is actually available to each 'central observer', in any part of the 4D expanding universe, simultaneously. The primary reason that 'observers' are now to be considered 'central' in the reality of the universe is because of the failure of materialism to explain reality. Here is a clip of a talk in which Alain Aspect talks about the failure of 'local realism', or the failure of materialism, to explain reality:
Quantum Entanglement – The Failure Of Local Realism - Materialism - Alain Aspect - video
Physicists close two loopholes while violating local realism - November 2010
(of note: hidden variables were postulated to remove the need for 'spooky' forces, as Einstein termed them — forces that act instantaneously at great distances, thereby breaking the most cherished rule of relativity theory, that nothing can travel faster than the speed of light.)
Falsification of Local Realism without using Quantum Entanglement - Anton Zeilinger
One of the first, and most enigmatic, questions that arises from people after seeing the Quantum actions that are 'observed' in the infamous double slit experiment is, "What does conscious observation have to do with anything in the experiments of quantum mechanics?" and thus by extrapolation of that question, "What does conscious observation have to do with anything in the universe?" Yet, the seemingly counter-intuitive conclusion that consciousness is to be treated as a separate entity when dealing with quantum mechanics, and thus with the universe, has some very strong clout behind it.
Quantum mind–body problem
Parallels between quantum mechanics and mind/body dualism were first drawn by the founders of quantum mechanics including Erwin Schrödinger, Werner Heisenberg, Wolfgang Pauli, Niels Bohr, and Eugene Wigner
"It was not possible to formulate the laws (of quantum theory) in a fully consistent way without reference to consciousness." Eugene Wigner (1902 -1995) from his collection of essays "Symmetries and Reflections – Scientific Essays"; Eugene Wigner laid the foundation for the theory of symmetries in quantum mechanics, for which he received the Nobel Prize in Physics in 1963.
Here is the key experiment that led Wigner to his Nobel Prize winning work on quantum symmetries:
Eugene Wigner
Excerpt: To express this basic experience in a more direct way: the world does not have a privileged center, there is no absolute rest, preferred direction, unique origin of calendar time, even left and right seem to be rather symmetric. The interference of electrons, photons, neutrons has indicated that the state of a particle can be described by a vector possessing a certain number of components. As the observer is replaced by another observer (working elsewhere, looking at a different direction, using another clock, perhaps being left-handed), the state of the very same particle is described by another vector, obtained from the previous vector by multiplying it with a matrix. This matrix transfers from one observer to another.
i.e. In the experiment the 'world' (i.e. the universe) does not have a ‘privileged center’. Yet strangely, the conscious observer does exhibit a 'privileged center'. This is since the 'matrix', which determines which vector will be used to describe the particle in the experiment, is 'observer-centric' in its origination! Thus explaining Wigner’s dramatic statement, “It was not possible to formulate the laws (of quantum theory) in a fully consistent way without reference to consciousness.”
Further weight for consciousness to be treated as a separate entity in quantum mechanics, and thus the universe, is also found in the fact that it is impossible to 'geometrically' maintain 3-Dimensional spherical symmetry of the universe, within the sphere of the Cosmic Microwave Background Radiation, for each 3D point of the universe, unless all the 'higher dimensional quantum information waves' actually do collapse to their 'uncertain 3D wave/particle state', universally and instantaneously, for each point of conscious observation in the universe just as the experiments of quantum mechanics are telling us that they do. The 4-D expanding hypersphere of the space-time of general relativity is insufficient to maintain such 3D integrity/symmetry, all by itself, for each different 3D point of observation in the universe. The primary reason for why the 4D space-time, of the 3D universe, is insufficient to maintain 3D symmetry, by itself, is because the universe is shown to have only 10^79 atoms. In other words, it is geometrically impossible to maintain such 3D symmetry of centrality with finite 3D material resources to work with for each 3D point in the universe. Universal quantum wave collapse of photons, to each point of 'conscious observation' in the universe, is the only answer that has adequate sufficiency to explain the 3D centrality we witness for ourselves in this universe.
From a slightly different point of reasoning this following site, through a fairly exhaustive examination of the General Relativity equations themselves, acknowledges the insufficiency of General Relativity to account for the 'completeness' of 4D space-time within the sphere of the CMBR from different points of observation in the universe.
The Cauchy Problem In General Relativity - Igor Rodnianski
Excerpt: 2.2 Large Data Problem In General Relativity - While the result of Choquet-Bruhat and its subsequent refinements guarantee the existence and uniqueness of a (maximal) Cauchy development, they provide no information about its geodesic completeness and thus, in the language of partial differential equations, constitutes a local existence. ,,, More generally, there are a number of conditions that will guarantee the space-time will be geodesically incomplete.,,, In the language of partial differential equations this means an impossibility of a large data global existence result for all initial data in General Relativity.
The following article speaks of a proof developed by legendary mathematician Kurt Gödel, from a thought experiment, in which Gödel showed General Relativity could not be a complete description of the universe:
Excerpt: Gödel's personal God is under no obligation to behave in a predictable orderly fashion, and Gödel produced what may be the most damaging critique of general relativity. In a Festschrift, (a book honoring Einstein), for Einstein's seventieth birthday in 1949, Gödel demonstrated the possibility of a special case in which, as Palle Yourgrau described the result, "the large-scale geometry of the world is so warped that there exist space-time curves that bend back on themselves so far that they close; that is, they return to their starting point." This means that "a highly accelerated spaceship journey along such a closed path, or world line, could only be described as time travel." In fact, "Gödel worked out the length and time for the journey, as well as the exact speed and fuel requirements." Gödel, of course, did not actually believe in time travel, but he understood his paper to undermine the Einsteinian worldview from within.
The fact that photons are shown to travel as uncollapsed quantum information waves in the double slit experiment, and not as collapsed particles, is what gives us a solid reason for proposing this mechanism of the universal quantum wave collapse of photons to each conscious observer.
Double-slit experiment
Excerpt: In quantum mechanics, the double-slit experiment (often referred to as Young's experiment) demonstrates the inseparability of the wave and particle natures of light and other quantum particles. A coherent light source (e.g., a laser) illuminates a thin plate with two parallel slits cut in it, and the light passing through the slits strikes a screen behind them. The wave nature of light causes the light waves passing through both slits to interfere, creating an interference pattern of bright and dark bands on the screen. However, at the screen, the light is always found to be absorbed as though it were made of discrete particles, called photons.,,, Any modification of the apparatus that can determine (that can let us observe) which slit a photon passes through destroys the interference pattern, illustrating the complementarity principle; that the light can demonstrate both particle and wave characteristics, but not both at the same time.
Double Slit Experiment – Explained By Prof Anton Zeilinger – video
Also of note is exactly how well established quantum theory is as to being 'correct':
An experimental test of all theories with predictive power beyond quantum theory - May 2011
It is also interesting to note that materialists, instead of dealing forthrightly with the Theistic implications of quantum wave collapse, postulated quasi-infinite parallel universes, i.e. Many-Worlds, in which any absurdity would not be out of bounds in the infinite parallel universes i.e. Elvis could be president, pink elephants, etc.. etc.. in the Many-Worlds model;
Quantum mechanics
Excerpt: The Everett many-worlds interpretation, formulated in 1956, holds that all the possibilities described by quantum theory simultaneously occur in a multiverse composed of mostly independent parallel universes.[43] This is not accomplished by introducing some new axiom to quantum mechanics, but on the contrary by removing the axiom of the collapse of the wave packet:
This following experiment extended the double slit experiment to show that the 'spooky actions', for instantaneous quantum wave collapse, happen regardless of any considerations for time or distance i.e. The following experiment shows that quantum actions are 'universal and instantaneous':
Wheeler's Classic Delayed Choice Experiment:
Excerpt: Now, for many billions of years the photon is in transit in region 3. Yet we can choose (many billions of years later) which experimental set up to employ – the single wide-focus, or the two narrowly focused instruments. We have chosen whether to know which side of the galaxy the photon passed by (by choosing whether to use the two-telescope set up or not, which are the instruments that would give us the information about which side of the galaxy the photon passed). We have delayed this choice until a time long after the particles "have passed by one side of the galaxy, or the other side of the galaxy, or both sides of the galaxy," so to speak. Yet, it seems paradoxically that our later choice of whether to obtain this information determines which side of the galaxy the light passed, so to speak, billions of years ago. So it seems that time has nothing to do with effects of quantum mechanics. And, indeed, the original thought experiment was not based on any analysis of how particles evolve and behave over time – it was based on the mathematics. This is what the mathematics predicted for a result, and this is exactly the result obtained in the laboratory.
Genesis, Quantum Physics and Reality
Excerpt: Simply put, an experiment on Earth can be made in such a way that it determines if one photon comes along either on the right or the left side or if it comes (as a wave) along both sides of the gravitational lens (of the galaxy) at the same time. However, how could the photons have known billions of years ago that someday there would be an earth with inhabitants on it, making just this experiment? ,,, This is big trouble for the multi-universe theory and for the "hidden-variables" approach.
And of course all this leads us back to this question. "What does our conscious observation have to do with anything in collapsing the wave function of the photon in the double slit experiment and in the universe?",,
What drives materialists crazy is that consciousness cannot be seen, tasted, smelled, touched, heard, or studied in a laboratory. But how could it be otherwise? Consciousness is the very thing that is DOING the seeing, the tasting, the smelling, etc… We define material objects by their effect upon our senses – how they feel in our hands, how they appear to our eyes. But we know consciousness simply by BEING it!
Moreover, What is causing the quantum waves to collapse from their 'higher dimension' in the first place since we 'conscious' humans are definitely not the ones who are causing the photon waves to collapse to their 'uncertain 3D wave/particle' state? With the refutation of the materialistic 'hidden variable' argument and with the patent absurdity of the materialistic 'Many-Worlds' hypothesis, then I can only think of one sufficient explanation for quantum wave collapse to photon;
Psalm 118:27
God is the LORD, who hath shown us light:,,,
In the following article, Physics Professor Richard Conn Henry is quite blunt as to what quantum mechanics reveals to us about the 'primary cause' of our 3D reality:
Alain Aspect and Anton Zeilinger by Richard Conn Henry - Physics Professor - John Hopkins University
Excerpt: Why do people cling with such ferocity to belief in a mind-independent reality? It is surely because if there is no such reality, then ultimately (as far as we can know) mind alone exists. And if mind is not a product of real matter, but rather is the creator of the "illusion" of material reality (which has, in fact, despite the materialists, been known to be the case, since the discovery of quantum mechanics in 1925), then a theistic view of our existence becomes the only rational alternative to solipsism (solipsism is the philosophical idea that only one's own mind is sure to exist). (Dr. Henry's referenced experiment and paper - “An experimental test of non-local realism” by S. Gröblacher et. al., Nature 446, 871, April 2007 - “To be or not to be local” by Alain Aspect, Nature 446, 866, April 2007
Art Battson - Access Research Group
Personally I feel the word "illusion" was a bit too strong from Dr. Henry to describe material reality and would myself have opted for his saying something a little more subtle like; "material reality is a "secondary reality" that is dependent on the primary reality of God's mind" to exist." The following comment from a blogger on UD reflects fairly closely how I, as a Christian, view reality;
"I do believe in the physical, concrete universe as real. It isn’t just an illusion. However, being a Christian, I can say, also, that the spiritual realm is even more real than the physical. More real, in this sense, however, isn’t to be taken to mean that the physical is “less” real, but that it is less important. The physical, ultimately, really derives its significance from the spiritual, and not the other way around. I submit to you, though, that the spiritual reality, in some sense, needs the physical reality, just as a baseball game needs a place to be played. The game itself may be more important than the field, but the game still needs the field in order to be played. The players are the most important part of the game, but without bats, balls, and gloves, the players cannot play. Likewise, without a physical, concrete reality, the spiritual has “no place to play”. Love, without a concrete reality, has no place to act out its romance; joy has nothing to jump up and down on, and consciousness has nothing to wake up to." - Brent - UD Blogger
Professor Henry's bluntness on the implications of quantum mechanics continues here:
Quantum Enigma:Physics Encounters Consciousness - Richard Conn Henry - Professor of Physics - John Hopkins University
Excerpt: It is more than 80 years since the discovery of quantum mechanics gave us the most fundamental insight ever into our nature: the overturning of the Copernican Revolution, and the restoration of us human beings to centrality in the Universe.
And yet, have you ever before read a sentence having meaning similar to that of my preceding sentence? Likely you have not, and the reason you have not is, in my opinion, that physicists are in a state of denial…
As Professor Henry pointed out, it has been known since the discovery of quantum mechanics itself, early last century, that the universe is indeed 'Mental', as is illustrated by this quote from Max Planck.
Max Planck - The Father Of Quantum Mechanics - Das Wesen der Materie [The Nature of Matter], speech at Florence, Italy (1944)(Of Note: Max Planck Planck was a devoted Christian from early life to death, was a churchwarden from 1920 until his death, and believed in an almighty, all-knowing, beneficent God (though, paradoxically, not necessarily a personal one) This deep 'Christian connection', of Planck, is not surprising when you realize practically every, if not every, founder of each major branch of modern science also ‘just so happened’ to have some kind of a deep Christian connection.)
Colossians 1:17
I find it extremely interesting, and strange, that quantum mechanics tells us that instantaneous quantum wave collapse to its 'uncertain' 3-D state is centered on each individual observer in the universe, whereas, 4-D space-time cosmology (General Relativity) tells us each 3-D point in the universe is central to the expansion of the universe. These findings of modern science are pretty much exactly what we would expect to see if this universe were indeed created, and sustained, from a higher dimension by a omniscient, omnipotent, omnipresent, eternal Being who knows everything that is happening everywhere in the universe at the same time. These findings certainly seem to go to the very heart of the age old question asked of many parents by their children, “How can God hear everybody’s prayers at the same time?”,,, i.e. Why should the expansion of the universe, or the quantum wave collapse of the entire universe, even care that you or I, or anyone else, should exist? Only Theism offers a rational explanation as to why you or I, or anyone else, should have such undeserved significance in such a vast universe:
Psalm 33:13-15
Moreover, the argument for God from consciousness can be framed like this:
1. Consciousness either preceded all of material reality or is a 'epi-phenomena' of material reality.
2. If consciousness is a 'epi-phenomena' of material reality then consciousness will be found to have no special position within material reality. Whereas conversely, if consciousness precedes material reality then consciousness will be found to have a special position within material reality.
4. Therefore, consciousness is found to precede material reality.
The expansion of every 3D point in the universe, and the quantum wave collapse of the entire universe to each point of conscious observation in the universe, is obviously a very interesting congruence in science between the very large (relativity) and the very small (quantum mechanics). A congruence that Physicists, and Mathematicians, seem to be having a extremely difficult time 'unifying' into a 'theory of everything'.(Einstein, Penrose).
Roger Penrose
Quantum Mechanics Not In Jeopardy: Physicists Confirm Decades-Old Key Principle Experimentally - July 2010
Excerpt: the research group led by Prof. Gregor Weihs from the University of Innsbruck and the University of Waterloo has confirmed the accuracy of Born’s law in a triple-slit experiment (as opposed to the double slit experiment). "The existence of third-order interference terms would have tremendous theoretical repercussions - it would shake quantum mechanics to the core," says Weihs. The impetus for this experiment was the suggestion made by physicists to generalize either quantum mechanics or gravitation - the two pillars of modern physics - to achieve unification, thereby arriving at a one all-encompassing theory. "Our experiment thwarts these efforts once again," explains Gregor Weihs. (of note: Born's Law is an axiom that dictates that quantum interference can only occur between pairs of probabilities, not triplet or higher order probabilities. If they would have detected higher order interference patterns this would have potentially allowed a reformulation of quantum mechanics that is compatible with, or even incorporates, gravitation.)
"There are serious problems with the traditional view that the world is a space-time continuum. Quantum field theory and general relativity contradict each other. The notion of space-time breaks down at very small distances, because extremely massive quantum fluctuations (virtual particle/antiparticle pairs) should provoke black holes and space-time should be torn apart, which doesn’t actually happen." - G J Chaitin
The conflict of reconciling General Relativity and Quantum Mechanics appears to arise from the inability of either theory to successfully deal with the Zero/Infinity problem that crops up in different places of each theory:
Excerpt: The biggest challenge to today's physicists is how to reconcile general relativity and quantum mechanics. However, these two pillars of modern science were bound to be incompatible. "The universe of general relativity is a smooth rubber sheet. It is continuous and flowing, never sharp, never pointy. Quantum mechanics, on the other hand, describes a jerky and discontinuous universe. What the two theories have in common - and what they clash over - is zero.",, "The infinite zero of a black hole -- mass crammed into zero space, curving space infinitely -- punches a hole in the smooth rubber sheet. The equations of general relativity cannot deal with the sharpness of zero. In a black hole, space and time are meaningless.",, "Quantum mechanics has a similar problem, a problem related to the zero-point energy. The laws of quantum mechanics treat particles such as the electron as points; that is, they take up no space at all. The electron is a zero-dimensional object,,, According to the rules of quantum mechanics, the zero-dimensional electron has infinite mass and infinite charge.
Quantum Mechanics and Relativity – The Collapse Of Physics? – video – with notes as to plausible reconciliation that is missed by materialists
How Quantum Gravity Destroys Physicalism - video
Moreover, this extreme ‘mathematical difficulty', of reconciling General Relativity with Quantum Mechanics into the much sought after 'Theory of Everything', was actually somewhat foreseeable from previous work, earlier in the 20th century, in mathematics by Godel:
The following scientist offers a very interesting insight into this issue of 'reconciling' the mental universe of Quantum Mechanics with the space-time of General Relativity:
How the Power of Intention Alters Matter - Dr. William A. Tiller
Excerpt: "Most people think that the matter is empty, but for internal self consistency of quantum mechanics and relativity theory, there is required to be the equivalent of 10 to 94 grams of mass energy, each gram being E=MC2 kind of energy. Now, that's a huge number, but what does it mean practically? Practically, if I can assume that the universe is flat, and more and more astronomical data is showing that it's pretty darn flat, if I can assume that, then if I take the volume or take the vacuum within a single hydrogen atom, that's about 10 to the minus 23 cubic centimeters. If I take that amount of vacuum and I take the latent energy in that, there is a trillion times more energy there than in all of the mass of all of the stars and all of the planets out to 20 billion light-years. That's big, that's big. And if consciousness allows you to control even a small fraction of that, creating a big bang is no problem." - Dr. William Tiller - has been a professor at Stanford U. in the Department of materials science & Engineering
This following experiment is really interesting as to establishing the plausibility of Tiller's preceding hypothesis:
Scientific Evidence That Mind Effects Matter - Random Number Generators - video
I once asked a evolutionist, after showing him the preceding experiment, "Since you ultimately believe that the 'god of random chance/chaos' produced everything we see around us, what in the world is my mind doing pushing your god around?"
Yet, to continue on, the unification, into a 'theory of everything', between what is in essence the 'infinite Theistic world of Quantum Mechanics' and the 'finite Materialistic world of the space-time of General Relativity' seems to be directly related to what Jesus apparently joined together with His resurrection, i.e. related to the unification of infinite God with finite man. Dr. William Dembski in this following comment, though not directly addressing the Zero/Infinity conflict in General Relativity and Quantum Mechanics, offers insight into this 'unification' of the infinite and the finite:
William Dembski PhD. Mathematics
Also of related interest to this ‘Zero/Infinity conflict of reconciliation’, between General Relativity and Quantum Mechanics, is that a ‘uncollpased’ photon, in its quantum wave state, is mathematically defined as ‘infinite’ information:
Wave function
Quantum Computing – Stanford Encyclopedia
Single photons to soak up data:
It is important to note that the following experiment actually encoded information into a photon while it was in its quantum wave state, thus destroying the notion, held by many, that the wave function was not 'physically real' but was merely 'abstract'. i.e. How can information possibly be encoded into something that is not physically real but merely abstract?
Ultra-Dense Optical Storage - on One Photon
Excerpt: Researchers at the University of Rochester have made an optics breakthrough that allows them to encode an entire image's worth of data into a photon, slow the image down for storage, and then retrieve the image intact.
The following paper mathematically corroborated the preceding experiment and cleaned up some pretty nasty probabilistic incongruities that arose from a purely statistical interpretation, i.e. it seems that stacking a ‘random infinity', (parallel universes to explain quantum wave collapse), on top of another ‘random infinity', to explain quantum entanglement, leads to irreconcilable mathematical absurdities within quantum mechanics:
Quantum Theory's 'Wavefunction' Found to Be Real Physical Entity: Scientific American - November 2011
Excerpt: David Wallace, a philosopher of physics at the University of Oxford, UK, says that the theorem is the most important result in the foundations of quantum mechanics that he has seen in his 15-year professional career. "This strips away obscurity and shows you can't have an interpretation of a quantum state as probabilistic," he says.
The quantum (wave) state cannot be interpreted statistically - November 2011
Moreover there is actual physical evidence that lends strong support to the position that the 'Zero/Infinity conflict', we find between General Relativity and Quantum Mechanics, was successfully dealt with by Christ:
General Relativity, Quantum Mechanics, Entropy, and The Shroud Of Turin - updated video
Turin Shroud Enters 3D Age - Pictures, Articles and Videos
Turin Shroud 3-D Hologram - Face And Body - Dr. Petrus Soons - video
A Quantum Hologram of Christ's Resurrection? by Chuck Missler
Excerpt: “You can read the science of the Shroud, such as total lack of gravity, lack of entropy (without gravitational collapse), no time, no space—it conforms to no known law of physics.” The phenomenon of the image brings us to a true event horizon, a moment when all of the laws of physics change drastically. Dame Piczek created a one-fourth size sculpture of the man in the Shroud. When viewed from the side, it appears as if the man is suspended in mid air (see graphic, below), indicating that the image defies previously accepted science. The phenomenon of the image brings us to a true event horizon, a moment when all of the laws of physics change drastically.
THE EVENT HORIZON (Space-Time Singularity) OF THE SHROUD OF TURIN. - Isabel Piczek - Particle Physicist
Particle Radiation from the Body - M. Antonacci, A. C. Lind
Excerpt: The Shroud’s frontal and dorsal body images are encoded with the same amount of intensity, independent of any pressure or weight from the body. The bottom part of the cloth (containing the dorsal image) would have born all the weight of the man’s supine body, yet the dorsal image is not encoded with a greater amount of intensity than the frontal image. Radiation coming from the body would not only explain this feature, but also the left/right and light/dark reversals found on the cloth’s frontal and dorsal body images.
Shroud Of Turin Is Authentic, Italian Study Suggests - December 2011
Excerpt: Last year scientists were able to replicate marks on the cloth using highly advanced ultraviolet techniques that weren’t available 2,000 years ago — nor during the medieval times, for that matter.,,, Since the shroud and “all its facets” still cannot be replicated using today’s top-notch technology, researchers suggest it is impossible that the original image could have been created in either period.
Scientific hypotheses on the origin of the body image of the Shroud - 2010
Excerpt: for example, if we consider the density of radiation that we used to color a single square centimeter of linen, to reproduce the entire image of the Shroud with a single flash of light would require fourteen thousand lasers firing simultaneously each on a different area of linen. In other words, it would take a laser light source the size of an entire building.
Scientists say Turin Shroud is supernatural - December 2011
Excerpt: After years of work trying to replicate the colouring on the shroud, a similar image has been created by the scientists.
Press release Video on preceding paper:
Scientists Claim 'Shroud of Turin' Could Not Have Been Faked - video
Also of note as to providing a viable 'mechanism' for the apparent 'burst of light' emanating from the body of Christ:
Cellular Communication through Light
Biophotons - The Light In Our Cells - Marco Bischof - March 2005
Excerpt page 2: The Coherence of Biophotons: ,,, Biophotons consist of light with a high degree of order, in other words, biological laser light. Such light is very quiet and shows an extremely stable intensity, without the fluctuations normally observed in light. Because of their stable field strength, its waves can superimpose, and by virtue of this, interference effects become possible that do not occur in ordinary light. Because of the high degree of order, the biological laser light is able to generate and keep order and to transmit information in the organism.
The Real Bioinformatics Revolution - Proteins and Nucleic Acids 'Singing' to One Another?
Excerpt: the molecules send out specific frequencies of electromagnetic waves which not only enable them to ‘see' and ‘hear' each other, as both photon and phonon modes exist for electromagnetic waves, but also to influence each other at a distance and become ineluctably drawn to each other if vibrating out of phase (in a complementary way).,,, More than 1 000 proteins from over 30 functional groups have been analysed. Remarkably, the results showed that proteins with the same biological function share a single frequency peak while there is no significant peak in common for proteins with different functions; furthermore the characteristic peak frequency differs for different biological functions. ,,, The same results were obtained when regulatory DNA sequences were analysed.
Are humans really beings of light?
Excerpt: "We now know, today, that man is essentially a being of light.",,, "There are about 100,000 chemical reactions happening in every cell each second. The chemical reaction can only happen if the molecule which is reacting is excited by a photon... Once the photon has excited a reaction it returns to the field and is available for more reactions... We are swimming in an ocean of light."
Coast to Coast - Vicki's Near Death Experience (Blind From Birth) part 1 of 3
Quote from preceding video: 'I was in a body and the only way that I can describe it was a body of energy, or of light. And this body had a form. It had a head. It had arms and it had legs. And it was like it was made out of light. And 'it' was everything that was me. All of my memories, my consciousness, everything.' -
Vicky Noratuk
St. Augustine
Thus, when one allows God into math, as Godel indicated must ultimately be done to keep math from being 'incomplete', then there actually exists a very credible, empirically backed, reconciliation between Quantum Mechanics and General Relativity into a 'Theory of Everything'! Yet it certainly is one that many dogmatic Atheists will try to deny the relevance of.,,, As a footnote; Godel, who proved you cannot have a mathematical ‘Theory of Everything’, without allowing God to bring completeness to the 'Theory of Everything', also had this to say:
The God of the Mathematicians – Goldman
Excerpt: As Gödel told Hao Wang, “Einstein’s religion [was] more abstract, like Spinoza and Indian philosophy. Spinoza’s god is less than a person; mine is more than a person; because God can play the role of a person.” – Kurt Gödel – (Gödel is considered by many to be the greatest mathematician of the 20th century)
Philippians 2: 5-11
While I agree with a criticism, from a Christian, that was leveled against the preceding Shroud of Turin video, that God indeed needed no help from the universe in the resurrection event of Christ, I am, none-the-less, very happy to see that what is considered the number one problem of Physicists and Mathematicians in physics today, of a unification into a 'theory of everything' for what is in essence the finite materialistic world of General Relativity and the infinite Theistic world of Quantum Mechanics, does in fact seem to find a credible successful resolution for 'unification' within the resurrection event of Jesus Christ Himself. It seems almost overwhelmingly apparent to me from the 'scientific evidence' we now have that Christ literally ripped a hole in the finite entropic space-time of this universe to reunite infinite God with finite man. That modern science would even offer such a almost tangible glimpse into the mechanics of what happened in the tomb of Christ should be a source of great wonder and comfort for the Christian heart.
Psalms 16:10
Acts 2:31
A shortened form of the evidence is here:
Centrality of Each Individual Observer In The Universe and Christ’s Very Credible Reconciliation Of General Relativity and Quantum Mechanics
It is also interesting to note that 'higher dimensional' mathematics had to be developed before Einstein could elucidate General Relativity, or even before Quantum Mechanics could be elucidated;
The Mathematics Of Higher Dimensionality – Gauss & Riemann – video
3D to 4D shift - Carl Sagan - video with notes
Excerpt from Notes: The state-space of quantum mechanics is an infinite-dimensional function space. Some physical theories are also by nature high-dimensional, such as the 4-dimensional general relativity.
I think it should be fairly clear by now that, much contrary to the mediocrity of earth and of humans brought about by the heliocentric discoveries of Galileo and Copernicus, the findings of modern science are very comforting to Theistic postulations in general, and even lends strong support of plausibility to the main tenet of Christianity which holds Jesus Christ is the only begotten Son of God.
Matthew 28:18
Of related note; there is a mysterious 'higher dimensional' component to life:
Excerpt: Many fundamental characteristics of organisms scale
with body size as power laws of the form:
Y = Yo M^b,
4-Dimensional Quarter Power Scaling In Biology - video
Of related note to 'invariant' patterns found in life that are inexplicable for natural selection to explain the origination of:
Chargaff’s “Grammar of Biology”: New Fractal-like Rules - 2011
Excerpt from Conclusion: It was shown that these rules are valid for a large set of organisms: bacteria, plants, insects, fish and mammals. It is noteworthy that no matter the word length the same pattern is observed (self-similarity). To the best of our knowledge, this is the first invariant genomic properties publish(ed) so far, and in Science invariant properties are invaluable ones and usually they have practical implications.
Though Jerry Fodor and Massimo Piatelli-Palmarini rightly find it inexplicable for 'random' Natural Selection to be the rational explanation for the invariant scaling of the physiology, and anatomy, of living things to four-dimensional parameters, they do not seem to fully realize the implications this 'four dimensional scaling' of living things presents. This 4-D scaling is something we should rightly expect from a Intelligent Design perspective. This is because Intelligent Design holds that ‘higher dimensional transcendent information’ is more foundational to life, and even to the universe itself, than either matter or energy are. This higher dimensional 'expectation' for life, from a Intelligent Design perspective, is directly opposed to the expectation of the Darwinian framework, which holds that information, and indeed even the essence of life itself, is merely an 'emergent' property of the 3-D material realm.
Earth’s crammed with heaven,
And every common bush afire with God;
But only he who sees, takes off his shoes,
The rest sit round it and pluck blackberries.
- Elizabeth Barrett Browning
Excerpt: This paper highlights the distinctive and non-material nature of information and its relationship with matter, energy and natural forces. It is proposed in conclusion that it is the non-material information (transcendent to the matter and energy) that is actually itself constraining the local thermodynamics to be in ordered disequilibrium and with specified raised free energy levels necessary for the molecular and cellular machinery to operate.
Quantum entanglement holds together life’s blueprint - 2010
Excerpt: When the researchers analysed the DNA without its helical structure, they found that the electron clouds were not entangled. But when they incorporated DNA’s helical structure into the model, they saw that the electron clouds of each base pair became entangled with those of its neighbours. “If you didn’t have entanglement, then DNA would have a simple flat structure, and you would never get the twist that seems to be important to the functioning of DNA,” says team member Vlatko Vedral of the University of Oxford.
The relevance of continuous variable entanglement in DNA - July 2010
Quantum Information/Entanglement In DNA & Protein Folding - short video
Quantum Computing in DNA – Stuart Hameroff
Excerpt: Hypothesis: DNA utilizes quantum information and quantum computation for various functions. Superpositions of dipole states of base pairs consisting of purine (A,G) and pyrimidine (C,T) ring structures play the role of qubits, and quantum communication (coherence, entanglement, non-locality) occur in the “pi stack” region of the DNA molecule.,,, We can then consider DNA as a chain of qubits (with helical twist).
Output of quantum computation would be manifest as the net electron interference pattern in the quantum state of the pi stack, regulating gene expression and other functions locally and nonlocally by radiation or entanglement.
Quantum Action confirmed in DNA by direct empirical research;
DNA Can Discern Between Two Quantum States, Research Shows - June 2011
Excerpt: -- DNA -- can discern between quantum states known as spin. - The researchers fabricated self-assembling, single layers of DNA attached to a gold substrate. They then exposed the DNA to mixed groups of electrons with both directions of spin. Indeed, the team's results surpassed expectations: The biological molecules reacted strongly with the electrons carrying one of those spins, and hardly at all with the others. The longer the molecule, the more efficient it was at choosing electrons with the desired spin, while single strands and damaged bits of DNA did not exhibit this property.
Does DNA Have Telepathic Properties?-A Galaxy Insight - 2009
Can Quantum Mechanics Play a Role in DNA Damage Detection? (Short answer; YES!) – video - as well at about 27 Minute mark in the video - Fröhlich Condensation and Quantum Consciousness
It turns out that quantum information has been confirmed to be in protein structures as well;
Coherent Intrachain energy migration at room temperature - Elisabetta Collini & Gregory Scholes - University of Toronto - Science, 323, (2009), pp. 369-73
Excerpt: The authors conducted an experiment to observe quantum coherence dynamics in relation to energy transfer. The experiment, conducted at room temperature, examined chain conformations, such as those found in the proteins of living cells. Neighbouring molecules along the backbone of a protein chain were seen to have coherent energy transfer. Where this happens quantum decoherence (the underlying tendency to loss of coherence due to interaction with the environment) is able to be resisted, and the evolution of the system remains entangled as a single quantum state.
Quantum states in proteins and protein assemblies:
Excerpt: It is, in fact, the hydrophobic effect and attractions among non-polar hydrophobic groups by van der Waals forces which drive protein folding. Although the confluence of hydrophobic side groups are small, roughly 1/30 to 1/250 of protein volumes, they exert enormous influence in the regulation of protein dynamics and function. Several hydrophobic pockets may work cooperatively in a single protein (Figure 2, Left). Hydrophobic pockets may be considered the “brain” or nervous system of each protein.,,, Proteins, lipids and nucleic acids are composed of constituent molecules which have both non-polar and polar regions on opposite ends. In an aqueous medium the non-polar regions of any of these components will join together to form hydrophobic regions where quantum forces reign.
Myosin Coherence
Excerpt: Quantum physics and molecular biology are two disciplines that have evolved relatively independently. However, recently a wealth of evidence has demonstrated the importance of quantum mechanics for biological systems and thus a new field of quantum biology is emerging. Living systems have mastered the making and breaking of chemical bonds, which are quantum mechanical phenomena. Absorbance of frequency specific radiation (e.g. photosynthesis and vision), conversion of chemical energy into mechanical motion (e.g. ATP cleavage) and single electron transfers through biological polymers (e.g. DNA or proteins) are all quantum mechanical effects.
Here's another measure for quantum information in protein structures:
Proteins with cruise control provide new perspective:
The preceding is solid confirmation that far more complex information resides in proteins than meets the eye, for the calculus equations used for ‘cruise control’, that must somehow reside within the quantum information that is ‘constraining’ the entire protein structure to its ‘normal’ state, is anything but ‘simple classical information’. For a sample of the equations that must be dealt with, to ‘engineer’ even a simple process control loop like cruise control along a entire protein structure, please see this following site:
PID controller
It is very interesting to note that quantum entanglement, which conclusively demonstrates that ‘information’ in its pure 'quantum form' is completely transcendent of any time and space constraints, should be found in molecular biology on such a massive scale, for how can the quantum entanglement 'effect' in biology possibly be explained by a material (matter/energy) 'cause' when the quantum entanglement 'effect' falsified material particles as its own 'causation' in the first place? (A. Aspect) Appealing to the probability of various configurations of material particles, as Darwinism does, simply will not help since a timeless/spaceless cause must be supplied which is beyond the capacity of the material particles themselves to supply! To give a coherent explanation for an effect that is shown to be completely independent of any time and space constraints one is forced to appeal to a cause that is itself not limited to time and space! i.e. Put more simply, you cannot explain a effect by a cause that has been falsified by the very same effect you are seeking to explain! Improbability arguments of various 'special' configurations of material particles, which have been a staple of the arguments against neo-Darwinism, simply do not apply since the cause is not within the material particles in the first place! Yet it is also very interesting to note, in Darwinism's inability to explain this 'transcendent quantum effect' adequately, that Theism has always postulated a transcendent component to man that is not constrained by time and space. i.e. Theism has always postulated a 'living soul' for man that lives past the death of the body.
Genesis 2:7
Falsification Of Neo-Darwinism by Quantum Entanglement/Information
Does Quantum Biology Support A Quantum Soul? – Stuart Hameroff - video (notes in description)
Here are several more scriptures on man's 'eternal soul' at this following site:
Bible Reference Notes: “Hell and the Eternal Soul.”
Further notes:
The Unbearable Wholeness of Beings - Steve Talbott
The ‘Fourth Dimension’ Of Living Systems
Quantum no-hiding theorem experimentally confirmed for first time - March 2011
Quantum no-deleting theorem
Does the fact that quantum information, which cannot be created nor destroyed, is found in molecular biology, at such a foundational level and such a massive scale, provide conclusive proof for the 'living soul' of man??? Well, all by itself, maybe not 'conclusive proof' in the strictest sense of the notion, but it certainly makes the question 'Does man have a living soul?' a whole lot more integrated to how the foundation of reality itself is found to be structured! i.e. Makes it far more credible scientifically!
Of related interest, this following article is interesting for it draws attention to the fact that humans 'just so happen' to be near the logarithmic center of the universe, between Planck's length and the cosmic horizon of the cosmic background radiation (10^-33 cm and 10^28 cm respectively) .
Scale of the Universe (From Planck length to the Cosmic Background Radiation) - interactive scale
As to the fact that, as far as the solar system itself is concerned, the earth is not 'central', I find the fact that this seemingly insignificant earth is found to revolve around the much more massive sun to be a 'poetic reflection' of our true spiritual condition. In regards to God's 'kingdom of light', are we not to keep in mind our lives are to be guided by the much higher purpose which is tied to our future in God's kingdom of light? Are we not to avoid placing too much emphasis on what this world has to offer, since it is so much more insignificant than what heaven has to offer?
Louie Giglio - How Great Is Our God - Part 2 - video
You could fit 262 trillion earths inside (the star of) Betelgeuse. If the Earth were a golfball that would be enough to fill up the Superdome (football stadium) with golfballs,,, 3000 times!!! When I heard that as a teenager that stumped me right there because most of my praying had been advising God, correcting God, suggesting things to God, drawing diagrams for God, reviewing things with God, counseling God. - Louie Giglio
C.S. Lewis
Sara Groves - You Are The Sun - Music video
Psalm 8: 3-4
Journey Through the Universe - George Smoot- Frank Turek - video
The Catholic church put Galileo on trial for teaching planets revolve around the sun. They found him guilty of heresy; forced him to recant publicly what he had written; then placed him under house arrest. The religious leaders are said to have done this to Galileo because this supposed 'heresy' of Galileo is thought to have upset the basic biblical belief of man being made in God's image. Though the actual story of how science and religion became separated is a lot subtler than what is currently believed from the Galileo affair, (Why Galileo was Wrong, Even Though He was Right), this particular episode between the church and Galileo is now generally looked at as the start of what most people presume is a great divide between science and religion which has lasted for several centuries. Yet despite this common perception of a great divide, within the last century there has been a veritable avalanche of discovery, from many diverse fields of science, which has greatly narrowed this perception of a 'great divide' between science and religion.
The Return of the God Hypothesis - Stephen Meyer
video lecture:
Multiple Competing Worldviews - Stephen Meyer on John Ankerberg - video - November 4, 2011
Richard Dawkins Lies About William Lane Craig AND Logic! - video and article defending each argument
Evidence For the Existence of God - William Lane Craig - video lecture defending each argument
The narrowing of this great divide started with astronomer Edwin Hubble's (1889-1953) discovery, in 1929, of galaxies speeding away from each other. This, as well as many other discoveries confirming the Big Bang, has firmly established the universe actually had a beginning just as theologians have always claimed that it does.
Beyond The Big Bang: William Lane Craig Templeton Foundation Lecture (HQ) 1/6 - video
William Lane Craig vs Peter Millican: "Does God Exist?", Birmingham University, October 2011 - video
The Scientific Evidence For The Big Bang - Michael Strauss PhD. - video
Evidence Supporting the Big Bang
Quantum Evidence for a Theistic Big Bang
The best data we have [concerning the Big Bang] are exactly what I would have predicted, had I nothing to go on but the five books of Moses, the Psalms, the bible as a whole.
Dr. Arno Penzias, Nobel Laureate in Physics - co-discoverer of the Cosmic Background Radiation - as stated to the New York Times on March 12, 1978
“Certainly there was something that set it all off,,, I can’t think of a better theory of the origin of the universe to match Genesis”
Robert Wilson – Nobel laureate – co-discover Cosmic Background Radiation
George Smoot – Nobel laureate in 2006 for his work on COBE
“,,,the astronomical evidence leads to a biblical view of the origin of the world,,, the essential element in the astronomical and biblical accounts of Genesis is the same.”
Robert Jastrow – Founder of NASA’s Goddard Institute – Pg.15 ‘God and the Astronomers’
,,, 'And if your curious about how Genesis 1, in particular, fairs. Hey, we look at the Days in Genesis as being long time periods, which is what they must be if you read the Bible consistently, and the Bible scores 4 for 4 in Initial Conditions and 10 for 10 on the Creation Events'
Hugh Ross - Evidence For Intelligent Design Is Everywhere; video
Prof. Henry F. Schaefer cites several interesting quotes, from leading scientists in the field of Big Bang cosmology, about the Theological implications of the Big Bang in the following video:
The Big Bang and the God of the Bible - Henry Schaefer PhD. - video
Entire video:
"The Big Bang represents an immensely powerful, yet carefully planned and controlled release of matter, energy, space and time. All this is accomplished within the strict confines of very carefully fine-tuned physical constants and laws. The power and care this explosion reveals exceeds human mental capacity by multiple orders of magnitude."
Prof. Henry F. Schaefer - closing statement of part 5 of preceding video
The Creation Of The Universe (Kalam Cosmological Argument)- Lee Strobel - William Lane Craig - video
Dr. William Lane Craig defends the Kalam Cosmological argument for the existence of God against various attempted refutations - video playlist
Hugh Ross PhD. - Evidence For The Transcendent Origin Of The Universe - video
What Atheists Just Don't Get (About God) - Video
What Contemporary Physics and Philosophy Tell Us About Nature and God - Fr. Spitzer & Dr. Bruce Gordon (Dr. Gordon speaks for the last 25 minutes) - video
Formal Proof For The Transcendent Origin Of the Universe - William Lane Craig - video
Are Many Worlds and the Multiverse the Same Idea? - Sean Carroll
Excerpt: When cosmologists talk about “the multiverse,” it’s a slightly poetic term. We really just mean different regions of spacetime, far away so that we can’t observe them, but nevertheless still part of what one might reasonably want to call “the universe.” In inflationary cosmology, however, these different regions can be relatively self-contained — “pocket universes,” as Alan Guth calls them.
"The prediction of the standard model that the universe began to exist remains today as secure as ever—indeed, more secure, in light of the Borde-Guth-Vilenkin theorem and that prediction’s corroboration by the repeated and often imaginative attempts to falsify it. The person who believes that the universe began to exist remains solidly and comfortably within mainstream science." - William Lane Craig
Inflationary spacetimes are not past-complete - Borde-Guth-Vilenkin - 2003
Excerpt: inflationary models require physics other than inflation to describe the past boundary of the inflating region of spacetime.
Here is a video of Alan Guth,,,
Did Our Universe have a Beginning? (Alan Guth) - video
,,,Where towards the very end of the video, after considering some fairly exotic materialistic scenarios of 'eternal inflation' of 'pocket universes', Alan Guth concedes that "The ultimate theory for the origin of the universe is still very much up for grabs".
Alexander Vilenkin is far more direct than Alan Guth:
"It is said that an argument is what convinces reasonable men and a proof is what it takes to convince even an unreasonable man. With the proof now in place, cosmologists can long longer hide behind the possibility of a past eternal universe. There is no escape, they have to face the problem of a cosmic beginning." Alexander Vilenkin - Many Worlds In One - Pg. 176
"The conclusion is that past-eternal inflation is impossible without a beginning."
Alexander Vilenkin - from pg. 35 'New Proofs for the Existence of God' by Robert J. Spitzer (of note: A elegant thought experiment of a space traveler traveling to another galaxy, that Borde, Guth, and Vilenkin, used to illustrate the validity of the proof, is on pg. 35 of the book as well.)
Cosmologist Alexander Vilenkin of Tufts University in Boston
How Atheists Take Alexander Vilenkin (& the BVG Theorem) Out Of Context - William Lane Craig - video
Genesis 1:1-3
This following video gives a very small glimpse at the power involved when God said 'Let there be light':
God's Creative and Sustaining Word - Dr. Don Johnson - video
The following video and article are very suggestive as to providing almost tangible proof for God 'speaking' reality into existence:
The Deep Connection Between Sound & Reality - Evan Grant - Allosphere - video
Music of the sun recorded by scientists - June 2010
The following video is cool:
What pi sounds like (when put to music) - cool video
It is also very interesting to note that among all the 'holy' books, of all the major religions in the world, only the Holy Bible was correct in its claim for a transcendent origin of the universe. Some later 'holy' books, such as the Mormon text "Pearl of Great Price" and the Qur'an, copy the concept of a transcendent origin from the Bible but also include teachings that are inconsistent with that now established fact. (Ross; Why The Universe Is The Way It Is; Pg. 228; Chpt.9; note 5)
The Most Important Verse in the Bible - Prager University - video
The Uniqueness of Genesis 1:1 - William Lane Craig - video
This discovery, of a beginning for the universe, has crushed the materialistic belief that postulated the universe has always existed and had no beginning.
Christianity and The Birth of Science - Michael Bumbulis, Ph.D
Dr. Meyer on the Christian History of Science - video
A Short List Of The Christian Founders Of Modern Science
Founders of Modern Science Who Believe in GOD - Tihomir Dimitrov
The following is a good essay, by Robert C. Koons, in which the popular misconception of a war between science and religion, that neo-Darwinists often use in public to defend their, ironically, pseudo-scientific position, is in fact a gross misrepresentation of the facts. For not only does Robert Koons find Theism, particularly Chistian Theism, absolutely vital to the founding of modern science, but also argues that the Theistic worldview is necessary for the long term continued success of science into the future:
IV. The Dependency of Science Upon Theism (Page 21)
The Origin of Science
Excerpt: Modern science is not only compatible with Christianity, it in fact finds its origins in Christianity.
Christianity Is a Science-Starter, Not a Science-Stopper By Nancy Pearcey
The 'Person Of Christ' was, and is, necessary for science to start and persist!
Bruce Charlton's Miscellany - October 2011
Excerpt: I had discovered that over the same period of the twentieth century that the US had risen to scientific eminence it had undergone a significant Christian revival. ,,,The point I put to (Richard) Dawkins was that the USA was simultaneously by-far the most dominant scientific nation in the world (I knew this from various scientometic studies I was doing at the time) and by-far the most religious (Christian) nation in the world. How, I asked, could this be - if Christianity was culturally inimical to science?
In spite of the fact that modern science can be forcefully argued to owe its very existence to Christianity, many scientists before Hubble's discovery had been swayed by the materialistic philosophy and had thus falsely presumed the universe itself was infinite in size as well as falsely presumed it was eternal in duration. This 'simplistic' conclusion of theirs seems to stem from the fact that it is self evident that something cannot come from nothing, and they simply could not envision the logical necessity of a eternal transcendent Being who created this material realm. The materialistic philosophy was slightly supported by the first law of thermodynamics which states energy can neither be created nor destroyed by any material means. This belief of the universe having no beginning had held the upper hand in scientific circles even though the very next law, the second law of thermodynamics, 'entropy', or the law of universal decay into equilibrium, had raised some serious doubts about the validity of believing the universe had no beginning. As well in mathematics, in overlapping congruence with entropy, the mathematical impossibility of a temporal infinite regression of causes demanded a beginning for the universe; i.e. the existence of a material reality within time called for an 'Alpha', an 'Uncaused Cause', for the material universe that transcends the material universe.
William Lane Craig - Hilbert's Hotel - The Absurdity Of An Infinite Regress Of 'Things' - video
Time Cannot Be Infinite Into The Past - video
If there's a beginning, must there be a cause for that beginning? - Stephen Meyer - video
Does God Exist? - Argument From The Origin Of Nature - Kirk Durston - video
entire video:
The First Cause Must Be Different From All Other Causes - T.G. Peeler
Einstein's general relativity equation has now been extended to confirm not only did matter and energy have a beginning in the Big Bang, but space-time also had a beginning. i.e. The Big Bang was an absolute origin of space-time, matter-energy, and as such demands a cause which transcends space-time, matter-energy.
(Hawking, Penrose, Ellis) - 1970
In conjunction with the mathematical, and logical, necessity of an 'Uncaused Cause' to explain the beginning of the universe, in philosophy it has been shown that,,,
"The 'First Mover' is necessary for change occurring at each moment."
Michael Egnor - Aquinas’ First Way
I find this centuries old philosophical argument, for the necessity of a 'First Mover' accounting for change occurring at each moment, to be validated by quantum mechanics. One line of evidence arises from the smallest indivisible unit of time; Planck time:
Planck time
Excerpt: One Planck time is the time it would take a photon travelling at the speed of light to cross a distance equal to one Planck length. Theoretically, this is the smallest time measurement that will ever be possible,[3] roughly 10^−43 seconds. Within the framework of the laws of physics as we understand them today, for times less than one Planck time apart, we can neither measure nor detect any change. As of May 2010, the smallest time interval that was directly measured was on the order of 12 attoseconds (12 × 10^−18 seconds),[4] about 10^24 times larger than the Planck time.
The 'first mover' is further warranted to be necessary from quantum mechanics since the possibility for the universe to be considered a self-sustaining 'closed loop' of cause and effect is removed with the refutation of the 'hidden variable' argument, as first postulated by Einstein, in quantum entanglement experiments. As well, there also must be a sufficient transcendent cause (God/First Mover) to explain quantum wave collapse for 'each moment' of the universe.
God is the ultimate existence which grounds all of reality
It is also interesting to note that materialists, instead of honestly dealing with the obvious theistic implications of quantum mechanics, will many times invoke something called Everett's Many Worlds interpretation, also referred to as decoherence, when dealing with quantum mechanics. Yet this 'solution' ends up creating profound absurdities of logic rather than providing any rational solution:
Quantum mechanics
Excerpt: The Everett many-worlds interpretation, formulated in 1956, holds that all the possibilities described by quantum theory simultaneously occur in a multiverse composed of mostly independent parallel universes.[39] This is not accomplished by introducing some new axiom to quantum mechanics, but on the contrary by removing the axiom of the collapse of the wave packet:
Perhaps some may say that Everett’s Many Worlds interpretation of infinite parallel universes is not so absurd after all, if so,, then in some other parallel universe in which you also live, Elvis just so happens to be president of the United states, and you just so happen to come to the opposite conclusion, in that parallel universe, that Many Worlds is in fact absurd! For me, I find that type of 'flexible thinking', stemming from Many Worlds, to be completely absurd!!! Moreover, that one example from Many Worlds, of Elvis being President, is just small potatoes to the levels of absurdity that we would actually be witnessing if Many Worlds were the truth for how reality was constructed.
As a interesting sidelight to this, Einstein hated the loss of determinism that quantum mechanics brought forth to physics, as illustrated by his infamous 'God does not play dice' quote. yet on a deeper philosophical level, I’ve heard one physics professor say something to the effect that the lack of strict determinism in quantum wave collapse actually restored the free will of man to its rightful place, or probably more correctly he said something more like this,,, ‘The proof of free will is found in the indeterminacy of the quantum wave collapse'. I find this statement to be especially true now that conscious observation is shown to be primary to quantum wave collapse to a quasi-3D particle state, for how could our thoughts truly be free if they were merely the result of particle fluctuations in our brain, whether random or deterministic fluctuations of particles? Moreover as is quite obvious to most people, free will is taken as seriously true by all societies, or else why should we spank our children or punish anybody in jails if they truly had no free will to control their actions? Indeed what right would God have to judge anyone if they truly had no free will?
Though I feel very, very, comfortable with how the evidence fits the Theistic model of Quantum Mechanics, in which God is the cause of wave function/packet collapse to each unique observer in the universe, The following article deconstructs many, if not all, of the 'alternative' Quantum Mechanic Models.
The Metaphysics of Quantum Mechanics - James Daniel Sinclair - October 2010
Abstract: Is the science of Quantum Mechanics the greatest threat to Christianity? Some years ago the journal Christianity Today suggested precisely that. It is true that QM is a daunting subject. This barrier is largely responsible for the fear. But when the veil is torn away, the study of QM builds a remarkably robust Christian apologetic. When pragmatic & logically invalid interpretations are removed, there remain four possibilities for the nature of reality (based on the work of philosopher Henry Stapp). Additional analysis shows two are exclusive to theism. The third can be formulated with or without God. The last is consistent only with atheism. By considering additional criteria, options that deny God can be shown to be false.
This following video is very good, and easy to understand, for pointing out some of the unanswerable dilemmas that quantum mechanics presents to the atheistic philosophy of materialism as materialism is popularly understood:
Dr. Quantum - Double Slit Experiment & Entanglement - video
This following experiment extended Wheeler's delayed choice double slit experiment, which I referenced earlier, to highlight the centrality of 'information' in the Double Slit Experiment and refutes any 'detector centered' arguments for why the wave collapses:
The Experiment That Debunked Materialism - video - (delayed choice quantum eraser)
(Double Slit) A Delayed Choice Quantum Eraser - updated 2007
Excerpt: Upon accessing the information gathered by the Coincidence Circuit, we the observer are shocked to learn that the pattern shown by the positions registered at D0 (Detector Zero) at Time 2 depends entirely on the information gathered later at Time 4 and available to us at the conclusion of the experiment.
i.e. This experiment clearly shows that the ‘material’ detector is secondary in the experiment and that a conscious observer, being able to know the information of which path a photon takes with local certainty, is primary to the wave collapsing to a particle in the experiment.
It is also very interesting to note that some materialists seem to have a very hard time grasping the simple point of these extended double slit experiments, but to try to put it more clearly; To explain an event which defies time and space, as the quantum erasure experiment clearly does, you cannot appeal to any material entity in the experiment like the detector, or any other 3D physical part of the experiment, which is itself constrained by the limits of time and space. To give an adequate explanation for defying time and space one is forced to appeal to a transcendent entity which is itself not confined by time or space. But then again I guess I can see why forcing someone, who claims to be a atheistic materialist, to appeal to a non-material transcendent entity, to give an adequate explanation for such a ‘spooky’ event, would invoke such utter confusion on their part. Yet to try to put it in even more ‘shocking’ terms for the atheists, the ‘shocking’ conclusion of the experiment is that a transcendent Mind, with a capital M, must precede the collapse of quantum waves to 3-Dimensional particles. Moreover, it is impossible for a human mind to ever ‘emerge’ from any 3-D material basis which is dependent on a preceding conscious cause for its own collapse to a 3D state in the first place. This is more than a slight problem for the atheistic-evolutionary materialist who insists that our minds simply ‘emerged’, or evolved, from a conglomeration of 3D matter. In the following article Professor Henry puts it more clearly than I can:
The Mental Universe - Richard Conn Henry - Professor of Physics John Hopkins University
Excerpt: The only reality is mind and observations, but observations are not of things. To see the Universe as it really is, we must abandon our tendency to conceptualize observations as things.,,, Physicists shy away from the truth because the truth is so alien to everyday physics. A common way to evade the mental universe is to invoke "decoherence" - the notion that "the physical environment" is sufficient to create reality, independent of the human mind. Yet the idea that any irreversible act of amplification is necessary to collapse the wave function is known to be wrong: in "Renninger-type" experiments, the wave function is collapsed simply by your human mind seeing nothing. The universe is entirely mental,,,, The Universe is immaterial — mental and spiritual. Live, and enjoy.
Astrophysicist John Gribbin comments on the Renninger experiment here:
Solving the quantum mysteries - John Gribbin
Excerpt: From a 50:50 probability of the flash occurring either on the hemisphere or on the outer sphere, the quantum wave function has collapsed into a 100 per cent certainty that the flash will occur on the outer sphere. But this has happened without the observer actually "observing" anything at all! It is purely a result of a change in the observer's knowledge about what is going on in the experiment.
i.e. The detector is completely removed as to being the primary cause of quantum wave collapse in the experiment. As Richard Conn Henry clearly implied previously, in the experiment it is found that 'The physical environment' IS NOT sufficient within itself to 'create reality', i.e. 'The physical environment' IS NOT sufficient to explain quantum wave collapse to a 'uncertain' 3D particle.
Walt Whitman - Miracles
That the mind of a individual observer would play such an integral, yet not complete ‘closed loop’ role, in instantaneous quantum wave collapse to uncertain 3-D particles, gives us clear evidence that our mind is a unique entity. A unique entity with a superior quality of existence when compared to the uncertain 3D particles of the material universe. This is clear evidence for the existence of the ‘higher dimensional mind’ of man that supersedes any material basis that the mind has been purported to emerge from by materialists. I would also like to point out that the ‘effect’, of universal quantum wave collapse to each ‘central 3D observer’ in the universe (Wheeler; Delayed Choice, Wigner; Quantum Symmetries), gives us clear evidence of the extremely special importance that the ’cause’ of the ‘Infinite Mind of God’ places on each of our own individual souls/minds.
Psalm 139:17-18
These following studies and videos confirm this 'superior quality' of existence for our souls/minds:
Darwinian Evolution Vs. Consciousness (Soul) - video (notes in description of video)
Alvin Plantinga and the Modal Argument (for the existence of the soul) - video
Removing Half of Brain Improves Young Epileptics' Lives:
‘Surprisingly’, at the molecular level, the cells of the brain are found to be extremely ‘plastic’ to changes in ‘activity in the brain’ which is, or course, completely contrary to the reductive materialistic view of the mind ‘emerging’ from the material brain;
DNA Dynamism - PaV - October 2011
Excerpt: “It was mind-boggling to see that so many methylation sites — thousands of sites — had changed in status as a result of brain activity,” Song says. “We used to think that the brain’s epigenetic DNA methylation landscape was as stable as mountains and more recently realized that maybe it was a bit more subject to change, perhaps like trees occasionally bent in a storm. But now we show it is most of all like a river that reacts to storms of activity by moving and changing fast.”
Further notes on the transcendence of 'mind':
The Day I Died - Part 4 of 6 - The Extremely 'Monitored' Near Death Experience of Pam Reynolds - video
The following is on par with Pam Reynolds Near Death Experience. In the following video, Dr. Lloyd Rudy, a pioneer of cardiac surgery, tells stories of two patients who came back to life after being declared dead, and what they told him.
Famous Cardiac Surgeon’s Stories of Near Death Experiences in Surgery
The Scientific Evidence for Near Death Experiences - Dr Jeffery Long - Melvin Morse M.D. - video
Blind Woman Can See During Near Death Experience (NDE) - Pim von Lommel - video
Kenneth Ring and Sharon Cooper (1997) conducted a study of 31 blind people, many of who reported vision during their Near Death Experiences (NDEs). 21 of these people had had an NDE while the remaining 10 had had an out-of-body experience (OBE), but no NDE. It was found that in the NDE sample, about half had been blind from birth. (of note: This 'anomaly' is also found for deaf people who can hear sound during their Near Death Experiences(NDEs).)
A neurosurgeon confronts the non-material nature of consciousness - December 2011
Excerpted quote: To me one thing that has emerged from my experience and from very rigorous analysis of that experience over several years, talking it over with others that I respect in neuroscience, and really trying to come up with an answer, is that consciousness outside of the brain is a fact. It’s an established fact. And of course, that was a hard place for me to get, coming from being a card-toting reductive materialist over decades. It was very difficult to get to knowing that consciousness, that there’s a soul of us that is not dependent on the brain.
Neurosurgeon Dr. Eben Alexander’s Near-Death Experience Defies Medical Model of Consciousness - audio interview
Of interest to Near Death Experiences is the fact that many Experiencers say that when they look at their body, while having a Near Death Experience, they find that their body is made of light. Well interestingly, it is found that humans emit 'ultra-weak' light;
Cellular Communication through Light
Are humans really beings of light?
Vicky Noratuk
Also of 'spiritual interest' is the fact that many responses of the mind are found to defy time and space:
Quantum Consciousness - Time Flies Backwards? - Stuart Hameroff MD
Excerpt: Dean Radin and Dick Bierman have performed a number of experiments of emotional response in human subjects. The subjects view a computer screen on which appear (at randomly varying intervals) a series of images, some of which are emotionally neutral, and some of which are highly emotional (violent, sexual....). In Radin and Bierman's early studies, skin conductance of a finger was used to measure physiological response They found that subjects responded strongly to emotional images compared to neutral images, and that the emotional response occurred between a fraction of a second to several seconds BEFORE the image appeared! Recently Professor Bierman (University of Amsterdam) repeated these experiments with subjects in an fMRI brain imager and found emotional responses in brain activity up to 4 seconds before the stimuli. Moreover he looked at raw data from other laboratories and found similar emotional responses before stimuli appeared.
Quantum Coherence and Consciousness – Scientific Proof of ‘Mind’ – video
Particular quote of note from preceding video;
“Wolf Singer Director of the Max Planck Institute for Brain Research (Frankfurt) has found evidence of simultaneous oscillations in separate areas of the cortex, accurately synchronized in phase as well as frequency. He suggests that the oscillations are synchronized from some common source, but the actual source has never been located.”
James J. Hurtak, Ph.D.
Brain ‘entanglement’ could explain memories - January 2010
Excerpt: In both cases, the researchers noticed that the voltage of the electrical signal in groups of neurons separated by up to 10 millimetres sometimes rose and fell with exactly the same rhythm. These patterns of activity, dubbed “coherence potentials”, often started in one set of neurons, only to be mimicked or “cloned” by others milliseconds later. They were also much more complicated than the simple phase-locked oscillations and always matched each other in amplitude as well as in frequency. (Perfect clones) “The precision with which these new sites pick up on the activity of the initiating group is quite astounding – they are perfect clones,” says Plen
In part three of this following video is a very interesting finding that indicates that 'transcendent' quantum coherence within the brain is far less active upon a person entering a sleeping state:
Through The Wormhole S2 E1 (1/6)
The preceding 'quantum evidence' provides a foundation for a plausible 'transcendent mechanism' for the following study:
Bridging the Gap - October 2011
Excerpt: Like a bridge that spans a river to connect two major metropolises, the corpus callosum is the main conduit for information flowing between the left and right hemispheres of our brains. Now, neuroscientists at the California Institute of Technology (Caltech) have found that people who are born without that link—a condition called agenesis of the corpus callosum, or AgCC—still show remarkably normal communication across the gap between the two halves of their brains.
This following study adds weight to the 'transcendence of mind';
Study suggests precognition may be possible - November 2010
Excerpt: A Cornell University scientist has demonstrated that psi anomalies, more commonly known as precognition, premonitions or extra-sensory perception (ESP), really do exist at a statistically significant level.
Mind-Brain Interaction and Science Fiction (Quantum connection) - Jeffrey Schwartz & Michael Egnor - audio
Do Conscious Thoughts Cause Behavior? -Roy F. Baumeister, E. J. Masicampo, and Kathleen D. Vohs - 2010
Excerpt: The evidence for conscious causation of behavior is profound, extensive, adaptive, multifaceted, and empirically strong.
"Thought precedes action as lightning precedes thunder."
Heinrich Heine - in the year 1834
A Reply to Shermer Medical Evidence for NDEs (Near Death Experiences) – Pim van Lommel
Excerpt: For decades, extensive research has been done to localize memories (information) inside the brain, so far without success.,,,,So we need a functioning brain to receive our consciousness into our waking consciousness. And as soon as the function of brain has been lost, like in clinical death or in brain death, with iso-electricity on the EEG, memories and consciousness do still exist, but the reception ability is lost. People can experience their consciousness outside their body, with the possibility of perception out and above their body, with identity, and with heightened awareness, attention, well-structured thought processes, memories and emotions. And they also can experience their consciousness in a dimension where past, present and future exist at the same moment, without time and space, and can be experienced as soon as attention has been directed to it (life review and preview), and even sometimes they come in contact with the “fields of consciousness” of deceased relatives. And later they can experience their conscious return into their body.
And though it is not possible to localize memories (information) inside the brain, it is interesting to note how extremely complex the brain is in its ability to manipulate rudimentary information:
Boggle Your Brain - November 2010
Excerpt: One synapse, by itself, is more like a microprocessor--with both memory-storage and information-processing elements--than a mere on/off switch. In fact, one synapse may contain on the order of 1,000 molecular-scale switches. A single human brain has more switches than all the computers and routers and Internet connections on Earth.
This following experiment is really interesting:
Scientific Evidence That Mind Effects Matter - Random Number Generators - video
I once asked a evolutionist, after showing him the preceding experiment, "Since you ultimately believe that the 'god of random chance' produced everything we see around us, what in the world is my mind doing pushing your god around?"
The Mind Is Not The Brain - Scientific Evidence - Rupert Sheldrake - (Referenced Notes)
Here a Darwinian Psychologist has a moment of honesty facing the 'hard problem' that consciousness presents to materialism;
David Barash - Materialist/Atheist Darwinian Psychologist
Here is another article that is far more nuanced in the discerning of 'transcendent mind' from material brain, than the 'brute' empirical evidence I've listed:
The Mind and Materialist Superstition - Six "conditions of mind" that are irreconcilable with materialism:
Angus Menuge Interviewed by Apologetics 315 - audio interview
Description: Today's interview is with Dr. Angus Menuge, Professor of Philosophy at Concordia University, and author of Agents Under Fire: Materialism and the Rationality of Science. He talks about his background and work, the philosophy of mind, what reason (or reasoning) is, what materialism is as a worldview, things excluded from a materialistic worldview, methodological naturalism and materialism, accounting for free will, materialistic accounts of reason, the epistemological argument from reason, the ontological argument from reason, finding the best explanation for reason, problems with methodological naturalism, implications of materialism, practical application of the argument from reason, advice for apologists, the International Academy of Apologetics, and more.
Materialism and Human Dignity - Casey Luskin interviews Michael Egnor, professor of neurosurgery at SUNY, Stony Brook, on the relationship between the mind and the brain. - podcast
Is the Brain Just an Illusion? - Anika Smith interviews Denyse O'Leary - podcast
Genesis 2:7
I find it very interesting that the materialistic belief of the universe being stable, and infinite in duration, was so deeply rooted in scientific thought that Albert Einstein (1879-1955), when he was shown his general relativity equation indicated a universe that was unstable and would 'draw together' under its own gravity, added a cosmological constant to his equation to reflect a stable universe rather than entertain the thought that the universe had a beginning.
Einstein and The Belgian Priest, George Lemaitre - The "Father" Of The Big Bang Theory - video
The Universe Had a Definite Beginning - Einstein and Edwin Hubble - Stephen Meyer on the John Ankerberg show - video
of note: This was not the last time Einstein's base materialistic philosophy had severely misled him. He was also severely misled in the Bohr–Einstein debates in which he was repeatedly proven wrong in challenging the 'spooky action at a distance' postulations of the emerging field of quantum mechanics. This following video, which I listed earlier, bears worth repeating since it highlights the Bohr/Einstein debate and the decades long struggle to 'scientifically' resolve the disagreement between them:
The Failure Of Local Realism or Reductive Materialism - Alain Aspect - video
The following is an interesting exchange between Bohr and Einstein:
God does not play dice with the cosmos.
Albert Einstein
In response Niels Bohr said,
Do not presume to tell God what to do.
Though many words could be written on the deep underlying philosophical issues of that exchange between Bohr and Einstein, my take on the whole matter is summed up nicely, and simply, in the following verse and video:
Proverbs 16:33
Chance vs. God: The Battle of First Causes – John MacArthur - 10 minute audio
When astronomer Edwin Hubble published empirical evidence indicating a beginning for the universe, Einstein ended up calling the cosmological constant, he had added to his equation, the biggest blunder of his life. But then again mathematically speaking, Einstein's 'fudge factor' was not so much of a blunder after all. In the 1990's a highly modified cosmological constant, representing the elusive 'Dark Energy' to account for the accelerated expansion of the universe, was reintroduced into general relativity equations to explain the discrepancy between the ages of the oldest stars in the Milky Way galaxy and the age of the universe. Far from providing a materialistic solution, which would have enabled the universe to be stable and infinite as Einstein had originally envisioned, the finely-tuned cosmological constant, finely-tuned to 1 part in 10^120, has turned into one of the most powerful evidences of design from many finely-tuned universal constants of the universe. Universal, and transcendent, constants that seem to have no other apparent reason for being at precise values than to enable carbon-based life to be possible in this universe. These transcendent universal constants dramatically demonstrate the need for a infinitely powerful transcendent Creator, to account for the fact the universe is apparently the stunning work of a master craftsman who had carbon-based life in mind as His final goal. If the avalanche of incoming scientific evidence keeps going in the same direction as it has been going for the last century, and there is no hint the evidence will change directions, human beings, warts and all, could once again be popularly viewed as God's ultimate purpose for creating this universe. Man and the earth beneath his feet could very well be looked at as the 'center of the universe' by both scientists and theologians.
Genesis 1:26-27
Then God said, "Let us make man in Our image, according to Our image, according to Our likeness; let them have dominion ..."
God Of Wonders - City On A Hill - music video
A straight-forward interpretation of the anthropic hypothesis is simple in its proposition. It proposes the entire universe, in all its grandeur, was purposely created by an infinitely powerful transcendent Creator specifically with human beings in mind as the end result. Therefore a strict interpretation of the anthropic hypothesis would propose that each level of the universe's development towards man may reflect the handiwork of such a Creator. Here are some resources reflecting that approach:
"Creation as Science" - Hugh Ross - A Testable Creation Model - video
Is There Scientific Evidence for the Existence of God? - Walter Bradley
Creation of the Cosmos - Walter Bradley - video
God Is Not Dead Yet - William Lane Craig - The Revival of Theism In Philosophy since the 1960's
William Lane Craig lecture on Richard Dawkins book 'The God Delusion' - video
The investigative tool for the hypothesis is this: all the universe's 'links of chain' to the appearance of man may be deduced as 'intelligently designed' with what is termed 'irreducible complexity'. The term 'irreducible complexity' was coined in molecular biology by biochemist Michael Behe PhD. (1952-present) in his book 'Darwin's Black Box'. Irreducible complexity is best understood by comparison. It is similar to saying each major part of a finely made Swiss watch is necessary for the watch to operate. Take away any part and the watch will fail to operate. Though individual parts of the watch, or even a watch itself, may have some other purpose in some other system, the principle of integration for a specific singular purpose is a very anti-Darwinian concept that steadfastly resists materialistic explanation. In molecular biology the best known example for irreducible complexity, and thus for Intelligent Design, has become the bacterial flagellum.
Bacterial Flagellum - A Sheer Wonder Of Intelligent Design - video
Bacterial Flagellum: Visualizing the Complete Machine In Situ
Excerpt: Electron tomography of frozen-hydrated bacteria, combined with single particle averaging, has produced stunning images of the intact bacterial flagellum, revealing features of the rotor, stator and export apparatus.
Electron Microscope Photograph of Flagellum Hook-Basal Body
Engineering at Its Finest: Bacterial Chemotaxis and Signal Transduction - JonathanM - September 2011
Excerpt: The bacterial flagellum represents not just a problem of irreducible complexity. Rather, the problem extends far deeper than that. What we are now observing is the existence of irreducibly complex systems within irreducibly complex systems. How random mutations, coupled with natural selection, could have assembled such a finely set-up system is a question to which I defy any Darwinist to give a sensible answer.
Biologist Howard Berg at Harvard calls the Bacterial Flagellum
“the most efficient machine in the universe."
The flagellum has steadfastly resisted all attempts to elucidate its plausible origination by Darwinian processes, much less has anyone ever actually evolved a flagellum from scratch in the laboratory;
Genetic Entropy Refutation of Nick Matzke's TTSS (type III secretion system) to Flagellum Evolutionary Narrative:
Excerpt: Comparative genomic analysis show that flagellar genes have been differentially lost in endosymbiotic bacteria of insects. Only proteins involved in protein export within the flagella assembly pathway (type III secretion system and the basal-body) have been kept...
Excerpt: I am convinced that the T3SS is almost certainly younger than the flagellum. If one aligns the amino acid sequences of the flagellar proteins (that have homologous counterparts in the T3SS), and if one also aligns the amino acid sequences of the T3SS proteins, one finds that the T3SS protein amino acid sequences are much more conserved than the amino acid sequences of the flagellar proteins.,,, - LivingstoneMorford - experimental scientist - UD blogger
Stephen Meyer - T3SS Derived From Bacterial Flagellum (Successful ID Prediction) - video
Phylogenetic Analyses of the Constituents of Type III Protein Secretion Systems
Excerpt: We suggest that the flagellar apparatus was the evolutionary precursor of Type III protein secretion systems.
Peer-Reviewed Paper Investigating Origin of Information Endorses Irreducible Complexity and Intelligent Design - A.C. McIntosh per Casey Luskin - July 2010
Excerpt: many think that that debate has been settled by the work of Pallen and Matzke where an attempt to explain the origin of the bacterial flagellum rotary motor as a development of the Type 3 secretory system has been made. However, this argument is not robust simply because it is evident that there are features of both mechanisms which are clearly not within the genetic framework of the other.
Presenting the Positive Case for Design - Casey Luskin - February 14, 2012
Excerpt: If you think of the flagellum like an outboard motor, and the T3SS like a squirt gun, the parts they share are the ones that allow them to be mounted on the bracket of a boat. But the parts that give them their distinct functions -- propulsion or injection -- are not shared. I said that thinking you can explain the flagellum simply by referring me to the T3SS is like saying if you can account for the origin of the mounting-bracket on the back of you boat, then you've explained the origin of the motor too -- which obviously makes no sense.
"One fact in favour of the flagellum-first view is that bacteria would have needed propulsion before they needed T3SSs, which are used to attack cells that evolved later than bacteria. Also, flagella are found in a more diverse range of bacterial species than T3SSs. ‘The most parsimonious explanation is that the T3SS arose later," Howard Ochman - Biochemist - New Scientist (Feb 16, 2008)
Michael Behe on Falsifying Intelligent Design - video
Genetic analysis of coordinate flagellar and type III - Scott Minnich and Stephen Meyer
Michael Behe Hasn't Been Refuted on the Flagellum - March 2011
Bacterial Flagella: A Paradigm for Design – Scott Minnich – Video
Ken Miller's Inaccurate and Biased Evolution Curriculum - Casey Luskin - 2011
Excerpt: One mutation, one part knock out, it can't swim. Put that single gene back in we restore motility. ... knock out one part, put a good copy of the gene back in, and they can swim. By definition the system is irreducibly complex. We've done that with all 35 components of the flagellum, and we get the same effect. - Scott Minnich
Flagellum - Sean D. Pitman, M.D.
The Bacterial Flagellum – Truly An Engineering Marvel! - December 2010
Hiroyuki Matsuura, Nobuo Noda, Kazuharu Koide Tetsuya Nemoto and Yasumi Ito
Excerpt from bottom page 7: Note that the physical principle of flagella motor does not belong to classical mechanics, but to quantum mechanics. When we can consider applying quantum physics to flagella motor, we can find out the shift of energetic state and coherent state.
The manner in which bacteria with flagella move is also very interesting;
Structures and Mechanisms of Bacterial Motility - Marty Player
Excerpt: motile bacteria move in a random running and tumbling pattern when in an isotonic solution. While this type of movement may be totally random in some situations, in others motile bacteria bias this random walk.
Towards the end of the following video is a excellent animation of the 'running and tumbling' motion of bacteria with flagella;
Animations from E O Wilson’s Lord of the Ants – video
"A Bit Unprepossessing": Plantinga on the Logic of Dawkins's Blind Watchmaker - Jay W. Richards February 9, 2012
Excerpt: what Dawkins has in mind is something like this: If it's unlikely that a bacterial flagellum could have arisen by chance or the Darwinian mechanism, then any agent that designed the flagellum would be even less likely.
Plantinga finds a fatal problem here. Dawkins defines complexity as the property of something that has parts "arranged in a way that is unlikely to have arisen by chance alone." But God is immaterial and so doesn't have parts in this sense. According to Dawkins's own definition of complexity, therefore, God is not complex. One can make a similar point without invoking God. It doesn't follow that because an agent can produce organized complexity, that the agent is complex. (Frankly, I don't think it makes sense to refer to any agent as "complex.") Organized complexity might very well be a reliable sign of an intelligent agent. So Dawkins's argument against the improbability of God's existence, and, a fortiori, the improbability of intelligent design, fails.
As well, it has now been demonstrated that the specific sequence complexity, of a functional protein, can be mathematically quantified as functional information bits(Fits).
Functional information and the emergence of bio-complexity:
Abstract: Complex emergent systems of many interacting components, including complex biological systems, have the potential to perform quantifiable functions. Accordingly, we define 'functional information,' I(Ex), as a measure of system complexity. For a given system and function, x (e.g., a folded RNA sequence that binds to GTP), and degree of function, Ex (e.g., the RNA-GTP binding energy), I(Ex)= -log2 [F(Ex)], where F(Ex) is the fraction of all possible configurations of the system that possess a degree of function > Ex. Functional information, which we illustrate with letter sequences, artificial life, and biopolymers, thus represents the probability that an arbitrary configuration of a system will achieve a specific function to a specified degree. In each case we observe evidence for several distinct solutions with different maximum degrees of function, features that lead to steps in plots of information versus degree of functions.
Mathematically Defining Functional Information In Molecular Biology - Kirk Durston - short video
Entire video:
and this paper:
Measuring the functional sequence complexity of proteins - Kirk K Durston, David KY Chiu, David L Abel and Jack T Trevors - 2007
Excerpt: We have extended Shannon uncertainty by incorporating the data variable with a functionality variable. The resulting measured unit, which we call Functional bit (Fit), is calculated from the sequence data jointly with the defined functionality variable. To demonstrate the relevance to functional bioinformatics, a method to measure functional sequence complexity was developed and applied to 35 protein families.,,,
Here is a brief discussion on a plausible way to more precisely measure the complete information content of a cell as well as measuring Landauer's principle in a cell:
It is interesting to note that many evolutionists are very evasive if questioned by someone to precisely define functional information. In fact I've seen some die-hard evolutionists deny that information even exists in a cell. Many times evolutionists will try to say information is generated using Claude Shannon's broad definition of information, since 'non-functional' information bits may be considered information in his broad definition of information, yet, when looked at carefully, Shannon information completely fails to explain the generation of functional information.
The Evolution-Lobby’s Useless Definition of Biological Information - Feb. 2010
Excerpt: By wrongly implying that Shannon information is the only “sense used by information theorists,” the NCSE avoids answering more difficult questions like how the information in biological systems becomes functional, or in its own words, “useful.”,,,Since biology is based upon functional information, Darwin-skeptics are interested in the far more important question of, Does neo-Darwinism explain how new functional biological information arises?
Mutations, epigenetics and the question of information
Excerpt: By definition, a mutation in a gene results in a new allele. There is no question that mutation (defined as any change in the DNA sequence) can increase variety in a population. However, it is not obvious that this necessarily means there is an increase in genomic information.,, If one attempts to apply Shannon’s theory of information, then this can be viewed as an increase. However, Shannon’s theory was not developed to address biological information. It is entirely unsuitable for this since an increase of information by Shannon’s definition can easily be lethal (and an increase in randomness increases Shannon ‘information’).
Three subsets of sequence complexity and their relevance to biopolymeric information - Abel, Trevors
Testable hypotheses about FSC
Null hypothesis #1
Stochastic ensembles of physical units cannot program algorithmic/cybernetic function.
Null hypothesis #2
Null hypothesis #3
Null hypothesis #4
The following site has a fairly concise definition for functional information (dFSCI; digital functionally specified complex information) by a blogger called gpuccio:
As well it is found that Claude Shannon's work on 'communication of information' actually fully supports Intelligent Design as is illustrated in the following video and article:
Shannon Information - Channel Capacity - Perry Marshall - video
Skeptic's Objection to Information Theory #1:
"DNA is Not a Code"
As well, William Dembski and Robert Marks have shown that the information found in life can be measured. And since the information can be measured it can be used to falsify Darwinian evolution:
"LIFE’S CONSERVATION LAW: Why Darwinian Evolution Cannot Create Biological Information":
Excerpt: Though not denying Darwinian evolution or even limiting its role in the history of life, the Law of Conservation of Information shows that Darwinian evolution is inherently teleological. Moreover, it shows that this teleology can be measured in precise information-theoretic terms.
William Dembski Is Interviewed By Casey Luskin About Conservation Of Information - Audio
Dr. Dembski has emphasized that the Law of Conservation of Information (LCI) is clearly differentiated from the common definition of Theistic Evolution since mainstream Theistic evolutionists, such as Ken Miller and Francis Collins, hold that the Design/Information found in life is not separable from the purely material processes of the universe, whereas Dembski and Marks are clearly saying the Design/Information found in life is detectable, can be separated from the material processes we see in the universe, and "can be measured in precise information-theoretic terms". In other words, the Dembski-Marks paper shows in order for gradual evolution to actually be true it cannot be random Darwinian evolution and that a 'Intelligent Designer' will have to somehow provide the additional functional information needed to make gradual evolution of increased functional complexity possible. Thus now the theoretical underpinnings, of random functional information generation by material processes, are completely removed from Darwinian ideology.
Yet even though God could very well have created life gradually, did God use gradual processes to create life on Earth? I don't think so. There are many solid lines of evidence pointing to the fact that the principle of Genetic Entropy (loss of functional information) is the true principle for all biological adaptations and that no gradual 'material processes' are involved in the "evolution" a lifeform, to greater heights of functional complexity, once God has created a Parent Kind/Species. This following site has a general outline of the evidence that argues forcefully against the gradual model of Theistic evolutionists:
Why Secular and Theistic Darwinists Fear ID - September 2010
The main problem, for the secular model of neo-Darwinian evolution to overcome, is that no one has ever seen purely material processes generate functional 'prescriptive' information.
The Capabilities of Chaos and Complexity: David L. Abel - Null Hypothesis For Information Generation - 2009
To focus the scientific community’s attention on its own tendencies toward overzealous metaphysical imagination bordering on “wish-fulfillment,” we propose the following readily falsifiable null hypothesis, and invite rigorous experimental attempts to falsify it: "Physicodynamics cannot spontaneously traverse The Cybernetic Cut: physicodynamics alone cannot organize itself into formally functional systems requiring algorithmic optimization, computational halting, and circuit integration." A single exception of non trivial, unaided spontaneous optimization of formal function by truly natural process would falsify this null hypothesis.
Can We Falsify Any Of The Following Null Hypothesis (For Information Generation)
1) Mathematical Logic
2) Algorithmic Optimization
3) Cybernetic Programming
4) Computational Halting
5) Integrated Circuits
6) Organization (e.g. homeostatic optimization far from equilibrium)
7) Material Symbol Systems (e.g. genetics)
8) Any Goal Oriented bona fide system
9) Language
10) Formal function of any kind
11) Utilitarian work
Is Life Unique? David L. Abel - January 2012
Concluding Statement: The scientific method itself cannot be reduced to mass and energy. Neither can language, translation, coding and decoding, mathematics, logic theory, programming, symbol systems, the integration of circuits, computation, categorizations, results tabulation, the drawing and discussion of conclusions. The prevailing Kuhnian paradigm rut of philosophic physicalism is obstructing scientific progress, biology in particular. There is more to life than chemistry. All known life is cybernetic. Control is choice-contingent and formal, not physicodynamic.
"Nonphysical formalism not only describes, but preceded physicality and the Big Bang
Formalism prescribed, organized and continues to govern physicodynamics."
The Law of Physicodynamic Insufficiency - Dr David L. Abel - November 2010
The Law of Physicodynamic Incompleteness - David L. Abel - August 2011
Summary: “The Law of Physicodynamic Incompleteness” states that inanimate physicodynamics is completely inadequate to generate, or even explain, the mathematical nature of physical interactions (the purely formal laws of physics and chemistry). The Law further states that physicodynamic factors cannot cause formal processes and procedures leading to sophisticated function. Chance and necessity alone cannot steer, program or optimize algorithmic/computational success to provide desired non-trivial utility.
The GS (genetic selection) Principle – David L. Abel – 2009
Excerpt: Stunningly, information has been shown not to increase in the coding regions of DNA with evolution. Mutations do not produce increased information. Mira et al (65) showed that the amount of coding in DNA actually decreases with evolution of bacterial genomes, not increases. This paper parallels Petrov’s papers starting with (66) showing a net DNA loss with Drosophila evolution (67). Konopka (68) found strong evidence against the contention of Subba Rao et al (69, 70) that information increases with mutations. The information content of the coding regions in DNA does not tend to increase with evolution as hypothesized. Konopka also found Shannon complexity not to be a suitable indicator of evolutionary progress over a wide range of evolving genes. Konopka’s work applies Shannon theory to known functional text. Kok et al. (71) also found that information does not increase in DNA with evolution. As with Konopka, this finding is in the context of the change in mere Shannon uncertainty. The latter is a far more forgiving definition of information than that required for prescriptive information (PI) (21, 22, 33, 72). It is all the more significant that mutations do not program increased PI. Prescriptive information either instructs or directly produces formal function. No increase in Shannon or Prescriptive information occurs in duplication. What the above papers show is that not even variation of the duplication produces new information, not even Shannon “information.”
Dr. Don Johnson explains the difference between Shannon Information and Prescriptive Information, as well as explaining 'the cybernetic cut', in this following Podcast:
Programming of Life - Dr. Donald Johnson interviewed by Casey Luskin - audio podcast
Programming of Life - Information - Shannon, Functional & Prescriptive - video
While neo-Darwinian evolution has no evidence that material processes can generate functional prescriptive information, Intelligent Design does have 'proof of principle' that information can 'locally' violate the second law and generate potential energy:
Maxwell's demon demonstration turns information into energy - November 2010
Excerpt: Until now, demonstrating the conversion of information to energy has been elusive, but University of Tokyo physicist Masaki Sano and colleagues have succeeded in demonstrating it in a nano-scale experiment. In a paper published in Nature Physics they describe how they coaxed a Brownian particle to travel upwards on a "spiral-staircase-like" potential energy created by an electric field solely on the basis of information on its location. As the particle traveled up the staircase it gained energy from moving to an area of higher potential, and the team was able to measure precisely how much energy had been converted from information.
How Could God Interact with the World? (William Dembski)
After much reading, research, and debate with evolutionists, I find the principle of Genetic Entropy (loss of functional information) to be the true principle for all 'beneficial' biological adaptations which directly contradicts unguided neo-Darwinian evolution. As well, unlike Darwinian evolution which can claim no primary principles in science to rest its claim on for the generation of functional information, Genetic Entropy can rest its foundation in science directly on the twin pillars of the Second Law of Thermodynamics and on the Law of Conservation Of Information(LCI; Dembski,Marks)(Null Hypothesis;Abel). The first phase of Genetic Entropy, any life-form will go through, holds all sub-speciation adaptations away from a parent species, which increase fitness/survivability to a new environment for the sub-species, will always come at a cost of the functional information that is already present in the parent species genome. This is, for the vast majority of times, measurable as loss of genetic diversity in genomes. This phase of Genetic Entropy is verified, in one line of evidence, by the fact all population genetics' studies show a consistent loss of genetic diversity from a parent species for all sub-species that have adapted away (Maciej Giertych). This fact is also well testified to by plant and animal breeders who know there are strict limits to the amount of variability you can expect when breeding for any particular genetic trait. The second line of evidence, this primary phase of the principle of Genetic Entropy is being rigorously obeyed, is found in the fact the 'Fitness Test' against a parent species of bacteria has never been violated by any sub-species of a parent bacteria.
Testing Evolution in the Lab With Biologic Institute's Ann Gauger - podcast with link to peer-reviewed paper
Excerpt: Dr. Gauger experimentally tested two-step adaptive paths that should have been within easy reach for bacterial populations. Listen in and learn what Dr. Gauger was surprised to find as she discusses the implications of these experiments for Darwinian evolution. Dr. Gauger's paper, "Reductive Evolution Can Prevent Populations from Taking Simple Adaptive Paths to High Fitness,".
For a broad outline of the 'Fitness test', required to be passed to show a violation of the principle of Genetic Entropy, please see the following video and articles:
Is Antibiotic Resistance evidence for evolution? - 'The Fitness Test' - video
This following study demonstrated that bacteria which had gained antibiotic resistance by mutation are less fit than wild type bacteria::
Testing the Biological Fitness of Antibiotic Resistant Bacteria - 2008
Excerpt: Therefore, in order to simulate competition in the wild, bacteria must be grown on minimal media. Minimal media mimics better what bacteria experience in a natural environment over a period of time. This is the place where fitness can be accurately assessed. Given a rich media, they grow about the same.
Also of note; there appears to be a in-built (designed) mechanism, which kicks in during starvation, which allows wild type bacteria to more robustly resist antibiotics than 'well fed' bacteria;
Starving bacteria fight antibiotics harder? - November 2011
Thank Goodness the NCSE Is Wrong: Fitness Costs Are Important to Evolutionary Microbiology
Excerpt: it (an antibiotic resistant bacterium) reproduces slower than it did before it was changed. This effect is widely recognized, and is called the fitness cost of antibiotic resistance. It is the existence of these costs and other examples of the limits of evolution that call into question the neo-Darwinian story of macroevolution.
A Tale of Two Falsifications of Evolution - September 2011
Antibiotic resistance is ancient - September 2011
Evolution - Tested And Falsified - Don Patton - video
List Of Degraded Molecular Abilities Of Antibiotic Resistant Bacteria:
The following study surveys four decades of experimental work, and solidly backs up the preceding conclusion that there has never been an observed violation of genetic entropy:
“The First Rule of Adaptive Evolution”: Break or blunt any functional coded element whose loss would yield a net fitness gain - Michael Behe - December 2010
Michael Behe talks about the preceding paper on this podcast:
Michael Behe: Challenging Darwin, One Peer-Reviewed Paper at a Time - December 2010
Where's the substantiating evidence for neo-Darwinism?
The previously listed 'fitness test', and paper by Dr. Behe, fairly conclusively demonstrates 'optimal information' was originally encoded within a parent bacteria/bacterium by God, and has not been added to by any 'teleological' methods in the beneficial adaptations of the sub-species of bacteria. Thus the inference to Genetic Entropy, i.e. that God has not specifically moved within nature in a teleological manner, to gradually increase the functional information of a genome, still holds as true for the principle of Genetic Entropy.
It seems readily apparent to me that to conclusively demonstrate God has moved within nature, in a teleological manner, to provide the sub-species bacteria with additional functional information over the 'optimal' genome of its parent species, then the fitness test must be passed by the sub-species against the parent species. If the fitness test is shown to be passed then the new molecular function, which provides the more robust survivability for the sub-species, must be calculated to its additional Functional Information Bits (Fits) it has gained in the beneficial adaptation, and then be found to be greater than 140 Fits. 140 Fits is what has now been generously set by Kirk Durston as the maximum limit of Functional Information which can reasonably be expected to be generated by the natural processes of the universe over the entire age of the universe (The actual limit is most likely to be around 40 Fits)(Of note: I have not seen any evidence to suggest that purely material processes can exceed the much more constrained '2 protein-protein binding site limit', for functional information/complexity generation, found by Michael Behe in his book "The Edge Of Evolution"). This fitness test, and calculation, must be done to rigorously establish materialistic processes did not generate the functional information (Fits), and to rigorously establish that teleological processes were indeed involved in the increase of Functional Complexity of the beneficially adapted sub-species. The second and final phase of Genetic Entropy, outlined by John Sanford in his book Genetic Entropy & the Mystery of the Genome, is when 'slightly detrimental' mutations, which are far below the power of natural selection to remove from a genome, slowly build up in a species/kind over long periods of time and lead to Genetic Meltdown.
Evolution Vs Genetic Entropy - Andy McIntosh - video
The first effect to be obviously noticed in the evidence, for the Genetic Entropy principle, is the loss of potential for morphological variability of individual sub-species of a kind. This loss of potential for morphological variability first takes place for the extended lineages of sub-species within a kind, and increases with time, and then gradually works in to the more ancient lineages of the kind, as the 'mutational load' of slightly detrimental mutations slowly builds up over time. This following paper, though of evolutionary bent, offers a classic example of the effects of Genetic Entropy over deep time of millions of years:
A Cambrian Peak in Morphological Variation Within Trilobite Species; Webster
Excerpt: The distribution of polymorphic traits in cladistic character-taxon matrices reveals that the frequency and extent of morphological variation in 982 trilobite species are greatest early in the evolution of the group: Stratigraphically old and/or phylogenetically basal taxa are significantly more variable than younger and/or more derived taxa.
The final effect of Genetic Entropy is when the entire spectrum of the species of a kind slowly start to succumb to 'Genetic Meltdown', and to go extinct in the fossil record. The occurs because the mutational load, of the slowly accumulating 'slightly detrimental mutations' in the genomes, becomes too great for each individual species of the kind to bear. From repeated radiations from ancient lineages in the fossil record, and from current adaptive radiation studies which show strong favor for ancient lineages radiating, the ancient lineages of a kind appear to have the most 'robust genomes' and are thus most resistant to Genetic Meltdown. All this consistent evidence makes perfect sense from the Genetic Entropy standpoint, in that Genetic Entropy holds God created each parent kind with a optimal genome for all future sub-speciation events. My overwhelming intuition, from all the evidence I've seen so far, and from Theology, is this; Once God creates a parent kind, the parent kind is encoded with optimal information for the specific purpose to which God has created the kind to exist, and God has chosen, in His infinite wisdom, to strictly limit the extent to which He will act within nature to 'evolve' the sub-species of the parent kind to greater heights of functional complexity. Thus the Biblically compatible principle of Genetic Entropy is found to be in harmony with the second law of thermodynamics and with the strict limit found for material processes ever generating any meaningful amount of functional information on their own (LCI: Dembski - Marks)(Abel; Null Hypothesis).
As a side light to this, it should be clearly pointed out that we know, for 100% certainty, that Intelligence can generate functional information i.e. irreducible complexity. We generate a large amount of functional information, which is well beyond the reach of the random processes of the universe, every time we write a single paragraph of a letter (+700 Fits average). The true question we should be asking is this, "Can totally natural processes ever generate functional information?", especially since totally natural processes have never been observed generating any functional information whatsoever from scratch (Kirk Durston). This following short video lays out the completely legitimate scientific basis for inferring Intelligent Design from what we presently observe:
Stephen Meyer: What is the origin of the digital information found in DNA? - short video
As well, 'pure transcendent information' is now shown to be 'conserved'. (i.e. it is shown that all transcendent information which can possibly exist, for all possible physical/material events, past, present, and future, already must exist.) This is since transcendent information exercises direct dominion of the foundational 'material' entity of this universe, energy, which cannot be created or destroyed by any known 'material' means. i.e. First Law of Thermodynamics.
Conservation Of Transcendent/Quantum Information - 2007 - video
This following experiment verified the 'conservation of transcendent/quantum information' using a far more rigorous approach;
Quantum no-hiding theorem experimentally confirmed for first time
These following studies verified the violation of the first law of thermodynamics that I had suspected in the preceding 2007 video:
How Teleportation Will Work -
Quantum Teleportation - IBM Research Page
Researchers Succeed in Quantum Teleportation of Light Waves - April 2011
Unconditional Quantum Teleportation - abstract
Excerpt: This is the first realization of unconditional quantum teleportation where every state entering the device is actually teleported,,
Of note: conclusive evidence for the violation of the First Law of Thermodynamics is firmly found in the preceding experiment when coupled with the complete displacement of the infinite transcendent information of "Photon c":
In extension to the 2007 video, the following video and article shows quantum teleportation breakthroughs have actually shed a little light on exactly what, or more precisely on exactly Whom, has created this universe:
Scientific Evidence For God (Logos) Creating The Universe - 2008 - video
It is also very interesting to note that the quantum state of a photon is actually defined as 'infinite information' in its uncollapsed quantum wave state:
Quantum Computing - Stanford Encyclopedia
It should be noted in the preceding paper that Duwell, though he never challenges the mathematical definition of a photon qubit as infinite information, tries to refute Bennett's interpretation of infinite information transfer in teleportation because of what he believes are 'time constraints' which would prohibit teleporting 'backwards in time'. Yet Duwell fails to realize that information is its own completely unique transcendent entity, completely separate from any energy-matter, space-time, constraints in the first place.
This following recent paper and experiments, on top of the previously listed 'conservation of quantum information' papers, pretty much blew a hole in Duwell's objection to Bennett, of teleporting infinite information 'backwards in time', simply because he believed there was no such path, or mechanism, to do so:
Time travel theory avoids grandfather paradox - July 2010
Excerpt: “In the new paper, the scientists explore a particular version of CTCs based on combining quantum teleportation with post-selection, resulting in a theory of post-selected CTCs (P-CTCs). ,,,The formalism of P-CTCs shows that such quantum time travel can be thought of as a kind of quantum tunneling backwards in time, which can take place even in the absence of a classical path from future to past,,, “P-CTCs might also allow time travel in spacetimes without general-relativistic closed timelike curves,” they conclude. “If nature somehow provides the nonlinear dynamics afforded by final-state projection, then it is possible for particles (and, in principle, people) to tunnel from the future to the past.”
Physicists describe method to observe timelike entanglement - January 2011
It should also be noted that the preceding experiments pretty much dots all the i's and crosses all the t's as far as concretely establishing 'transcendent information' as its own unique entity. Its own unique entity that is completely separate from, and dominate of, space-time, matter and energy.
The following excerpt is also of interest to this issue of time constraints in quantum mechanics:
Solving the quantum mysteries - John Gribbin
Excerpt: As all physicists learn at university (and most promptly forget) the full version of the wave equation has two sets of solutions -- one corresponding to the familiar simple Schrödinger equation, and the other to a kind of mirror image Schrödinger equation describing the flow of negative energy into the past.
As well, I also have another reason to object to Duwell's complaint of 'no mechanism' for information travel to the past, in that I firmly believe Biblical prophecy has actually been precisely fulfilled by Israel's 'miraculous' rebirth as a nation in 1948, as this following video makes clear:
The Precisely Fulfilled Prophecy Of Israel Becoming A Nation In 1948 - video
This following video shows one reason why I personally know there is much more going on in the world than what the materialistic philosophy would lead us to believe:
Miracle Testimony - One Easter Sunday Sunrise Service - video
More supporting evidence for the transcendent nature of information, and how it interacts with energy, is found in these following studies:
Single photons to soak up data:
Ultra-Dense Optical Storage - on One Photon
Excerpt: Researchers at the University of Rochester have made an optics breakthrough that allows them to encode an entire image’s worth of data into a photon, slow the image down for storage, and then retrieve the image intact.,,, Quantum mechanics dictates some strange things at that scale, so that bit of light could be thought of as both a particle and a wave. As a wave, it passed through all parts of the stencil at once, carrying the "shadow" of the UR with it.
This following experiment clearly shows information is not an 'emergent property' of any solid material basis as is dogmatically asserted by some materialists:
Converting Quantum Bits: Physicists Transfer Information Between Matter and Light
The following articles show that even atoms (Ions) are subject to teleportation:
Atom takes a quantum leap - 2009
This following paper is fairly good for establishing the primacy of transcendent information in the 'reality' of this universe:
What is Truth?
It is also interesting to note that a Compact Disc crammed with information on it weighs exactly the same as a CD with no information on it whatsoever.,, Here are a few videos reflecting on some of the characteristics of transcendent information:
Information – Elusive but Tangible – video
Information? What Is It Really? Professor Andy McIntosh - video
Information? - What is it really? Brief Discussion on the Quantum view of information:
But to reflect just a bit more on the teleportation experiment itself, is interesting to note that scientists can only 'destroy' a photon in these quantum teleportation experiments. No one has 'created' a photon as of yet. I firmly believe man shall never do as such, since I hold only God is infinite, and perfect, in information/knowledge.
Job 38:19-20
Further reflection on the quantum teleportation experiment:
That a photon would actually be destroyed upon the teleportation (separation) of its 'infinite' information to another photon is a direct controlled violation of the first law of thermodynamics. (i.e. a photon 'disappeared' from the 'material' universe when the entire information content of a photon was 'transcendently displaced' from the material universe by the experiment, when photon “c” transcendently became transmitted photon “a”). Thus, Quantum teleportation is direct empirical validation for the primary tenet of the Law of Conservation of Information (i.e. 'transcendent' information cannot be created or destroyed). This conclusion is warranted because information exercises direct dominion of energy, telling energy exactly what to be and do in the experiment. Thus, this experiment provides a direct line of logic that transcendent information cannot be created or destroyed and, in information demonstrating transcendence, and dominion, of space-time and matter-energy, becomes the only known entity that can satisfactorily explain where all energy came from as far as the origination of the universe is concerned. That is transcendent information is the only known entity which can explain where all the energy came from in the Big Bang without leaving the bounds of empirical science as the postulated multiverse does. Clearly anything that exercises dominion of the fundamental entity of this physical universe, a photon of energy, as transcendent information does in teleportation, must of necessity possess the same, as well as greater, qualities as energy does possess in the first law of thermodynamics (i.e. Energy cannot be created or destroyed by any known material means according to the first law). To reiterate, since information exercises dominion of energy in quantum teleportation then all information that can exist, for all past, present and future events of energy, already must exist.
As well, the fact that quantum teleportation shows an exact 'location dominion', of a photon of energy by 'specified infinite information', satisfies a major requirement for the entity needed to explain the missing Dark Matter. The needed transcendent explanation would have to dominate energy in a very similar 'specified location' fashion, as is demonstrated by the infinite information of quantum teleportation, to satisfy what is needed to explain the missing dark matter.
Colossians 1:17
Moreover, the fact that simple quantum entanglement shows 'coordinated universal control' of entangled photons of energy, by transcendent information, regardless of distance, satisfies a major requirement for the entity which must explain the missing Dark Energy. i.e. The transcendent entity, needed to explain Dark Energy, must explain why the entire space of the universe is expanding in such a finely-tuned, coordinated, degree, and would have to employ a mechanism of control very similar to what we witness in the quantum entanglement experiment.
Job 9:8
He stretches out the heavens by Himself and walks on the waves of the sea.
Thus 'infinite transcendent information' provides a coherent picture of overarching universal control, and specificity, that could possibly unify gravity with the other forces. It very well may be possible to elucidate, mathematically, the overall pattern God has chosen to implement infinite information in this universe. The following article backs up this assertion:
Is Unknown Force In Universe Acting On Dark Matter?
Excerpt: Is Unknown Force In Universe Acting On Dark Matter?
Excerpt: It is possible that a non-gravitational fifth force is ruling the dark matter with an invisible hand, leaving the same fingerprints on all galaxies, irrespective of their ages, shapes and sizes.” ,,Dr Famaey added, “If we account for our observations with a modified law of gravity, it makes perfect sense to replace the effective action of hypothetical dark matter with a force closely related to the distribution of visible matter.”
Dark Matter Halos of Disk Galaxies
Excerpt: Dark matter’s properties can only be inferred indirectly by observing the motions of the stars and gas (of a galaxy).
"I discovered that nature was constructed in a wonderful way, and our task is to find out its mathematical structure"
Albert Einstein - The Einstein Factor - Reader's Digest
Special Relativity - Time Dilation and Length Contraction - video
Moreover time, as we understand it, would come to a complete stop at the speed of light. To grasp the whole 'time coming to a complete stop at the speed of light' concept a little more easily, imagine moving away from the face of a clock at the speed of light. Would not the hands on the clock stay stationary as you moved away from the face of the clock at the speed of light? Moving away from the face of a clock at the speed of light happens to be the same 'thought experiment' that gave Einstein his breakthrough insight into e=mc2.
Albert Einstein - Special Relativity - Insight Into Eternity - 'thought experiment' video
Light and Quantum Entanglement Reflect Some Characteristics Of God - video
"I've just developed a new theory of eternity."
Albert Einstein - The Einstein Factor - Reader's Digest
Richard Swenson - More Than Meets The Eye, Chpt. 12
Experimental confirmation of Time Dilation
It is also very interesting to note that we have two very different qualities of ‘eternality of time’ revealed by our time dilation experiments;
Time Dilation - General and Special Relativity - Chuck Missler - video
Time dilation
Excerpt: Time dilation: special vs. general theories of relativity:
1. --In special relativity (or, hypothetically far from all gravitational mass), clocks that are moving with respect to an inertial system of observation are measured to be running slower. (i.e. For any observer accelerating, hypothetically, to the speed of light, time, as we understand it, will come to a complete stop).
i.e. As with any observer accelerating to the speed of light, it is found that for any observer falling into the event horizon of a black hole, that time, as we understand it, will come to a complete stop for them. — But of particular interest to the ‘eternal framework’ found for General Relativity at black holes;… It is interesting to note that entropic decay (Randomness), which is the primary reason why things grow old and eventually die in this universe, is found to be greatest at black holes. Thus the ‘eternality of time’ at black holes can rightly be called ‘eternalities of decay and/or eternalities of destruction’.
Entropy of the Universe - Hugh Ross - May 2010
Roger Penrose – How Special Was The Big Bang?
i.e. Black Holes are found to be ‘timeless’ singularities of destruction and disorder rather than singularities of creation and order such as the extreme order we see at the creation event of the Big Bang. Needless to say, the implications of this ‘eternality of destruction’ should be fairly disturbing for those of us who are of the ‘spiritually minded' persuasion!
Matthew 10:28
On the Mystery, and Plasticity, Of Space-Time
Space-Time and Our Place In It
It is very interesting to note that this strange higher dimensional, eternal, framework for time, found in special relativity, and general relativity, finds corroboration in Near Death Experience testimonies:
Mickey Robinson - Near Death Experience testimony
Dr. Ken Ring - has extensively studied Near Death Experiences
'Earthly time has no meaning in the spirit realm. There is no concept of before or after. Everything - past, present, future - exists simultaneously.' - Kimberly Clark Sharp - NDE Experiencer
'There is no way to tell whether minutes, hours or years go by. Existence is the only reality and it is inseparable from the eternal now.' - John Star - NDE Experiencer
What Will Heaven be Like? by Rich Deem
Excerpt: Since heaven is where God lives, it must contain more physical and temporal dimensions than those found in this physical universe that God created. We cannot imagine, nor can we experience in our current bodies, what these extra dimensions might be like.
It is also very interesting to point out that the 'light at the end of the tunnel', reported in many Near Death Experiences(NDEs), is also corroborated by Special Relativity when considering the optical effects for traveling at the speed of light. Please compare the similarity of the optical effect, noted at the 3:22 minute mark of the following video, when the 3-Dimensional world ‘folds and collapses’ into a tunnel shape around the direction of travel as a 'hypothetical' observer moves towards the ‘higher dimension’ of the speed of light, with the ‘light at the end of the tunnel’ reported in very many Near Death Experiences: (Of note: This following video was made by two Australian University Physics Professors with a supercomputer.)
Traveling At The Speed Of Light - Optical Effects - video
Here is the interactive website, with link to the relativistic math at the bottom of the page, related to the preceding video;
Seeing Relativity
The NDE and the Tunnel - Kevin Williams' research conclusions
Near Death Experience - The Tunnel - video
Near Death Experience – The Tunnel, The Light, The Life Review – video
As well, as with the tunnel being present in heavenly NDE's, we also have mention of tunnels in hellish NDE testimonies;
A man, near the beginning of this video, gives testimony of falling down a 'tunnel' in the transition stage from this world to hell:
Hell - A Warning! - video
The man, in this following video, also speaks of 'tumbling down' a tunnel in his transition stage to hell:
Bill Wiese on Sid Roth – video
As well, as with the scientifically verified tunnel for special relativity, we also have scientific confirmation of extreme ‘tunnel curvature’, within space-time, to a eternal ‘event horizon’ at black holes;
Space-Time of a Black hole
Akiane Kramarik - Child Prodigy -
Artwork homepage - -
Music video -
As a side light to this, leading quantum physicist Anton Zeilinger has followed in John Archibald Wheeler's footsteps (1911-2008) by insisting reality, at its most foundational level, is 'information'.
John Archibald Wheeler
Why the Quantum? It from Bit? A Participatory Universe?
Prof Anton Zeilinger speaks on quantum physics. at UCT - video
Zeilinger's principle
In the beginning was the bit - New Scientist
Excerpt: Zeilinger's principle leads to the intrinsic randomness found in the quantum world. Consider the spin of an electron. Say it is measured along a vertical axis (call it the z axis) and found to be pointing up. Because one bit of information has been used to make that statement, no more information can be carried by the electron's spin. Consequently, no information is available to predict the amounts of spin in the two horizontal directions (x and y axes), so they are of necessity entirely random. If you then measure the spin in one of these directions, there is an equal chance of its pointing right or left, forward or back. This fundamental randomness is what we call Heisenberg's uncertainty principle.
'Quantum Magic' Without Any 'Spooky Action at a Distance' - June 2011
Quantum Entanglement and Teleportation - Anton Zeilinger - video
It should be noted that the popular science fiction conception of the universe being 'merely' a computer simulation (as in 'Matrix' movies), a conception drawn from the fact that 'material' reality is now shown to be reducible to information, is far too simplistic in its conception:
Quantum Computing Promises New Insights, Not Just Supermachines - Scott Aaronson - December 2011
Excerpt: And yet, even though useful quantum computers might still be decades away, many of their payoffs are already arriving. For example, the mere possibility of quantum computers has all but overthrown a conception of the universe that scientists like Stephen Wolfram have championed. That conception holds that, as in the “Matrix” movies, the universe itself is basically a giant computer, twiddling an array of 1’s and 0’s in essentially the same way any desktop PC does.
Quantum computing has challenged that vision by showing that if “the universe is a computer,” then even at a hard-nosed theoretical level, it’s a vastly more powerful kind of computer than any yet constructed by humankind. Indeed, the only ways to evade that conclusion seem even crazier than quantum computing itself: One would have to overturn quantum mechanics, or else find a fast way to simulate quantum mechanics using today’s computers.
Here are some more interesting videos that also arrive at a 'information basis' for reality from a slightly different perspective:
A Very Unusual Proof for the Existence of God - video - (Collapse of wave function)
You Are Made Of Information - video - (Don't believe me? Any ontology other than information monism leads to self-contradiction)
Continued comments:
The restriction imposed by our physical limitations of us ever accessing complete infinite information to our temporal space-time framework/dimension (Wheeler; Zeilinger) does not detract, in any way, from the primacy and dominion of the infinite transcendent information framework that is now established by the quantum teleportation experiment as the primary reality of our reality. Of note: All of this evidence meshes extremely well with the theistic postulation of God possessing infinite and perfect knowledge. This seems like a fitting place for this following quote and verse:
William Blake
Psalm 19:1-2
As well it should be noted that, counter-intuitive to materialistic thought (and to every kid who has ever taken a math exam), a computer does not consume energy during computation but will only consume energy when information is erased from it. This counter-intuitive fact is formally known as Landauer's Principle. i.e. Erasing information is a thermodynamically irreversible process that increases the entropy of a system. i.e Only irreversible operations consume energy. Reversible computation does not use up energy. Unfortunately the computer will eventually run out of information storage space and must begin to 'irreversibly' erase the information it has previously gathered (Bennett: 1982) and thus a computer must eventually use energy. i.e. A 'material' computer must eventually obey the second law of thermodynamics for its computation.
Landauer's principle
Of Note: "any logically irreversible manipulation of information, such as the erasure of a bit or the merging of two computation paths, must be accompanied by a corresponding entropy increase ,,, Specifically, each bit of lost information will lead to the release of an (specific) amount (at least kT ln 2) of heat.,,, Landauer’s Principle has also been used as the foundation for a new theory of dark energy, proposed by Gough (2008).
It should be noted that Rolf Landauer himself maintained that information in a computer was 'physical'. He held that information in a computer was merely an 'emergent' property of the material basis of a computer, and thus he held that the information programmed into a computer was not really 'real'. Landauer held this 'materialistic' position in spite of a objection from Roger Penrose that information is indeed real and has its own independent existence separate from a computer. Landauer held this 'materialistic' position since he held that 'it takes energy to erase information from a computer therefore information is 'merely physical'. Yet now the validity of that fairly narrowly focused objection from Landauer, to the reality of 'transcendent information' encoded within the computer, has been brought into question. i.e. Landauer's Principle may not be nearly as 'ironclade' as Landauer had originally believed.
Scientists show how to erase information without using energy - January 2011
Excerpt: Until now, scientists have thought that the process of erasing information requires energy. But a new study shows that, theoretically, information can be erased without using any energy at all. Instead, the cost of erasure can be paid in terms of another conserved quantity, such as spin angular momentum.,,, "Landauer said that information is physical because it takes energy to erase it. We are saying that the reason it is physical has a broader context than that.", Vaccaro explained.
This following research provides far more solid falsification for Rolf Landauer's contention that information encoded in a computer is merely physical (merely 'emergent' from a material basis) since he believed it always required energy to erase it;
Quantum knowledge cools computers: New understanding of entropy - June 2011
Excerpt: No heat, even a cooling effect;
Further comments:
"Those devices (computers) can yield only approximations to a structure (of information) that has a deep and "computer independent" existence of its own." - Roger Penrose - The Emperor's New Mind - Pg 147
Norbert Weiner - MIT Mathematician - Father of Cybernetics
Yet even without the falsification of Rolf Landauer's contention, that information was merely 'physical', this ability of a computer to 'compute answers', without ever hypothetically consuming energy, is very suggestive that the answers/truth already exist in reality, and in fact, when taken to its logical conclusion, is very suggestive to the postulation of John 1:1 that 'Logos' is ultimately the foundation of our 'material' reality in the first place.
John 1:1-3
(of note: 'Word' in Greek is 'Logos', and is the root word from which we get our word 'Logic')
This strange anomaly between lack of energy consumption and the computation of information appears to hold for the human mind as well.
Appraising the brain's energy budget:
Excerpt: In the average adult human, the brain represents about 2% of the body weight. Remarkably, despite its relatively small size, the brain accounts for about 20% of the oxygen and, hence, calories consumed by the body. This high rate of metabolism is remarkably constant despite widely varying mental and motoric activity. The metabolic activity of the brain is remarkably constant over time.
Excerpt: Although Lennox considered the performance of mental arithmetic as "mental work", it is not immediately apparent what the nature of that work in the physical sense might be if, indeed, there be any. If no work or energy transformation is involved in the process of thought, then it is not surprising that cerebral oxygen consumption is unaltered during mental arithmetic.
The preceding experiments are very unexpected to materialists since materialists hold that 'mind' is merely a 'emergent property' of the physical processes of the material brain.
In further note: Considering computers can't pass this following test for creating new information,,,
"... no operation performed by a computer can create new information."
-- Douglas G. Robertson, "Algorithmic Information Theory, Free Will and the Turing Test," Complexity, Vol.3, #3 Jan/Feb 1999, pp. 25-34.
Evolutionary Informatics - William Dembski & Robert Marks
Information. What is it? - Robert Marks - video
Estimating Active Information in Adaptive Mutagenesis
Robert Marks
,,Whereas humans can fairly easily pass the test for creating new information,,
"So, to sum up: computers can reshuffle specifications and perform any kind of computation implemented in them. They are mechanical, totally bound by the laws of necessity (algorithms), and non conscious. Humans can continuously create new specification, and also perform complex computations like a computer, although usually less efficiently. They can create semantic output, make new unexpected inferences, recognize and define meanings, purposes, feelings, and functions, and certainly conscious representations are associated with all those kinds of processes."
Uncommon Descent blogger - gpuccio
,,,thus these findings strongly imply that we humans have a 'higher informational component' to our being,, i.e. these findings offer another line of corroborating evidence which is very suggestive to the idea that humans have a mind which is transcendent of the physical brain and which is part of a 'unique soul from God'. Moreover this unique mind that each human has seems to be capable of a special and intimate communion with God that is unavailable to other animals, i.e. we are capable of communicating information with "The Word" as described in John 1:1.
I also liked this insight, from a computer programmer with a PhD in Physics, about a fundamental difference between human consciousness and computer programs:
The simple fact is this, despite years of experience writing many complex codes, I can not write a computer program that disobeys me. I don’t even know how to do it. I can write computer programs that have bugs and don’t perform what I thought they were going to do; I can write computer programs that make pseudo-random choices. I do not know how to write a program that disobeys. I would contend it can’t be done. But the ability to disobey the Creator is the essence of consciousness. Otherwise it’s just complicated programming with random choices.
Also of related interest is Dr. Werner Gitt's lecture on information:
In The Beginning Was Information - Werner Gitt - video
You may download Dr. Gitt's book, In The Beginning Was Information, at this website:
Thus now, with the mathematical definition of functional information in place for molecular biology, and with 'infinite transcendent information' shown to be 'conserved' and 'consciousness' found to be foundational to our reality, and with Genetic Entropy outlined as the primary principle for biological adaptations, Intelligent Design can now be scientifically tested against any materialistic theory of blind chance proposing a certain system arose by random material processes and was not the handiwork of God.
I would like to point out that when a molecular sub-system of a biological organism passes the probability threshold of one in 10^150 orders of magnitude (that’s a one with 150 zeros to the right) then it is considered, by very stringent guidelines which allow for far more 'quantum events' than will ever happen in the universe, to be overwhelmingly impossible for the universe to ever account for the system arising by chance alone. (Dembski, Abel)
Here is how the base line for the universe's probabilistic resources are calculated:
Signature in the Cell - Book Review - Ken Peterson
Excerpt: "there are about 10 to the 80th elementary particles in our observable universe. Assuming a Big Bang about 13 billion years ago, there have been about 10 to the 16th seconds of time. Finally, if we take the time required for light to travel one Plank length we will have found “the shortest time in which any physical effect can occur.” This turns out to be 10 to the minus 43rd seconds. Or turning it around we can say that the most interactions possible in a second is 10 to the 43rd. Thus, the “probabilistic resources” of the universe would be to multiply the total number of seconds by the total number of interactions per second by the total number of particles theoretically interacting. The math turns out to be 10 to the 139th."
The Universal Plausibility Metric (UPM) & Principle (UPP) - Abel - Dec. 2009
Excerpt: Mere possibility is not an adequate basis for asserting scientific plausibility. A precisely defined universal bound is needed beyond which the assertion of plausibility, particularly in life-origin models, can be considered operationally falsified. But can something so seemingly relative and subjective as plausibility ever be quantified? Amazingly, the answer is, "Yes.",,,
cΩu = Universe = 10^13 reactions/sec X 10^17 secs X 10^78 atoms = 10^108
cΩg = Galaxy = 10^13 X 10^17 X 10^66 atoms = 10^96
cΩs = Solar System = 10^13 X 10^17 X 10^55 atoms = 10^85
cΩe = Earth = 10^13 X 10^17 X 10^40 atoms = 10^70
Programming of Life - Probability - Defining Probable, Possible, Feasible etc.. - video
New Peer-Reviewed Paper Demolishes Fallacious Objection: “Aren’t There Vast Eons of Time for Evolution?” - Dec. 2009
This 'universal limit' for functional information generation is generously set at 140 Functional Information Bits (Fits) by Kirk Durston. The molecular sub-system is considered to be irreducibly complex in its functional information content, and thus it is considered to be intelligently designed. Though irreducible complexity is primarily used by Intelligent Design proponents for deducing design in molecular biology, the concept of irreducible complexity can also be applied, in an overarching form, to the entire universe to find if man is indeed God's primary purpose for creating this universe. Thus irreducible complexity can also be used to verify the anthropic hypothesis.
Irreducible Complexity and the Anthropic Principle - John Clayton - video
The following are some basic questions that need to be answered, to find if either the anthropic hypothesis or some materialistic hypothesis is correct.
I. What evidence is found for the universe's ability to support life?
II. What evidence is found for the earths ability to support life?
III. What evidence is found for the first life on earth?
IV. What evidence is found for the appearance of all species of life on earth, and is man the last species to appear on earth?
V. What evidence is found for God's personal involvement with man?
Before we start answering these five basic questions, I would like to reiterate, as clearly as possible, that any 'solid material atomic' foundation for this universe, which was the primary postulation of materialism in the first place, has now been completely crushed by our present understanding of quantum mechanics. Little do most people realize there is actually no solid indestructible particle, at all, at the basis of our reality in the atom somewhere. Each and every sub-atomic particle in the atom, (proton, neutron, electron etc..) is subject to the laws of quantum mechanics. Quantum mechanics is about as far away from the solid material particle/atom, that materialism had predicted as the basis of reality, as can be had. These following videos and articles make this point clear:
Uncertainty Principle - The 'Uncertain Non-Particle' Basis Of Material Reality - video and article
Electron diffraction
Excerpt: The de Broglie hypothesis, formulated in 1926, predicts that particles should also behave as waves. De Broglie's formula was confirmed three years later for electrons (which have a rest-mass) with the observation of electron diffraction in two independent experiments. At the University of Aberdeen George Paget Thomson passed a beam of electrons through a thin metal film and observed the predicted interference patterns. At Bell Labs Clinton Joseph Davisson and Lester Halbert Germer guided their beam through a crystalline grid. Thomson and Davisson shared the Nobel Prize for Physics in 1937 for their work.
As well, many of the actions of the electron blatantly defy out concepts of time and space:
The Electron - The Supernatural Basis of Reality - video
Electron entanglement near a superconductor and bell inequalities
Excerpt: The two electrons of these pairs have entangled spin and orbital degrees of freedom.,,, We formulate Bell-type inequalities in terms of current-current cross-correlations associated with contacts with varying magnetization orientations. We find maximal violation (as in photons) when a superconductor is the particle source. (i.e. electrons have a 'non-local', beyond space and time, cause sustaining them.)
Double-slit experiment
Quantum Mechanics – Quantum Results, Theoretical Implications Of Quantum Mechanics
"Atoms are not things"
Werner Heisenberg
Niels Bohr
This following article is interesting for it shows that very small quantum events can have dramatic effects on large objects:
How 'spooky' quantum mechanical laws may affect everyday objects (Update) - July 2010
Excerpt: "The difference in size between the two parts of the system is really extreme," Blencowe explained. "To give a sense of perspective, imagine that the 10,000 electrons correspond to something small but macroscopic, like a flea. To complete the analogy, the crystal would have to be the size of Mt. Everest. If we imagine the flea jumping on Mt. Everest to make it move, then the resulting vibrations would be on the order of meters!"
What blows most people away, when they first encounter quantum mechanics, is the quantum foundation of our material reality blatantly defies our concepts of time and space. Most people consider defying time and space to be a 'miraculous & supernatural' event. I know I certainly do! There is certainly nothing within quantum mechanics that precludes miracles from being possible:
How can an Immaterial God Interact with the Physical Universe? (Alvin Plantinga) - video
This 'miraculous & supernatural' foundation for our physical reality can easily be illuminated by the famous 'double slit' experiment. (It should be noted the double slit experiment was originally devised, in 1801, by a Christian polymath named Thomas Young). Though I've listed this preceding video before, it is well worth revisiting it here:
Dr. Quantum - Double Slit Experiment & Entanglement - video
Double-slit experiment
Excerpt: In 1999 objects large enough to see under a microscope, buckyball (interlocking carbon atom) molecules (diameter about 0.7 nm, nearly half a million times that of a proton), were found to exhibit wave-like interference.
This following site offers a more formal refutation of materialism:
It should also be noted that the 'uncertainty principle' for 3-D material particles is extended even to the point of not even being able to determine the exact radius for an electron that is at complete rest:
PhysForum Science
It would also like to point out that the hardest, most solid, indestructible 'thing' in a material object, such as a rock, are not any of the wave/particles in any of the atoms of a rock, but are the unchanging, transcendent, universal, constants which exercise overriding 'non-chaotic' dominion of all the wave/particle quantum events in the atoms of the rock. i.e. It is the unchanging stability of the universal 'transcendent information' constants, which have not varied one iota from the universe's creation as far as scientists can tell, that allows a rock to be 'rock solid' in the first place.
What is Truth?
Stability of Coulomb Systems in a Magnetic Field - Charles Fefferman
Testing Creation Using the Proton to Electron Mass Ratio
As well, it seems fairly obvious the actions observed in the double slit experiment, as well as other experiments, are only possible if our reality has its actual basis in a 'higher transcendent dimension':
Explaining The Unseen Higher Dimension - Dr. Quantum - Flatland - video
These following videos and articles on Dark Energy and Matter put the another nail in the coffin for the materialistic philosophy (as if it was not already completely falsified):
The abstract of the September 2006 Report of the Dark Energy Task Force says: “Dark energy appears to be the dominant component of the physical Universe, yet there is no persuasive theoretical explanation for its existence or magnitude. The acceleration of the Universe is, along with dark matter, the observed phenomenon that most directly demonstrates that our (materialistic) theories of fundamental particles and gravity are either incorrect or incomplete. Most experts believe that nothing short of a revolution in our understanding of fundamental physics will be required to achieve a full understanding of the cosmic acceleration. For these reasons, the nature of dark energy ranks among the very most compelling of all outstanding problems in physical science. These circumstances demand an ambitious observational program to determine the dark energy properties as well as possible.”
The Mathematical Anomaly Of Dark Matter - video
Dark matter halo
Excerpt: The dark matter halo is the single largest part of the Milky Way Galaxy as it covers the space between 100,000 light-years to 300,000 light-years from the galactic center. It is also the most mysterious part of the Galaxy. It is now believed that about 95% of the Galaxy is composed of dark matter, a type of matter that does not seem to interact with the rest of the Galaxy's matter and energy in any way except through gravity. The dark matter halo is the location of nearly all of the Milky Way Galaxy's dark matter, which is more than ten times as much mass as all of the visible stars, gas, and dust in the rest of the Galaxy.
Gas rich galaxies confirm prediction of modified gravity theory (MOND) - February 2011
Excerpt: Almost everyone agrees that on scales of large galaxy clusters and up, the Universe is well described by dark matter - dark energy theory. However, according to McGaugh this cosmology does not account well for what happens at the scales of galaxies and smaller. "MOND is just the opposite," he said. "It accounts well for the 'small' scale of individual galaxies, but MOND doesn't tell you much about the larger universe.
Hubble Finds Ring of Dark Matter - video
The Elusive "non-Material" Foundation For Gravity:
Study Sheds Light On Dark Energy - video
Hugh Ross PhD. - Scientific Evidence For Dark Energy - video
What The Universe Is Made Of?: - Pie Chart Graph
96% Invisible "Stuff" vs. 4% Visible Material (Of Note: as of 2008 visible matter only accounts for less than .27% of everything that exists in the universe)
Dark Matter:
Despite comprehensive maps of the nearby universe that cover the spectrum from radio to gamma rays, we are only able to account for 10% of the mass that must be out there.(actually it is now known to be only .27%) "It's a fairly embarrassing situation to admit that we can't find 90 percent of the universe." Astronomer Bruce H. Margon
Table 2.1
Inventory of All the Stuff That Makes Up the Universe (Visible vs. Invisible)
Dark Energy 72.1%
Exotic Dark Matter 23.3%
Ordinary Dark Matter 4.35%
Ordinary Bright Matter (Stars) 0.27%
Planets 0.0001%
Invisible portion - Universe 99.73%
Visible portion - Universe .27%
of note: The preceding 'inventory' of the universe is updated to the second and third releases of the Wilkinson Microwave Anisotropy Probe's (WMAP) results in 2006 & 2008; (Why The Universe Is The Way It Is; Hugh Ross; pg. 37)
Now that the materialistic philosophy is thoroughly deprived of any empirical validation, for its primary tenet of a solid particle/atom at the basis of our temporal reality, let's look at the five questions I listed earlier, starting with the first question and working our way to the last.
Romans 1:20
To answer our first question (What evidence is found for the universe's ability to support life?) we will look at the universe and see how its 'parts' are put together. Let's start with carbon. Carbon is shown to be the only element, from the periodic table of elements, by which the complex molecules of life in this universe may be built. The carbon atom is a marvel in and of itself. Carbon is the sixth element on the periodic table and makes up two tenths of one percent of the earths crust. It is the backbone of which all life is built or can be built. It makes up 18% of the mass of our body. In its pure form it is recognized as soot, pencil lead or diamonds. Diamonds are the hardest substance known. Carbon fiber is the strongest fiber known. Carbon fiber is used in the construction of high performance airplanes, tennis rackets and bicycles, just to name a few. Man-made carbon-based molecules have allowed breakthroughs in low temperature super-conductors. Carbon-60 is a recent discovery, from the 1980's, called the buckyball. It is a molecule of sixty interlocking carbon atoms and is the roundest substance known in all molecular science. Graphene, which is a more recent 'revolutionary' discovery within the last decade, is a remarkably flat molecule made of carbon atoms arranged in hexagonal rings, and is the thinnest material possible. Graphene is made of ordinary carbon atoms arranged in a "chicken-wire" lattice. These layers, sometimes just a single atom thick, conduct electricity with virtually no resistance, very little heat generation -- and less power consumption than silicon. Graphene conducts electricity better than any other known material at room temperature and is ten times stronger than steel. Graphene promises to greatly outperform silicon in computer chips in the near future. Carbon has the unique ability to form long chains of complex molecules that have a high degree of stability. Stable complex molecules are required to build sugars, to build DNA, to build RNA, to build amino acids, to build proteins, to build cells, and finally, to build all living organisms on earth. Substances formed around carbon far out-number all other substances combined. No other element comes close to forming the wide variety of stable compounds as does carbon. Yet if it were not for this unique ability to form complex molecules, life could not exist. Organic chemistry, the study of carbon compounds, and their profuse and intricate behavior, is a dedicated science in its own right.
The only element similar to carbon, which has the necessary atomic structure to form the macro (large) molecules needed for life, is silicon. Yet silicon, though having the correct atomic structure, is severely limited in its ability to make complex macro-molecules. Silicon-based molecules are comparatively unstable and sometimes highly reactive. Thus from this, and many other evidences against silicon, carbon is found to be the only element from which life in this universe may be built. Carbon and other 'heavy' elements also provides one, of several, reasons why the universe must be as old and as large as it is. 'Heavy' elements did not form in the Big Bang. Thus, they had to be synthesized in stars and exploded into space before they were available to form a planet on which carbon-based life could exist. Carbon is the first of the 'heavy' elements that is exclusively formed in the interiors of stars. All the elements below carbon were exclusively, or semi-exclusively, formed within the Big Bang of the universe. The delicate balance at which carbon is synthesized in stars is truly a work of art. Fred Hoyle (1915-2001), a famed astrophysicist, is the scientist who established the nucleo-synthesis of heavier elements within stars as mathematically valid in 1946. Years after Sir Fred discovered the stunning precision with which carbon is synthesized in stars he stated:
From 1953 onward, Willy Fowler and I have always been intrigued by the remarkable relation of the 7.65 MeV energy level in the nucleus of 12 C to the 7.12 MeV level in 16 O. If you wanted to produce carbon and oxygen in roughly equal quantities by stellar nucleosynthesis, these are the two levels you would have to fix, and your fixing would have to be just where these levels are actually found to be. Another put-up job? ... I am inclined to think so. A common sense interpretation of the facts suggests that a super intellect has “monkeyed” with the physics as well as the chemistry and biology, and there are no blind forces worth speaking about in nature. -
Sir Fred Hoyle, Annual Review of Astronomy and Astrophysics, 20 (1982): 16.
Sir Fred also stated:
Sir Fred Hoyle - "The Universe: Past and Present Reflections." Engineering and Science, November, 1981. pp. 8–12
Michael Denton - We Are Stardust - Uncanny Balance Of The Elements - Atheist Fred Hoyle's conversion to a Deist/Theist - video
Peer-Reviewed Paper Argues for an Engineered Universe - January 2012 - podcast
God's Creation - The Miracle Of Carbon & Water - video
What could make a scientist who was such a staunch atheist, as Hoyle was before his discoveries, make such a statement? The reason he made such a statement is because Hoyle was expertly trained in the exacting standards of mathematics. He knew numbers cannot lie when correctly used and interpreted. What he found was a staggering numerical balance to the many independent universal constants needed to synthesize carbon in stars. These independent constants were of such a high degree of precision as to leave no room for blind chance whatsoever. Thus, with no wiggle room for the blind chance of materialism, Fred Hoyle had to admit the evidence he found was compelling to the proposition of intelligent design by a infinitely powerful, and transcendent, Creator. Let's look at some of these exacting mathematical standards, and finely tuned universal constants, to see the precision of 'intelligent design' he saw for the math, and for the foundational building blocks of the transcendent universal 'information' constants, for this universe.
Sometimes atheists will appeal to chaos theory to explain how complexity can arise from simplicity. A vivid example of what they are proposing is here:
Mandelbrot Set Zoom - video
Yet in criticism to complexity arising from simplicity, as is naturally assumed in chaos theory, it should be noted that, at the very least, two very different equations are in play here. One equation governs micro actions of the universe, and the other equation governs the macro shape of the universe: The following is the equation that governs 'micro' actions:
Finely Tuned Big Bang, Elvis In The Multiverse, and the Schroedinger Equation - Granville Sewell - audio
i.e. the Materialist is at a complete loss to explain why this should be so, whereas the Christian Theist presupposes such ‘transcendent’ control of our temporal, material, reality,,,
John 1:1
of note; 'the Word' is translated from the Greek word ‘Logos’. Logos happens to be the word from which we derive our modern word ‘Logic’.
The following is the very 'different' equation that is found to govern the 'macro' structure of the universe:
0 = 1 + e ^(i*pi) — Euler
Believe it or not, the five most important numbers in mathematics are tied together, through the complex domain in Euler's number, And that points, ever so subtly but strongly, to a world of reality beyond the immediately physical. Many people resist the implications, but there the compass needle points to a transcendent reality that governs our 3D 'physical' reality.
God by the Numbers - Connecting the constants
Excerpt: The final number comes from theoretical mathematics. It is Euler's (pronounced "Oiler's") number: e*pi*i. This number is equal to -1, so when the formula is written e*pi*i+1 = 0, it connects the five most important constants in mathematics (e, pi, i, 0, and 1) along with three of the most important mathematical operations (addition, multiplication, and exponentiation). These five constants symbolize the four major branches of classical mathematics: arithmetic, represented by 1 and 0; algebra, by i; geometry, by pi; and analysis, by e, the base of the natural log. e*pi*i+1 = 0 has been called "the most famous of all formulas," because, as one textbook says, "It appeals equally to the mystic, the scientist, the philosopher, and the mathematician."
(of note; Euler's Number (equation) is more properly called Euler's Identity in math circles.)
Moreover Euler’s Identity, rather than just being the most enigmatic equation in math, finds striking correlation to how our 3D reality is actually structured,,,
The following picture, Bible verse, and video are very interesting since, with the discovery of the Cosmic Microwave Background Radiation (CMBR), the universe is found to actually be a circular sphere which 'coincidentally' corresponds to the circle of pi within Euler's identity:
Picture of CMBR
Proverbs 8:26-27
While as yet He had not made the earth or the fields, or the primeval dust of the world. When He prepared the heavens, I was there, when He drew a circle on the face of the deep,
The Known Universe by AMNH – video - (please note the 'centrality' of the Earth in the universe in the video)
The flatness of the ‘entire’ universe, which 'coincidentally' corresponds to the diameter of pi in Euler’s identity, is found on this following site; (of note this flatness of the universe is an extremely finely tuned condition for the universe that could have, in reality, been a multitude of different values than 'flat'):
Did the Universe Hyperinflate? – Hugh Ross – April 2010
This following video, which I've listed previously, shows that the universe also has a primary characteristic of expanding/growing equally in all places,, which 'coincidentally' strongly corresponds to the 'e' in Euler's identity. 'e' is the constant that is used in all sorts of equations of math for finding what the true rates of growth and decay are for any given mathematical problem trying to find as such in this universe:
Every 3D Place Is Center In This Universe – 4D space/time – video
This following video shows how finely tuned the '4-Dimensional' expansion of the universe is (1 in 10^120);
Fine Tuning Of Dark Energy and Mass of the Universe - Hugh Ross - video
Towards the end of the following video, Michael Denton speaks of the square root of negative 1 being necessary to understand the foundational quantum behavior of this universe. The square root of -1 is also 'coincidentally' found in Euler's identity:
I find it extremely strange that the enigmatic Euler's identity, which was deduced centuries ago, would find such striking correlation to how reality is actually found to be structured by modern science. In pi we have correlation to the 'sphere of the universe' as revealed by the Cosmic Background radiation, as well pi correlates to the finely-tuned 'geometric flatness' within the 'sphere of the universe' that has now been found. In 'e' we have the fundamental constant that is used for ascertaining exponential growth in math that strongly correlates to the fact that space-time is 'expanding/growing equally' in all places of the universe. In the square root of -1 we have what is termed a 'imaginary number', which was first proposed to help solve equations like x2+ 1 = 0 back in the 17th century, yet now, as Michael Denton pointed out in the preceding video, it is found that the square root of -1 is required to explain the behavior of quantum mechanics in this universe. The correlation of Euler's identity, to the foundational characteristics of how this universe is constructed and operates, points overwhelmingly to a transcendent Intelligence, with a capital I, which created this universe! It should also be noted that these universal constants, pi,e, and square root -1, were at first thought by many to be completely transcendent of any material basis, to find that these transcendent constants of Euler's identity in fact 'govern' material reality, in such a foundational way, should be enough to send shivers down any mathematicians spine.
Further discussion relating Euler's identity to General Relativity and Quantum Mechanics:
further notes:
in the equation e^pi*i + 1 = 0
,,,we find that pi is required in;
General Relativity (Einstein’s Equation)
,,,and we also find that the square root of negative 1 is required in;
Quantum Mechanics (Schrödinger’s Equations)
,,and we also find that e is required for;
e is required here in wave equations, in finding the distribution of prime numbers, in electrical theory, and is also found to be foundational to trigonometry.,,,
The various uses and equations of 'e' are listed at the bottom of the following page:
Stanford University mathematics professor - Dr. Keith Devlin
Here is a very well done video, showing the stringent 'mathematical proofs' of Euler's Identity:
Euler's identity - video
The mystery doesn't stop there, this following video shows how pi and e are found in Genesis 1:1 and John 1:1
Euler's Identity - God Created Mathematics - video
This following website, and video, has the complete working out of the math of Pi and e in the Bible, in the Hebrew and Greek languages respectively, for Genesis 1:1 and John 1:1:
Fascinating Bible code – Pi and natural log – Amazing – video (of note: correct exponent for base of Nat Log found in John 1:1 is 10^40, not 10^65 as stated in the video)
The golden ratio (tau) is seen in some surprising areas of mathematics. The ratio of consecutive Fibonacci numbers (1, 1, 2, 3, 5, 8, 13 . . ., each number being the sum of the previous two numbers) approaches the golden ratio, as the sequence gets infinitely long. The sequence is sometimes defined as starting at 0, 1, 1, 2, 3
Fibonacci Numbers – The Fingerprint of God - video - (See video description for a look at Euler’s Identity)
Golden Ratio in Human Body - video
Of note natural log e (which is found in Euler's identity and John 1:1) is also found to be necessary for calculating 'growth' of the 'golden spiral' of the Fibonacci number;
The Logarithmic Spiral
1. r increases proportionally and remains in proportion with the golden ratio as theta increases if we define the equation as above, multiplied by e^(a*phi). The reasons for this are more thoroughly discussed by Mukhopadhyay.
What Tau (The Golden Ratio; Fibonacci Number) Sounds Like - golden ratio set to music
The following, somewhat related, article is very interesting;
The Man Who Draws Pi - A Case of Acquired Savant Syndrome and Synesthesia Following a Brutal Assault:
Excerpt: “Everything that exists has geometry”, says JP, who acquired amazing mathematical abilities after a mugging incident in 2002. He was hit hard on the head, and he now experiences reality as mathematical fractals describable by equations. Light bouncing off a shiny car explodes into a fractal overlaying reality, the outer boundaries of objects are tangents, tiny pieces that change angles relative to one another and turn into picture frames of fractals during motion, and the boundaries of clouds and liquids are spiraling lines.,,, Mathematicians and physicists were taken aback: Some of JP’s drawings depict equations in math that hitherto were only presentable in graph form. Others depict actual electron interference patterns.,,, Despite his lack of prior training, JP is the only person in the world to have ever handdrawn meticulously accurate approximations of mathematical fractals using only straight lines. He can predict the vectors for prime numbers in his drawings, and his drawing of hf = mc^2, which contains all the style elements of his earliest drawings, is remarkably similar to an actual picture of electron interference patterns, which he found years after first drawing the pattern (see Fig 7, 8).
Romans 11:33
The following is just a cool video that makes you wonder:
What pi sounds like when put to music – cool video
This following video is very interesting for revealing how difficult it was for mathematicians to actually 'prove' that mathematics was even true in the first place:
Georg Cantor - The Mathematics Of Infinity - video
Godel's story on the incompleteness theorem can be picked up here in part 7 of the preceding video:
BBC-Dangerous Knowledge (Part 7-10)
Gödel’s Incompleteness: The #1 Mathematical Breakthrough of the 20th Century
Excerpt: Gödel’s Incompleteness Theorem says:
A Biblical View of Mathematics - Vern Poythress - doctorate in theology, PhD in Mathematics (Harvard)
Excerpt: only on a thoroughgoing Biblical basis can one genuinely understand and affirm the real agreement about mathematical truths.
Taking God Out of the Equation - Biblical Worldview - by Ron Tagliapietra - January 1, 2012
Excerpt: Kurt Gödel (1906–1978) proved that no logical systems (if they include the counting numbers) can have all three of the following properties.
1. Validity . . . all conclusions are reached by valid reasoning.
2. Consistency . . . no conclusions contradict any other conclusions.
3. Completeness . . . all statements made in the system are either true or false.
The details filled a book, but the basic concept was simple and elegant. He summed it up this way: “Anything you can draw a circle around cannot explain itself without referring to something outside the circle—something you have to assume but cannot prove.” For this reason, his proof is also called the Incompleteness Theorem.
Kurt Gödel had dropped a bomb on the foundations of mathematics. Math could not play the role of God as infinite and autonomous. It was shocking, though, that logic could prove that mathematics could not be its own ultimate foundation.
Christians should not have been surprised. The first two conditions are true about math: it is valid and consistent. But only God fulfills the third condition. Only He is complete and therefore self-dependent (autonomous). God alone is “all in all” (1 Corinthians 15:28), “the beginning and the end” (Revelation 22:13). God is the ultimate authority (Hebrews 6:13), and in Christ are hidden all the treasures of wisdom and knowledge (Colossians 2:3).
The God of the Mathematicians - Goldman
Excerpt: As Gödel told Hao Wang, “Einstein’s religion [was] more abstract, like Spinoza and Indian philosophy. Spinoza’s god is less than a person; mine is more than a person; because God can play the role of a person.” - Kurt Gödel - (Gödel is considered by many to be the greatest mathematician of the 20th century)
Presuppositional Apologetics - easy to use interactive website
Random Chaos vs. Uniformity Of Nature - Presuppositional Apologetics - video
Presuppositional Apologetics (1 of 5) - video - Atheist vs. Christian debate on the street (Law of Non-Contradiction featured prominently in the debate)
Epistemology – Why Should The Human Mind Even Be Able To Comprehend Reality? – Stephen Meyer - video – (Notes in description)
Why should the human mind be able to comprehend reality so deeply? - referenced article
No nontrivial formal utility has ever been observed to arise as a result of either chance or necessity. - David L. Abel:
Excerpt: Decision nodes, logic gates and configurable switch settings can theoretically be set randomly or by invariant law, but no nontrivial formal utility has ever been observed to arise as a result of either. Language, logic theory, mathematics, programming, computation, algorithmic optimization, and the scientific method itself all require purposeful choices at bona fide decision nodes.
BRUCE GORDON: Hawking's irrational arguments - October 2010
Excerpt: What is worse, multiplying without limit the opportunities for any event to happen in the context of a multiverse - where it is alleged that anything can spontaneously jump into existence without cause - produces a situation in which no absurdity is beyond the pale. For instance, we find multiverse cosmologists debating the "Boltzmann Brain" problem: In the most "reasonable" models for a multiverse, it is immeasurably more likely that our consciousness is associated with a brain that has spontaneously fluctuated into existence in the quantum vacuum than it is that we have parents and exist in an orderly universe with a 13.7 billion-year history. This is absurd. The multiverse hypothesis is therefore falsified because it renders false what we know to be true about ourselves. Clearly, embracing the multiverse idea entails a nihilistic irrationality that destroys the very possibility of science.
This 'lack of a guarantee', for trusting our perceptions and reasoning in science to be trustworthy in the first place, even extends into evolutionary naturalism itself;
Should You Trust the Monkey Mind? - Joe Carter
Alvin Plantinga - Science and Faith Conference - video
Philosopher Sticks Up for God
Excerpt: Theism, with its vision of an orderly universe superintended by a God who created rational-minded creatures in his own image, “is vastly more hospitable to science than naturalism,” with its random process of natural selection, he (Plantinga) writes. “Indeed, it is theism, not naturalism, that deserves to be called ‘the scientific worldview.’”
~ Alvin Plantinga
Can atheists trust their own minds? - William Lane Craig On Alvin Plantinga's Evolutionary Argument Against Naturalism - video
The following interview is sadly comical as a evolutionary psychologist realizes that neo-Darwinism can offer no guarantee that our faculties of reasoning will correspond to the truth, not even for the truth that he is purporting to give in the interview, (which begs the question of how was he able to come to that particular truthful realization, in the first place, if neo-Darwinian evolution were actually true?);
Evolutionary guru: Don't believe everything you think - October 2011
Evolutionary Psychologist: Absolutely.
Related article;
Evolutionary Guru Deceives Himself - October 12, 2011
Of related note:
"nobody to date has yet found a demarcation criterion according to which Darwin(ism) can be described as scientific" - Imre Lakatos (November 9, 1922 – February 2, 1974) a philosopher of mathematics and science, quote was as stated in 1973 LSE Scientific Method Lecture
Science and Pseudoscience - Imre Lakatos - exposing Darwinism as a ‘degenerate science program’, as a pseudoscience, using Lakatos's rigid criteria
CS Lewis – Mere Christianity
"But then with me the horrid doubt always arises whether the convictions of man’s mind, which has been developed from the mind of the lower animals, are of any value or at all trustworthy. Would any one trust in the convictions of a monkey’s mind, if there are any convictions in such a mind?" - Charles Darwin - Letter To William Graham - July 3, 1881
The ultimate irony is that this philosophy implies that Darwinism itself is just another meme, competing in the infectivity sweepstakes by attaching itself to that seductive word “science.” Dawkins ceaselessly urges us to be rational, but he does so in the name of a philosophy that implies that no such thing as rationality exists because our thoughts are at the mercy of our genes and memes. The proper conclusion is that the Dawkins poor brain has been infected by the Darwin meme, a virus of the mind if ever there was one, and we wonder if he will ever be able to find the cure.
~ Phillip Johnson
Is Randomness really the rational alternative to the ‘First Mover’ of Theists?
John Lennox - Science Is Impossible Without God - Quotes - video remix
Absolute Truth - Frank Turek - video
This following video humorously reveals the bankruptcy that atheists have in trying to ground beliefs within a materialistic, genetic reductionism, worldview;
John Cleese – The Scientists – humorous video
The following study is not surprising after realizing atheists have no solid basis within their worldview for grounding their claims about absolute truth;
Look Who's Irrational Now
It is also interesting to point out that this ‘inconsistent identity’, that was pointed out by Alvin Plantinga, which leads to the failure of neo-Darwinists to be able to make absolute truth claims for their beliefs, is what also leads to the failure of neo-Darwinists to be able to account for objective morality, in that neo-Darwinists cannot maintain a consistent identity towards a stable, unchanging, cause for objective morality within their lives;
The Knock-Down Argument Against Atheist Sam Harris' moral argument – William Lane Craig – video
Stephen Meyer - Morality Presupposes Theism (1 of 4) - video
Top Ten Reasons We Know the New Testament is True – Frank Turek – video – November 2011
(41:00 minute mark – Despite what is commonly believed (of being 'good enough' to go to heaven, in reality both Mother Teresa and Hitler fall short of the moral perfection required to meet the perfection of God’s objective moral code)
Objective Morality – The Objections – Frank Turek – video
This following short video clearly shows, in a rather graphic fashion, the ‘moral dilemma' that atheists face when trying to ground objective morality;
Cruel Logic – video
Description; A brilliant serial killer videotapes his debates with college faculty victims. The topic of his debate with his victim: His moral right to kill them.
"Atheists may do science, but they cannot justify what they do. When they assume the world is rational, approachable, and understandable, they plagiarize Judeo-Christian presuppositions about the nature of reality and the moral need to seek the truth. As an exercise, try generating a philosophy of science from hydrogen coming out of the big bang. It cannot be done. It’s impossible even in principle, because philosophy and science presuppose concepts that are not composed of particles and forces. They refer to ideas that must be true, universal, necessary and certain." Creation-Evolution Headlines
Atheism cannot ground Morality or Science
As well, as should be blatantly obvious, mathematics cannot be grounded in a materialistic worldview;
Mathematics is the language with which God has written the universe.
Galileo Galilei
The Unreasonable Effectiveness of Mathematics in the Natural Sciences - Eugene Wigner
The Underlying Mathematical Foundation Of The Universe -Walter Bradley - video
How the Recent Discoveries Support a Designed Universe - Dr. Walter L. Bradley - paper
The Five Foundational Equations of the Universe and Brief Descriptions of Each:
— Albert Einstein
“… if nature is really structured with a mathematical language and mathematics invented by man can manage to understand it, this demonstrates something extraordinary. The objective structure of the universe and the intellectual structure of the human being coincide.” – Pope Benedict XVI
This following site has a brief discussion on the fact that 'transcendent math' is not an invention of man but that transcendent math actually dictates how 'reality' is constructed and operates:
"The reason that mathematics is so effective in capturing, expressing, and modeling what we call empirical reality is that there is a ontological correspondence between the two - I would go so far as to say that they are the same thing."
Richard Sternberg - Pg. 8 How My Views On Evolution Evolved
The following site lists the unchanging constants of the universe:
The numerical values of the transcendent universal constants in physics, which are found for gravity which holds planets, stars and galaxies together; for the weak nuclear force which holds neutrons together; for electromagnetism which allows chemical bonds to form; for the strong nuclear force which holds protons together; for the cosmological constant of space/energy density which accounts for the universe’s expansion; and for many other constants which are universal in their scope, 'just so happen' to be the exact numerical values they need to be in order for life, as we know it, to be possible in this universe. A more than slight variance in the value of any individual universal constant, over the entire age of the universe, would have undermined the ability of the entire universe to have life as we know it. To put it mildly, this is a irreducibly complex condition.
Finely Tuned Gravity (1 in 10^40 tolerance; which is just one inch of tolerance allowed on a imaginary ruler stretching across the diameter of the entire universe) - video
Anthropic Principle - God Created The Universe - Michael Strauss PhD. - video
Can Life Be Merely an Accident? (Dr. Robert Piccioni - Fine Tuning) - video
Finely Tuned Universe - video
The Case For The Creator - Lee Strobel - video
This following site has a rigorously argued defense of the fine-tuning(teleological) argument:
The Teleological Argument: An Exploration of the Fine-Tuning of the Universe - ROBIN COLLINS
Here are a few sites that list the finely tuned universal constants:
Fine-Tuning For Life In The Universe
Evidence for the Fine Tuning of the Universe
Here is a defense against Victor Stenger's “no fine-tuning” claims:
Many of Victor Stenger’s “no fine-tuning” claims dubbed “highly problematic” (in new peer reviewed paper) - January 2012
Here is a layman friendly review of the preceding paper:
Is fine-tuning a fallacy? - January 2012
Psalm 119:89-90
On and on through each universal constant scientists analyze, they find such unchanging precision from the universe's creation.
As a side note to this, it seems even the 'exotic' virtual photons, which fleetingly pop into and out of existence, are tied directly to the anthropic principle through the 1 in 10^120 cosmological constant for dark energy:
Abstract: We introduce a new model for dark energy in the Universe in which a small cosmological constant is generated by ordinary electromagnetic vacuum energy. The corresponding virtual photons exist at all frequencies but switch from a gravitationally active phase at low frequencies to a gravitationally inactive phase at higher frequencies via a Ginzburg–Landau type of phase transition. Only virtual photons in the gravitationally active state contribute to the cosmological constant. A small vacuum energy density, consistent with astronomical observations, is naturally generated in this model. We propose possible laboratory tests for such a scenario based on phase synchronization in superconductors.
Shining new light on dark energy with galaxy clusters - December 2010
Excerpt: "Each model for dark energy makes a prediction that you should see this many clusters, with this particular mass, this particular distance away from us," Sehgal said. Sehgal tested these predictions by using data from the most massive galaxy clusters. The results support the standard, vacuum-energy model for dark energy.
Further note:
Virtual Particles, Anthropic Principle & Special Relativity - Michael Strauss PhD. Particle Physics - video
Here is an interesting experiment accomplished with 'virtual' particles:
Researchers create light from 'almost nothing' - June 2011
Of interest to the unchanging nature of the transcendent universal 'information' constants which govern this universe, it should be noted that the four primary forces/constants of the universe (gravity, electromagnetism, strong and weak nuclear forces) are said to be 'mediated at the speed of light' by mass-less 'mediator bosons', yet the speed of light constant is shown to be transcendent of any underlying material basis in the first place.
GRBs Expand Astronomers' Toolbox - Nov. 2009
Excerpt: a detailed analysis of the GRB (Gamma Ray Burst) in question demonstrated that photons of all energies arrived at essentially the same time. Consequently, these results falsify any quantum gravity models requiring the simplest form of a frothy space.
I would also like to point out that since time, as we understand it, comes to a complete stop at the speed of light this gives these four fundamental universal constants the characteristic of being timeless, and thus unchanging, as far as the temporal mass of this universe is concerned. In other words, we should not a-prori expect that which is timeless in nature to ever change in value. Yet contrary to what would seem to be so obvious about the a-piori stability of constants that we should expect, when scientists actually measure for variance in the fundamental constants they always end up being 'surprised' by the stability they find even though it is not to be a-priori expected:
Latest Test of Physical Constants Affirms Biblical Claim - Hugh Ross - September 2010
This following site discusses the many technical problems they had with the paper that recently (2010) tried to postulate variance within the fine structure constant:
Psalm 119:89-91
According to the materialistic philosophy, there are no apparent reasons why the value of each transcendent universal constant could not have varied dramatically from what they actually are. In fact, the presumption of materialism expects a fairly large amount of flexibility, indeed chaos, in the underlying constants for the universe, since the constants themselves are postulated to randomly 'emerge' from some, as far as I can tell, completely undefined material basis at the Big Bang.
All individual constants are of such a high degree of precision as to defy human comprehension, much less comparison to the most precise man-made machine (1 in 10^22 - gravity wave detector). For example, the cosmological constant (dark energy) is balanced to 1 part in 10^120 and the mass density constant is balanced to 1 part in 10^60.
To clearly illustrate the stunning, incomprehensible, degree of fine-tuning we are dealing with in the universe, Dr. Ross has used the illustration of adding or subtracting a single dime's worth of mass in the observable universe, during the Big Bang, would have been enough of a change in the mass density of the universe to make life impossible in this universe. This word picture he uses, with the dime, helps to demonstrate a number used to quantify that fine-tuning of mass for the universe, namely 1 part in 10^60 for mass density. Compared to the total mass of the observable universe, 1 part in 10^60 works out to about a tenth part of a dime, if not smaller.
Where Is the Cosmic Density Fine-Tuning? - Hugh Ross
Actually, 1 in 10 to the 60th for the fine-tuning of the mass density for the universe may be equal to just 1 grain of sand instead of a tenth of a dime!
As well it turns out even the immense size of the universe is necessary for life:
Evidence for Belief in God - Rich Deem
Excerpt: Isn't the immense size of the universe evidence that humans are really insignificant, contradicting the idea that a God concerned with humanity created the universe? It turns out that the universe could not have been much smaller than it is in order for nuclear fusion to have occurred during the first 3 minutes after the Big Bang. Without this brief period of nucleosynthesis, the early universe would have consisted entirely of hydrogen. Likewise, the universe could not have been much larger than it is, or life would not have been possible. If the universe were just one part in 10^59 larger, the universe would have collapsed before life was possible. Since there are only 10^80 baryons in the universe, this means that an addition of just 10^21 baryons (about the mass of a grain of sand) would have made life impossible. The universe is exactly the size it must be for life to exist at all.
Here is a video of Astrophysicist Hugh Ross explaining the anthropic cosmological principle behind the immense size of the universe as well as behind the ancient age of the universe:
We Live At The Right Time In Cosmic History - Hugh Ross - video
Here is a lesser quality video on the same subject:
We Exist At The Right Time In Cosmic History – Hugh Ross – video
I think this following music video and Bible verse sum up nicely what these transcendent universal constants are telling us about reality:
My Beloved One - Inspirational Christian Song - video
Hebrews 11:3
Although 1 part in 10^120 and 1 part in 10^60 far exceeds, by many orders of magnitude, the highest tolerance ever achieved in any man-made machine, which is 1 part in 10^22 for a gravity wave detector, according to esteemed British mathematical physicist Roger Penrose (1931-present), the odds of one particular individual constant, the 'original phase-space volume' of the universe, required such precision that the "Creator’s aim must have been to an accuracy of 1 part in 10^10^123”. This number is gargantuan. If this number were written out in its entirety, 1 with 10^123 zeros to the right, it could not be written on a piece of paper the size of the entire visible universe, even if a number were written down on each sub-atomic particle in the entire universe, since the universe only has 10^80 sub-atomic particles in it.
Roger Penrose discusses initial entropy of the universe. - video
Excerpt: "The time-asymmetry is fundamentally connected to with the Second Law of Thermodynamics: indeed, the extraordinarily special nature (to a greater precision than about 1 in 10^10^123, in terms of phase-space volume) can be identified as the "source" of the Second Law (Entropy)."
How special was the big bang? - Roger Penrose
Excerpt: This now tells us how precise the Creator's aim must have been: namely to an accuracy of one part in 10^10^123.
(from the Emperor’s New Mind, Penrose, pp 339-345 - 1989)
As well, contrary to speculation of 'budding universes' arising from Black Holes, Black Hole singularities are completely opposite the singularity of the Big Bang in terms of the ordered physics of entropic thermodynamics. In other words, Black Holes are singularities of destruction, and disorder, rather than singularities of creation and order.
Roger Penrose - How Special Was The Big Bang?
Entropy of the Universe - Hugh Ross - May 2010
Evolution is a Fact, Just Like Gravity is a Fact! UhOh!
This 1 in 10^10^123 number, for the time-asymmetry of the initial state of the 'ordered entropy' for the universe, also lends strong support for 'highly specified infinite information' creating the universe since;
Gilbert Newton Lewis - Eminent Chemist
Tom Siegfried, Dallas Morning News, 5/14/90 - Quotes attributed to Robert W. Lucky, Ex. Director of Research, AT&T, Bell Laboratories & John A. Wheeler, of Princeton & Univ. of TX, Austin in the article
This staggering level of precision, for each individual universal constant scientists can measure, is exactly why many theoretical physicists have suggested the existence of a 'super-calculating intellect' to account for this fine-tuning. This is precisely why the anthropic hypothesis has gained such a strong foothold in many scientific circles. American geneticist Robert Griffiths jokingly remarked about these recent developments;
"If we need an atheist for a debate, I go to the philosophy department. The physics department isn't much use anymore."
Further comments by leading scientists in astrophysics:
Nobel Prize winning Physicist Charles Townes
Physicist and Nobel laureate Arno Penzias
Michael Turner - (Astrophysicist at Fermilab)
John O'Keefe (astronomer at NASA)
Alan Sandage (preeminent Astronomer)
(NASA Astronomer Robert Jastrow, God and the Astronomers, p. 116.)
"Is he worthy to be called a man who attributes to chance, not to an intelligent cause, the constant motion of the heavens, the regular courses of the stars, the agreeable proportion and connection of all things, conducted with so much reason that our intellect itself is unable to estimate it rightly? When we see machines move artificially, as a sphere, a clock, or the like, do we doubt whether they are the productions of reason? -
Cicero (45 BC)
Proverbs 8:29-30
"When He marked out the foundations of the earth, then I was beside Him as a master craftsman;"
Here is a somewhat related foot note refuting the materialistic inspired conjecture of rapid inflation at the initial phase of the Big Bang (turns out rapid inflation was initially postulated to 'smooth away' the 'problems' of fine tuning):
One of cosmic (Rapid) inflation theory’s creators now questions own theory - April 2011
Excerpt: (Rapid) Inflation adds a whole bunch of really unlikely metaphysical assumptions — a new force field that has a never-before-observed particle called the “inflaton”, an expansion faster than the speed of light, an interaction with gravity waves which are themselves only inferred– just so that it can explain the unlikely contingency of a finely-tuned big bang.
But instead of these extra assumptions becoming more-and-more supported, the trend went the opposite direction, with more-and-more fine-tuning of the inflation assumptions until they look as fine-tuned as Big Bang theories. At some point, we have “begged the question”. Frankly, the moment we add an additional free variable, I think we have already begged the question. In a Bayesean comparison of theories, extra variables reduce the information content of the theory, (by the so-called Ockham factor), so these inflation theories are less, not more, explanatory than the theory they are supposed to replace.,,, after 20 years of work, if we haven’t made progress, but have instead retreated, it is time to cut bait.
The only other theory possible for the universe’s creation, other than a God-centered hypothesis, is some purposeless materialistic theory based on blind chance. Materialistic blind chance tries to escape being completely crushed, by the overwhelming weight of evidence for design in the fine-tuning of the universe, by appealing to an infinity of other un-testable universes in which all other possibilities have been played out. Yet there is absolutely no hard physical evidence to support this blind chance conjecture. In fact, the 'infinite multiverse' conjecture suffers from some very serious, and deep, flaws of logic.
The Absurdity of Inflation, String Theory & The Multiverse - Dr. Bruce Gordon - video
The End Of Materialism? - Dr. Bruce Gordon
* In the multiverse, anything can happen for no reason at all.
* In other words, the materialist is forced to believe in random miracles as a explanatory principle.
* In a Theistic universe, nothing happens without a reason. Miracles are therefore intelligently directed deviations from divinely maintained regularities, and are thus expressions of rational purpose.
* Scientific materialism is (therefore) epistemically self defeating: it makes scientific rationality impossible.
As well, this hypothetical infinite multiverse obviously begs the question of exactly which laws of physics, arising from which material basis, are telling all the other natural laws in physics what, how and when, to do the many precise unchanging things they do in these other universes? Exactly where is this universe creating machine to be located? Moreover, if an infinite number of other possible universes must exist in order to explain the fine tuning of this one, then why is it not also infinitely possible for a infinitely powerful and transcendent Creator to exist? Using the materialist same line of reasoning for an infinity of multiverses to 'explain away' the extreme fine-tuning of this one we can thusly surmise; If it is infinitely possible for God to exist then He, of 100% certainty, must exist no matter how small the probability is of His existence in one of these other infinity of universes, and since He certainly must exist in some possible world then he must exist in all possible worlds since all possibilities in all universes automatically become subject to Him since He is, by definition, transcendent and infinitely Powerful.,,, To clearly illustrate the level of absurdity of what materialists now consider their cutting edge science: The materialistic conjecture of an infinity of universes to 'explain away' the fine tuning of this one also insures the 100% probability of the existence of Pink Unicorns no matter how small the probability is of them existing (Though since a pink unicorn is a 'contingent being', instead of a 'necessary being' like God, this means that pink unicorns will only exist in 'some' possible worlds in the multiverse scenario). Thus it is self-evident that the atheistic materialists have painted themselves into a inescapable corner of logical absurdities in trying to find an escape from the Theistic implications we are finding for the fine-tuning of this universe.
The preceding argument has actually been made into a formal philosophical proof:
The Ontological Argument (The Introduction) - video
Ontological Argument For God From The Many Worlds Hypothesis - William Lane Craig - video
God Is Not Dead Yet – William Lane Craig – Page 4
The ontological argument. Anselm’s famous argument has been reformulated and defended by Alvin Plantinga, Robert Maydole, Brian Leftow, and others. God, Anselm observes, is by definition the greatest being conceivable. If you could conceive of anything greater than God, then that would be God. Thus, God is the greatest conceivable being, a maximally great being. So what would such a being be like? He would be all-powerful, all-knowing, and all-good, and he would exist in every logically possible world. But then we can argue:
6. Therefore, a maximally great being exists.
7. Therefore, God exists.
Now it might be a surprise to learn that steps 2–7 of this argument are relatively uncontroversial. Most philosophers would agree that if God’s existence is even possible, then he must exist. So the whole question is: Is God’s existence possible? The atheist has to maintain that it’s impossible that God exists. He has to say that the concept of God is incoherent, like the concept of a married bachelor or a round square. But the problem is that the concept of God just doesn’t appear to be incoherent in that way. The idea of a being which is all-powerful, all knowing, and all-good in every possible world seems perfectly coherent. And so long as God’s existence is even possible, it follows that God must exist.
I like the concluding comment about the ontological argument from the following Dr. Plantinga video:
"God then is the Being that couldn't possibly not exit."
Ontological Argument – Dr. Plantinga (3:50 minute mark)
As weird as it may sound, this following video refines the Ontological argument into a proof that, because of the characteristic of ‘maximally great love’, God must exist in more than one person:
The Ontological Argument for the Triune God - video
Here are some more resources outlining the absurdity of the multiverse conjecture:
The Multiverse Gods, final part - Robert Sheldon - June 2011
Excerpt: And so in our long journey through the purgatory of multiverse-theory, we discover as we previously discovered for materialism, there are two solutions, and only two. Either William Lane Craig is correct and multiverse-theory is just another ontological proof a personal Creator, or we follow Nietzsche into the dark nihilism of the loss of reason. Heaven or hell, there are no other solutions.,_final_part.thtml
Atheism In Crisis - The Absurdity Of The Multiverse - video
Multiverse and the Design Argument - William Lane Craig
Michael Behe has a profound answer to the infinite multiverse argument in “Edge of Evolution”. If there are infinite universes, then we couldn’t trust our senses, because it would be just as likely that our universe might only consist of a human brain that pops into existence which has the neurons configured just right to only give the appearance of past memories. It would also be just as likely that we are floating brains in a lab, with some scientist feeding us fake experiences. Those scenarios would be just as likely as the one we appear to be in now (one universe with all of our experiences being “real”). Bottom line is, if there really are an infinite number of universes out there, then we can’t trust anything we perceive to be true, which means there is no point in seeking any truth whatsoever.
“The multiverse idea rests on assumptions that would be laughed out of town if they came from a religious text.” Gregg Easterbrook
Here is a more formal refutation of the multiverse conjecture;
Bayesian considerations on the multiverse explanation of cosmic fine-tuning - V. Palonen
Conclusions: ,,, The self-sampling assumption approach by Bostrom was shown to be inconsistent with probability theory. Several reasons were then given for favoring the ‘this universe’ (TU) approach and main criticisms against TU were answered. A formal argument for TU was given based on our present knowledge. The main result is that even under a multiverse we should use the proposition “this universe is fine-tuned” as data, even if we do not know the ‘true index’ 14 of our universe. It follows that because multiverse hypotheses do not predict fine-tuning for this particular universe any better than a single universe hypothesis, multiverse hypotheses are not adequate explanations for fine-tuning. Conversely, our data on cosmic fine-tuning does not lend support to the multiverse hypotheses. For physics in general, irrespective of whether there really is a multiverse or not, the common-sense result of the above discussion is that we should prefer those theories which best predict (for this or any universe) the phenomena we observe in our universe.
Another escape that materialists have postulated was a slightly constrained 'string-theoretic' multiverse. The following expert shows why the materialistic postulation of 'string theory' is, for all intents and purposes of empirical science, a complete waste of time and energy:
Peter Woit, a PhD. in theoretical physics and a lecturer in mathematics at Columbia, points out—again and again—that string theory, despite its two decades of dominance, is just a hunch aspiring to be a theory. It hasn't predicted anything, as theories are required to do, and its practitioners have become so desperate, says Woit, that they're willing to redefine what doing science means in order to justify their labors.
This Week’s Hype - August 2011
Excerpt: ‘It’s well-known that one can find Stephen Hawking’s initials, and just about any other pattern one can think of somewhere in the CMB data.,, So, the bottom line is that they see nothing, but a press release has been issued about how wonderful it is that they have looked for evidence of a Multiverse, without mentioning that they found nothing.’ – Peter Woit PhD.
Here is another entry from Professor Peter Woit's blog where he has been fairly busy showing the failure of string theory to pass any of the experimental tests that have been proposed and put to any of its predictions:
String Theory Fails Another Test, the “Supertest” - December 2010
Excerpt: It looks like string theory has failed the “supertest”. If you believe that string theory “predicts” low-energy supersymmetry, this is a serious failure.
This Week’s Hype – November 3, 2011 by Peter Woit (Ph.D. in theoretical physics and a lecturer in mathematics at Columbia)
Excerpt: the LHC has turned out to be dud, producing no black holes or extra dimensions,
SUSY Still in Hiding - Prof. Peter Woit - Columbia University - February 2012
Excerpt: The LHC (Large Haldron Collider) has done an impressive job of investigating and leaving in tatters the SUSY/extra-dimensional speculative universe that has dominated particle theory for much of the last thirty years, and this is likely to be one of its main legacies. These fields will undoubtedly continue to play a large role in particle theory, no matter how bad the experimental situation gets, as their advocates argue “Never, never, never give up!”, but fewer and fewer people will take them seriously.
The Ultimate Guide to the Multiverse - Peter Woit - November 2011
Excerpt: The multiverse propaganda machine has now been going full-blast for more than eight years, since at least 2003 or so, and I’m beginning to wonder “what’s next?”. Once your ideas about theoretical physics reach the point of having a theory that says nothing at all, there’s no way to take this any farther. You can debate the “measure problem” endlessly in academic journals, but the cover stories about how you have revolutionized physics can only go on so long before they reach their natural end of shelf-life. This has gone on longer than I’d ever have guessed, but surely it has to end sooner or later, - Peter Woit - Senior Lecturer at Columbia University
Integral challenges physics beyond Einstein - June 30, 2011
Excerpt: However, Integral’s observations are about 10,000 times more accurate than any previous and show that any quantum graininess must be at a level of 10-48 m or smaller.,,, “This is a very important result in fundamental physics and will rule out some string theories and quantum loop gravity theories,” says Dr Laurent.
“string theory, while dazzling, has outrun any conceivable experiment that could verify it”
Excerpt: string theory, while dazzling, has outrun any conceivable experiment that could verify it—there’s zero proof that it describes how nature works.
Though to be fair, a subset of the math of the string hypothesis did get lucky with a interesting 'after the fact' prediction (post-diction) of a already known phenomena. (But this is very similar to finding an arrow on a wall, drawing a circle around it, and then declaring that you hit a bulls-eye!):
A first: String theory predicts an experimental result:
Despite this contrived 'after the fact' postdiction of a physical phenomena, string theory is constantly suffering severe setbacks in other areas, thus string theory has yet to even establish itself as a legitimate line of inquiry within science.
Testing Creation Using the Proton to Electron Mass Ratio
Excerpt: The bottom line is that the electron to proton mass ratio unquestionably joins the growing list of fundamental constants in physics demonstrated to be constant over the history of the universe.,,, For the first time, limits on the possible variability of the electron to proton mass ratio are low enough to constrain dark energy models that “invoke rolling scalar fields,” that is, some kind of cosmic quintessence. They also are low enough to eliminate a set of string theory models in physics. That is these limits are already helping astronomers to develop a more detailed picture of both the cosmic creation event and of the history of the universe. Such achievements have yielded, and will continue to yield, more evidence for the biblical model for the universe’s origin and development.
As well, even if the whole of string theory were to have been found to be true, it would have done nothing to help the materialist, and in reality, would have only added another level of 'finely tuned complexity' for us to deal with without ever truly explaining the origination of that logically coherent complexity (Logos) of the string theory in the first place.,,, Bruce Gordon, after a thorough analysis of the entire string theory framework, states the following conclusion on page 72 of Robert J. Spitzer's book 'New Proofs For The Existence Of God':
'it is clear that the string landscape hypothesis is a highly speculative construction built on shaky assumptions and,,, requires meta-level fine-tuning itself." - Bruce Gordon
Sean Carroll channels Giordano Bruno - Robert Sheldon - November 2011
Excerpt: 'In fact, on Lakatos' analysis, both String Theory and Inflation are clearly "degenerate science programs".'
This following article illustrates just how far string theory would miss the mark of explaining the fine-tuning we see even if it were found to be true:
Baron Münchhausen and the Self-Creating Universe:
Roger Penrose has calculated that the entropy of the big bang itself, in order to give rise to the life-permitting universe we observe, must be fine-tuned to one part in e10exp(123)≈10^10exp(123). Such complex specified conditions do not arise by chance, even in a string-theoretic multiverse with 10^500 different configurations of laws and constants, so an intelligent cause may be inferred. What is more, since it is the big bang itself that is fine-tuned to this degree, the intelligence that explains it as an effect must be logically prior to it and independent of it – in short, an immaterial intelligence that transcends matter, energy and space-time. (of note: 10^10^123 minus 10^500 is still, for all practical purposes, 10^10^123)
Infinitely wrong - Sheldon - November 2010
Excerpt: So you see, they gleefully cry, even [1 / 10^(10^123)] x ∞ = 1! Even the most improbable events can be certain if you have an infinite number of tries.,,,Ahh, but does it? I mean, zero divided by zero is not one, nor is 1/∞ x ∞ = 1. Why? Well for starters, it assumes that the two infinities have the same cardinality.
On Signature in the Cell, Robert Saunders Still Doesn't Get It - Jonathan M. - December 2011
Excerpt: On the issue of fine tuning, Saunders appeals to the famous anthropic argument, noting, 'The fine-tuning argument has always seemed to me to be somewhat tautologous. Had the constants been different, we would not be here to look at the Universe and its physical constants. We have a sample size of 1. Exactly 1.'
William Lane Craig has effectively countered this argument:
'[S]uppose you are dragged before a firing squad of 100 trained marksmen, all of them with rifles aimed at your heart, to be executed. The command is given; you hear the deafening sound of the guns. And you observe that you are still alive, that all of the 100 marksmen missed! Now while it is true that, "You should not be surprised that you do not observe that you are dead," nonetheless it is equally true that, "You should be surprised that you do observe that you are alive."
Since the firing squad's missing you altogether is extremely improbable, the surprise expressed is wholly appropriate, though you are not surprised that you do not observe that you are dead, since if you were dead you could not observe it.
Stephen Hawking created quite a stir with his book 'The Grand Design', in Sept. 2010, by claiming that M-theory, the dubious, and completely unsubstantiated, step child of string theory, eliminated the need for God to explain the origin of the universe. Many physicists objected to Hawking's claim, but perhaps the best argument against Hawking's claim is Hawking's very own words:
Hawking gave the game away for his 'omnipotent' claims for M-theory with this quote that he gave in response to a question from Larry King at the beginning of a interview King had with Hawking about his book:
Larry King: “If you could time travel would you go forward or backward?”
Stephen Hawking: “I would go forward and find if M-theory is indeed the theory of everything.”
Larry King and others; “Quietly laugh”
So here we have Hawking making sweeping claims with a theory that, by his own admission, is not even shown to be a complete 'theory of everything' in the first place. Further critiques of Hawking's 'omnipotent' M-theory, by leading experts in the field, can be found on the following site, as well the video of the interview between King and Hawking:
Barr on Hawking - Barry Arrington - September 2010
of related note:
Cosmologists Forced to “In the Beginning” - January 2011
Excerpt: In New Scientist today, Lisa Grossman reported on ideas presented at a conference entitled “State of the Universe” convened last week in honor of Stephen Hawking’s 70th birthday. Some birthday; he got “the worst presents ever,” she said: “two bold proposals posed serious threats to our existing understanding of the cosmos.” Of the two, the latter is most serious: a presentation showing reasons why “the universe is not eternal, resurrecting the thorny question of how to kick-start the cosmos without the hand of a supernatural creator.” It is well-known that Hawking has preferred a self-existing universe. Grossman quotes him saying, “‘A point of creation would be a place where science broke down. One would have to appeal to religion and the hand of God,’ Hawking told the meeting, at the University of Cambridge, in a pre-recorded speech.”
William Lane Craig: The Origins of the Universe - Has Hawking Eliminated God? Cambridge October 2011 - video lecture
This following quote, in critique of Hawking's book, is from Roger Penrose who worked closely with Stephen Hawking in the 1970's and 80's:
'What is referred to as M-theory isn’t even a theory. It’s a collection of ideas, hopes, aspirations. It’s not even a theory and I think the book is a bit misleading in that respect. It gives you the impression that here is this new theory which is going to explain everything. It is nothing of the sort. It is not even a theory and certainly has no observational (evidence),,, I think the book suffers rather more strongly than many (other books). It’s not a uncommon thing in popular descriptions of science to latch onto some idea, particularly things to do with string theory, which have absolutely no support from observations.,,, They are very far from any kind of observational (testability). Yes, they (the ideas of M-theory) are hardly science." – Roger Penrose – former close colleague of Stephen Hawking – in critique of Hawking’s new book ‘The Grand Design’ the exact quote in the following video clip:
Roger Penrose Debunks Stephen Hawking's New Book 'The Grand Design' - video
As a interesting sidelight to Penrose debunking Hawking's theory for how the universe began, it seems that Roger Penrose's own pet 'non-theistic' theory, for how the universe began without the need for God, also humorously fails under scrutiny:
Mr Hoyle, call your office - Robert Sheldon - November 2010
Excerpt: I think I understand what Penrose is saying, and the truly weird thing about it is that I was introduced to this theory from a DC comic book circa 1967, whereas Sir Roger only just discovered it in 2007.
BRUCE GORDON: Hawking's irrational arguments - October 2010
Excerpt: The physical universe is causally incomplete and therefore neither self-originating nor self-sustaining. The world of space, time, matter and energy is dependent on a reality that transcends space, time, matter and energy. This transcendent reality cannot merely be a Platonic realm of mathematical descriptions, for such things are causally inert abstract entities that do not affect the material world. Neither is it the case that "nothing" is unstable, as Mr. Hawking and others maintain. Absolute nothing cannot have mathematical relationships predicated on it, not even quantum gravitational ones. Rather, the transcendent reality on which our universe depends must be something that can exhibit agency - a mind that can choose among the infinite variety of mathematical descriptions and bring into existence a reality that corresponds to a consistent subset of them. This is what "breathes fire into the equations and makes a universe for them to describe.,,, the evidence for string theory and its extension, M-theory, is nonexistent; and the idea that conjoining them demonstrates that we live in a multiverse of bubble universes with different laws and constants is a mathematical fantasy. What is worse, multiplying without limit the opportunities for any event to happen in the context of a multiverse - where it is alleged that anything can spontaneously jump into existence without cause - produces a situation in which no absurdity is beyond the pale.
Here is the last power-point slide of the preceding video:
The End Of Materialism?
Many times a atheist will object to Theism by saying something along the lines of this following quote by a prominent atheist:
Yet for a atheist/materialist to say that science can ONLY study law-like events that can faithfully be predicted, time after time, is sheer hypocrisy on the part of the atheist, for indeed the atheist himself holds that strictly random, non-regular, non-law-like, indeed 'CHAOTIC' events are responsible for why the universe, and all life in it, originated, and ‘evolves’, in the first place. The atheist’s own worldview, far from demanding regularity in nature, demands that random, and thus by definition ‘non-predictable’, events be at the base of all reality and of all life. Needless to say, being ‘non-predictably random’ is the exact polar opposite of the predictability of science that atheists accuse Theists of violating when Theists posit the rational Mind of God for the origin of the universe and/or all life in it. In truth, the atheist is just extremely prejudiced as to exactly what, or more precisely WHOM, he, or she, will allow to be the source for the random, irregular, non-predicatable, non-law-like, events that they themselves require to be at the very basis of the creation events of the universe and all life in it.,,, Moreover, unlike atheistic neo-Darwinian evolution, which continually requires these non-predictable, non-law like, random events, to continually be present within the base of reality (which is the antithesis of ‘science’ according to the atheist's own criteria for excluding any Theistic answer to ever be plausible), Intelligent Design finds itself only requiring that this seemingly ‘random’, top down, implementation of novel genetic, and body plan, information at the inception of each new parent species, with all sub-speciation events thereafter, from the parent species, following a law-like adherence to the principle of genetic entropy. A principle that happens to be in accordance with perhaps the most rigorously established law in science, the second law of thermodynamics, as well as in accordance with the law of Conservation of Information as laid out by Dr. Dembski and Marks.
The following is a humorous account of the preceding:
Blackholes- The neo-Darwinists ultimate ‘god of randomness’ which can create all life in the universe (according to them)
Further notes:
The Effect of Infinite Probabilistic Resources on ID and Science (Part 2) - Eric Holloway - July 2011
Excerpt:,, since orderly configurations drop off so quickly as our space of configurations approach infinity, then this shows that infinite resources actually make it extremely easy to discriminate in favor of ID (Intelligent Design) when faced with an orderly configuration. Thus, intelligent design detection becomes more effective as the probabilistic resources increase.
What Would The World Look Like If Atheism Were Actually True? – video
When Nothing Created Everything? A humorous account of the atheist's creation myth
Materialists also 'use to' try to find a place for random blind chance to hide by proposing a universe which expands and contracts (recycles) infinitely. Even at first glance, the 'recycling universe' conjecture suffers so many questions from the second law of thermodynamics (entropy) as to render it effectively implausible as a serious theory, but now the recycling universe conjecture has been totally crushed by the hard evidence for a 'flat' universe found by the 'BOOMERANG' experiment.
Refutation Of Oscillating Universe - Michael Strauss PhD. - video:
Evidence For Flat Universe - Boomerang Project
Did the Universe Hyperinflate? - Hugh Ross - April 2010
Excerpt: Perfect geometric flatness is where the space-time surface of the universe exhibits zero curvature (see figure 3). Two meaningful measurements of the universe's curvature parameter, ½k, exist. Analysis of the 5-year database from WMAP establishes that -0.0170 < ½k < 0.0068.4 Weak gravitational lensing of distant quasars by intervening galaxies places -0.031 < ½k < 0.009.5 Both measurements confirm the universe indeed manifests zero or very close to zero geometric curvature,,,
Einstein's 'Biggest Blunder' Turns Out to Be Right - November 2010
Excerpt: By providing more evidence that the universe is flat, the findings bolster the cosmological constant model for dark energy over competing theories such as the idea that the general relativity equations for gravity are flawed. "We have at this moment the most precise measurements of lambda that a single technique can give," Marinoni said.
A 'flat universe', which is actually another very surprising finely-tuned 'coincidence' of the universe, means this universe, left to its own present course of accelerating expansion due to Dark Energy, will continue to expand forever, thus fulfilling the thermodynamic equilibrium of the second law to its fullest extent (entropic 'Heat Death' of the universe).
The Future of the Universe
Excerpt: After all the black holes have evaporated, (and after all the ordinary matter made of protons has disintegrated, if protons are unstable), the universe will be nearly empty. Photons, neutrinos, electrons and positrons will fly from place to place, hardly ever encountering each other. It will be cold, and dark, and there is no known process which will ever change things. --- Not a happy ending.
The End Of Cosmology? - Lawrence M. Krauss and Robert J. Scherrer
Psalm 102:25-27
Big Rip
Excerpt: The Big Rip is a cosmological hypothesis first published in 2003, about the ultimate fate of the universe, in which the matter of universe, from stars and galaxies to atoms and subatomic particles, are progressively torn apart by the expansion of the universe at a certain time in the future. Theoretically, the scale factor of the universe becomes infinite at a finite time in the future.
Thermodynamic Argument Against Evolution - Thomas Kindell - video
entire video:
Does God Exist? The End Of Christianity - Finding a Good God in an Evil World - video
Romans 8:18-21
The only hard evidence there is, the stunning precision found in the transcendent universal constants, points overwhelmingly to intelligent design by a transcendent Creator who originally established what the transcendent universal constants of physics could and would do during the creation of the universe. The hard evidence left no room for the blind chance of natural laws in this universe. Thus, materialism was forced into appealing to an infinity of un-testable universes for it was left with no footing in this universe. These developments in science make it seem like materialism was cast into the abyss of nothingness in so far as rationally explaining the fine-tuning of the universe.
As well as the universe having a transcendent beginning, thus confirming the Theistic postulation in Genesis 1:1, the following recent discovery of a 'Dark Age' for the early universe uncannily matches up with the Bible passage in Job 38:4-11.
For the first 400,000 years of our universe’s expansion, the universe was a seething maelstrom of energy and sub-atomic particles. This maelstrom was so hot, that sub-atomic particles trying to form into atoms would have been blasted apart instantly, and so dense, light could not travel more than a short distance before being absorbed. If you could somehow live long enough to look around in such conditions, you would see nothing but brilliant white light in all directions. When the cosmos was about 400,000 years old, it had cooled to about the temperature of the surface of the sun. The last light from the "Big Bang" shone forth at that time. This "light" is still detectable today as the Cosmic Background Radiation.
This 400,000 year old “baby” universe entered into a period of darkness. When the dark age of the universe began, the cosmos was a formless sea of particles. By the time the dark age ended, a couple of hundred million years later, the universe lit up again by the light of some of the galaxies and stars that had been formed during this dark era. It was during the dark age of the universe that the heavier chemical elements necessary for life, carbon, oxygen, nitrogen and most of the rest, were first forged, by nuclear fusion inside the stars, out of the universe’s primordial hydrogen and helium.
It was also during this dark period of the universe the great structures of the modern universe were first forged. Super-clusters, of thousands of galaxies stretching across millions of light years, had their foundations laid in the dark age of the universe. During this time the infamous “missing dark matter”, was exerting more gravity in some areas than in other areas; drawing in hydrogen and helium gas, causing the formation of mega-stars. These mega-stars were massive, weighing in at 20 to more than 100 times the mass of the sun. The crushing pressure at their cores made them burn through their fuel in only a million years. It was here, in these short lived mega-stars under these crushing pressures, the chemical elements necessary for life were first forged out of the hydrogen and helium. The reason astronomers can’t see the light from these first mega-stars, during this dark era of the universe’s early history, is because the mega-stars were shrouded in thick clouds of hydrogen and helium gas. These thick clouds prevented the mega-stars from spreading their light through the cosmos as they forged the elements necessary for future life to exist on earth. After about 200 million years, the end of the dark age came to the cosmos. The universe was finally expansive enough to allow the dispersion of the thick hydrogen and helium “clouds”. With the continued expansion of the universe, the light, of normal stars and dwarf galaxies, was finally able to shine through the thick clouds of hydrogen and helium gas, bringing the dark age to a close. (How The Stars Were Born - Michael D. Lemonick),9171,1376229-2,00.html
Job 26:10
Job 38:4-11
“Where were you when I laid the foundations of the earth? Tell me if you have understanding. Who determined its measurements? Surely you know! Or who stretched a line upon it? To what were its foundations fastened? Or who laid its cornerstone, When the morning stars sang together, and all the sons of God shouted for joy? Or who shut in the sea with doors, when it burst forth and issued from the womb; When I made the clouds its garment, and thick darkness its swaddling band; When I fixed my limit for it, and set bars and doors; When I said, ‘This far you may come but no farther, and here your proud waves must stop!"
Hidden Treasures in the Book of Job - video
History of The Universe Timeline- Graph Image
As a sidelight to this, every class of elements that exists on the periodic table of elements is necessary for complex carbon-based life to exist on earth. The three most abundant elements in the human body, Oxygen, Carbon, Hydrogen, 'just so happen' to be the most abundant elements in the universe, save for helium which is inert. A truly amazing coincidence that strongly implies 'the universe had us in mind all along'. Even uranium the last naturally occurring 'stable' element on the period table of elements is necessary for life. The heat generated by the decay of uranium is necessary to keep a molten core in the earth for an extended period of time, which is necessary for the magnetic field surrounding the earth, which in turn protects organic life from the harmful charged particles of the sun. As well, uranium decay provides the heat for tectonic activity and the turnover of the earth's crustal rocks, which is necessary to keep a proper mixture of minerals and nutrients available on the surface of the earth, which is necessary for long term life on earth. (Denton; Nature's Destiny). These following articles and videos give a bit deeper insight into the crucial role that individual elements play in allowing life:
The Elements: Forged in Stars - video
Michael Denton - We Are Stardust - Uncanny Balance Of The Elements - Fred Hoyle Atheist to Deist/Theist - video
The Role of Elements in Life Processes
Periodic Table - Interactive web page for each element
Periodic Table - with stability, and native state, of elements listed
To answer our second question (What evidence is found for the earth's ability to support life?) we will consider many 'life-enabling characteristics', for the galaxy, sun, moon and earth, which establish that the earth is extremely unique in its ability to host advanced life in this universe. Again, the presumption of materialistic blind chance being the only reasonable cause must be dealt with. As opposed to the anthropic hypothesis which starts off by presuming the earth is extremely unique in this universe, materialism begins by presuming planets that are able to support life are fairly common in this universe. In fact astronomer Frank Drake (1930-present) proposed, in 1961, advanced life should be fairly common in the universe. He developed a rather crude equation called the 'Drake equation'. He plugged in some rather optimistic numbers and reasoned that ten worlds with advanced life should be in our Milky Way galaxy alone. One estimate of his worked out to roughly one trillion worlds with advanced life throughout the entire universe. Much to the disappointment of Star Trek fans, the avalanche of recent scientific evidence has found the probability of finding another planet with the ability to host advanced life in this universe is not nearly as likely as astronomer Frank Drake had originally predicted.
First, our solar system is not nearly as haphazard as some materialists would have us believe:
Weird Orbits of Neighbors Can Make 'Habitable' Planets Not So Habitable - May 2010
Thank God for Jupiter - July 2010
Excerpt: The July 16, 1994 and July 19, 2009 collision events on Jupiter demonstrate just how crucial a role the planet plays in protecting life on Earth. Without Jupiter’s gravitational shield our planet would be pummeled by frequent life-exterminating events. Yet Jupiter by itself is not an adequate shield. The best protection is achieved via a specific arrangement of several gas giant planets. The most massive gas giant must be nearest to the life support planet and the second most massive gas giant the next nearest, followed by smaller, more distant gas giants. Together Jupiter, Saturn, Uranus, and Neptune provide Earth with this ideal shield.
Of Gaps, Fine-Tuning and Newton’s Solar System - Cornelius Hunter - July 2011
Excerpt: The new results indicate that the solar system could become unstable if diminutive Mercury, the inner most planet, enters into a dance with Jupiter, the fifth planet from the Sun and the largest of all. The resulting upheaval could leave several planets in rubble, including our own. Using Newton’s model of gravity, the chances of such a catastrophe were estimated to be greater than 50/50 over the next 5 billion years. But interestingly, accounting for Albert Einstein’s minor adjustments (according to his theory of relativity), reduces the chances to just 1%.
Milankovitch Cycle Design - Hugh Ross - August 2011
Excerpt: In all three cases, Waltham proved that the actual Earth/Moon/solar system manifests unusually low Milankovitch levels and frequencies compared to similar alternative systems. ,,, Waltham concluded, “It therefore appears that there has been anthropic selection for slow Milankovitch cycles.” That is, it appears Earth was purposely designed with slow, low-level Milankovitch cycles so as to allow humans to exist and thrive.
Astrobiology research is revealing the high specificity and interdependence of the local parameters required for a habitable environment. These two features of the universe make it unlikely that environments significantly different from ours will be as habitable. At the same time, physicists and cosmologists have discovered that a change in a global parameter can have multiple local effects. Therefore, the high specificity and interdependence of local tuning and the multiple effects of global tuning together make it unlikely that our tiny island of habitability is part of an archipelago. Our universe is a small target indeed.
Astronomer Guillermo Gonzalez - P. 625, The Nature of Nature
Among Darwin Advocates, Premature Celebration over Abundance of Habitable Planets - September 2011
Excerpt: Today, such processes as planet formation details, tidal forces, plate tectonics, magnetic field evolution, and planet-planet, planet-comet, and planet-asteroid gravitational interactions are found to be relevant to habitability.,,, What's more, not only are more requirements for habitability being discovered, but they are often found to be interdependent, forming a (irreducibly) complex "web." This means that if a planetary system is found not to satisfy one of the habitability requirements, it may not be possible to compensate for this deficit by adjusting a different parameter in the system. - Guillermo Gonzalez
In fact when trying to take into consideration all the different factors necessary to make life possible on any earth-like planet, we learn some very surprising things:
Privileged Planet Principle - Michael Strauss - video
Privileged Planet Principle - Scot Pollock (Notes In Description) - video
There are many independent characteristics required to be fulfilled for any planet to host advanced carbon-based life. Two popular books have recently been written, 'The Privileged Planet' by Guillermo Gonzalez and 'Rare Earth' by Donald Brownlee, indicating the earth is extremely unique in its ability to host advanced life in this universe. Privileged Planet, which holds that any life supporting planet in the universe will also be 'privileged' for observation of the universe, has now been made into a excellent video.
The Privileged Planet - video
Privileged Planet - Observability Correlation - Gonzalez and Richards - video
The very conditions that make Earth hospitable to intelligent life also make it well suited to viewing and analyzing the universe as a whole.
- Jay Richards
A few videos of related 'observability correlation' interest;
Continued notes:
Our Privileged Planet (1 of 5) - Guillermo Gonzalez - video lecture
Guillermo Gonzalez & Stephen Meyer on Coral Ridge - video (Part 1)
Guillermo Gonzalez & Stephen Meyer on Coral Ridge - video (Part 2)
Fine Tuning Of The Universe - Privileged Planet (Notes In Description) - video
There is also a well researched statistical analysis of the many independent 'life-enabling characteristics' that gives strong mathematical indication that the earth is extremely unique in its ability to support complex life in this universe and shows, from a naturalistic perspective, that a life permitting planet is extremely unlikely to 'accidentally emerge' in the universe. The statistical analysis, which is actually a extreme refinement of the Drake's probability equation, is dealt with by astro-physicist Dr. Hugh Ross (1945-present) in his paper 'Probability for Life on Earth'.
Probability For Life On Earth - List of Parameters, References, and Math - Hugh Ross
A few of the items in Dr. Ross's "life-enabling characteristics" list are; Planet location in a proper galaxy's 'habitable zone'; Parent star size; Surface gravity of planet; Rotation period of planet; Correct chemical composition of planet; Correct size for moon; Thickness of planets’ crust; Presence of magnetic field; Correct and stable axis tilt; Oxygen to nitrogen ratio in atmosphere; Proper water content of planet; Atmospheric electric discharge rate; Proper seismic activity of planet; Many complex cycles necessary for a stable temperature history of planet; Translucent atmosphere; Various complex, and inter-related, cycles for various elements etc.. etc.. I could go a lot further in details for there are a total of 322 known parameters on his list, (816 in his updated list), which have to be met for complex life to be possible on Earth, or on a planet like Earth. Individually, these limits are not that impressive but when we realize ALL these limits have to be met at the same time and not one of them can be out of limits for any extended period of time, then the condition becomes 'irreducibly complex' and the probability for a world which can host advanced life in this universe becomes very extraordinary. Here is the final summary of Dr. Hugh Ross's 'conservative' estimate for the probability of another life-hosting world in this universe.
Probability for occurrence of all 322 parameters =10^-388
Dependency factors estimate =10^96
Longevity requirements estimate =10^14
Probability for occurrence of all 322 parameters = 10^-304
Maximum possible number of life support bodies in universe =10^22
Dr. Hugh Ross, and his team, have now drastically refined this probability of 1 in 10^304 to a staggering probability of 1 in 10^1054:
Does the Probability for ETI = 1?
Excerpt; On the Reasons To Believe website we document that the probability a randomly selected planet would possess all the characteristics intelligent life requires is less than 10^-304. A recent update that will be published with my next book, Hidden Purposes: Why the Universe Is the Way It Is, puts that probability at 10^-1054.
Linked from Appendix C from Dr. Ross's book, 'Why the Universe Is the Way It Is';
Probability for occurrence of all 816 parameters ≈ 10^-1333
dependency factors estimate ≈ 10^324
longevity requirements estimate ≈ 10^45
Probability for occurrence of all 816 parameters ≈ 10^-1054
Maximum possible number of life support bodies in observable universe ≈ 10^22
Thus, less than 1 chance in 10^1032 exists that even one such life-support body would occur anywhere in the universe without invoking divine miracles.
Hugh Ross - Evidence For Intelligent Design Is Everywhere (10^-1054) - video
Isaiah 40:28
Hugh Ross - Four Main Research Papers
"This most beautiful system of the sun, planets, and comets, could only proceed from the counsel and dominion of an intelligent Being. … This Being governs all things, not as the soul of the world, but as Lord over all; and on account of his dominion he is wont to be called “Lord God” παντοκρατωρ [pantokratòr], or “Universal Ruler”… The Supreme God is a Being eternal, infinite, absolutely perfect."
Sir Isaac Newton - Quoted from what many consider the greatest science masterpiece of all time, his book "Principia"
Related note:
The Loneliest Planet - ALAN HIRSHFELD - December 2011
Excerpt: While he cannot prove a galaxy-wide absence of other civilizations, he presents an array of modern, research-based evidence that renders that conclusion eminently reasonable.
The following is another surprising Privileged Planet parameter which fairly recently came to light:
Cosmic Rays Hit Space Age High
Excerpt: "The entire solar system from Mercury to Pluto and beyond is surrounded by a bubble of solar magnetism called "the heliosphere."
The Protective Boundaries of our Solar System - NASA IBEX - video
Many people simply presume that solar system formation is fairly well understood by science but that simply is not the case:
Are Saturn’s Rings Evolving? July - 2010
Excerpt: Not all is well in theories of planet formation, though. Astrobiology Magazine complained this week that many of the exoplanets discovered around other stars do not fit theories of the origin of the solar system.
Planet-Making Theories Don’t Fit Extrasolar Planets;
Excerpt: “The more new planets we find, the less we seem to know about how planetary systems are born, according to a leading planet hunter.” We cannot apply theories that fit our solar system to other systems:
Medium size worlds upset “Earth is not unique” planet modelling - January 2012
Excerpt: But what has puzzled observers and theorists so far is the high proportion of planets — roughly one-third to one-half — that are bigger than Earth but smaller than Neptune. These ‘super-Earths’ are emerging as a new category of planet — and they could be the most numerous of all (see ‘Super-Earths rising’). Their very existence upsets conventional models of planetary formation and, furthermore, most of them are in tight orbits around their host star, precisely where the modellers say they shouldn’t be.
The solar systems that scientists are currently finding, in our corner of the universe, simply do not match the 'predictions':
Exoplanet Hunters Fail Predictions – August 2010
Excerpt: There are so many surprises in this field—almost nothing is turning out as we expected. There are Jupiter-mass planets in three-day orbits. There are planets with masses that are between those of the terrestrial planets in our solar system and the gas giants in the outer part of our solar system. There are Jupiter-mass planets with hugely inflated radii—at densities far lower than what we thought were possible for a gas-giant planet. There are giant planets with gigantic solid cores that defy models of planet formation, which say there shouldn’t be enough solids available in a protoplanetary disk to form a planet that dense. There are planets with tilted orbits. There are planets that orbit the poles of their stars, in so-called circumpolar orbits. There are planets that orbit retrograde—that is, they orbit in the opposite direction of their star’s rotation. There are systems of planets that are in configurations that are hard to describe given our understanding of planet formation. For instance, some planets are much too close to one another.
But a lot of those surprises have to do with the fact that we have only one example of a planetary system—our solar system—to base everything on, right?
What’s interesting is that we’ve found very little that resembles our example.
Did cosmic collisions make habitable planets rare? - August 2011
Excerpt: Most of the planets in our own solar system, including Earth, have relatively circular orbits and are lined up along a plane that isn't tilted much from the sun's equator. They also orbit in the same direction around the sun as our star spins. But many other solar systems are not so neatly ordered, harboring planets that move in the opposite direction of their stars' spin on highly tilted orbits.
Man in the moon looking younger - August 17, 2011
Excerpt: "The extraordinarily young age of this lunar sample either means that the Moon solidified significantly later than previous estimates, or that we need to change our entire understanding of the Moon's geochemical history," Carlson said.
Habitable Zones Constrained by Tides
Excerpt: “I think that the chances for life existing on exoplanets in the traditional habitable zone around low-mass stars are pretty bleak, when considering tidal effects,” lead researcher Rene Heller remarked. “If you want to find a second Earth, it seems that you need to look for a second Sun.”
New Definition Could Further Limit Habitable Zones Around Distant Suns: - June 2009
... liquid water is essential for life, but a planet also must have plate tectonics to pull excess carbon from its atmosphere and confine it in rocks to prevent runaway greenhouse warming. Tectonics, or the movement of the plates that make up a planet's surface, typically is driven by radioactive decay in the planet's core, but a star's gravity can cause tides in the planet, which creates more energy to drive plate tectonics.... Barnes added, "The bottom line is that tidal forcing is an important factor that we are going to have to consider when looking for habitable planets."
Tidal forces could squeeze out planetary water - February 2012
Excerpt: Alien planets might experience tidal forces powerful enough to remove all their water, leaving behind hot, dry worlds like Venus, researchers said. These findings might significantly affect searches for habitable exoplanets, scientists explained. Although some planets might dwell in regions around their star friendly enough for life as we know it, they could actually be lifelessly dry worlds. ,,, After a tidal Venus loses all its water and becomes uninhabitable, the tides could alter its orbit so that it no longer experiences tidal heating. As such, it might no longer appear like a tidal Venus, but look just like any other world in its star's habitable zone, fooling researchers into thinking it is potentially friendly for life, even though it has essentially been sterilized.
A Renewed Concern: Flares and Astrobiology - January 2011
Many Stars Are Planet Destroyers - September 2010
As well, tectonic activity, which is itself finely tuned for life on earth, is not nearly as well understood by science as many people think:
Dominant paradigms in science and their attendant anomalies - David Tyler - July 2010
Excerpt: The relative contributions made by these different forces have been much discussed by scientists developing plate tectonic theory. However, firm conclusions have not been reached. If there is any consensus, it is that boundary forces are more significant than drag forces, and that slab pull is more significant than ridge push.
As well, the prevailing 'impact theory', for how our life-enabling moon is hypothesized to have been formed, is not nearly as well established as some people think:
Researchers discover water on the moon is widespread, similar to Earth's - July 2010
In related note about water on Mars:
Surface of Mars an unlikely place for life after 600-million-year drought, say scientists - February 2012
Excerpt: The results of the soil analysis at the Phoenix site suggest the surface of Mars has been arid for hundreds of millions of years, despite the presence of ice and the fact that previous research has shown that Mars may have had a warmer and wetter period in its earlier history more than three billion years ago. The team also estimated that the soil on Mars had been exposed to liquid water for at most 5,000 years since its formation billions of years ago. They also found that Martian and Moon soil is being formed under the same extremely dry conditions.
Satellite images and previous studies have proven that the soil on Mars is uniform across the planet, which suggests that the results from the team's analysis could be applied to all of Mars. This implies that liquid water has been on the surface of Mars for far too short a time for life to maintain a foothold on the surface.
In further evidence for the 'Privileged Planet'; relative element abundances, complex symbiotic chemistry, water, and the fine tuning of light for carbon based life on earth, all display extraordinary characteristics of design which also lend strong support to the Privileged Planet principle:
It is found that not only must the right chemicals be present on earth to have life, the chemicals must also be present on the earth in 'specific abundances'.
Elemental Evidence of Earth’s Divine Design - Hugh Ross PhD. - April 2010
Table: Earth’s Anomalous Abundances - Page 8
The twenty-five elements listed below must exist on Earth in specific abundances for advanced life and/or support of civilization to be possible. For each listed element the number indicates how much more or less abundant it is, by mass, in Earth’s crust, relative to magnesium’s abundance, as compared to its average abundance in the rest of the Milky Way Galaxy, also relative to the element magnesium. Asterisks denote “vital poisons,” essential elements that if too abundant would be toxic to advanced life, but if too scarce would fail to provide the quantities of nutrients essential for advanced life. The water measure compares the amount of water in and on Earth relative to the minimum amount the best planet formation models would predict for a planet the mass of Earth orbiting a star identical to the Sun at the same distance from the Sun.
carbon* 1,200 times less
nitrogen* 2,400 times less
fluorine* 50 times more
sodium* 20 times more
aluminum 40 times more
phosphorus* 4 times more
sulfur* 60 times less
potassium* 90 times more
calcium 20 times more
titanium 65 times more
vanadium* 9 times more
chromium* 5 times less
nickel* 20 times less
cobalt* 5 times less
selenium* 30 times less
yttrium 50 times more
zirconium 130 times more
niobium 170 times more
molybdenum* 5 times more
tin* 3 times more
iodine* 3 times more
gold 5 times less
lead 170 times more
uranium 16,000 times more
thorium 23,000 times more
water 250 times less
Compositions of Extrasolar Planets - July 2010
Excerpt: ,,,the presumption that extrasolar terrestrial planets will consistently manifest Earth-like chemical compositions is incorrect. Instead, the simulations revealed “a wide variety of resulting planetary compositions.
Chances of Exoplanet Life ‘Impossible’? Or ’100 percent’? - February 2011
Excerpt: Howard Smith, an astrophysicist at Harvard University, made the headlines earlier this year when he announced, rather pessimistically, that aliens will unlikely exist on the extrasolar planets we are currently detecting. “We have found that most other planets and solar systems are wildly different from our own. They are very hostile to life as we know it,” “Extrasolar systems are far more diverse than we expected, and that means very few are likely to support life,” he said.
Elements of ExoPlanets - February 2012
Excerpt: "I was expecting some subtle changes in our stellar evolution models in terms of the surface temperature and brightness — I was not looking for such a dramatic change in the lifetimes of the stars," said study lead author Patrick Young,
The stunning long term balance of the necessary chemicals for life, on the face of the earth, is a wonder in and of itself:
Chemical Cycles:
Long term chemical balance is essential for life on earth. Complex symbiotic chemical cycles keep the amount of elements on the earth surface in relatively perfect balance and thus in steady supply to the higher life forms that depend on them to remain stable. This is absolutely essential for the higher life forms to exist on Earth for any extended period of time.
Carbon and Nitrogen Cycles - music video
Carbon Cycle - Illustration
When we look at water, the most common substance on earth and in our bodies, we find many odd characteristics which clearly appear to be designed. These oddities are absolutely essential for life on earth. Some simple life can exist without the direct energy of sunlight, some simple life can exist without oxygen; but no life can exist without water. Water is called a universal solvent because it has the unique ability to dissolve a far wider range of substances than any other solvent. This 'universal solvent' ability of water is essential for the cells of living organisms to process the wide range of substances necessary for life. Another oddity is water expands as it becomes ice, by an increase of about 9% in volume. Thus, water floats when it becomes a solid instead of sinking. This is an exceedingly rare ability. Yet if it were not for this fact, lakes and oceans would freeze from the bottom up. The earth would be a frozen wasteland, and human life would not be possible. Water also has the unusual ability to pull itself into very fine tubes and small spaces, defying gravity. This is called capillary action. This action is essential for the breakup of mineral bearing rocks into soil. Water pulls itself into tiny spaces on the surface of a rock and freezes; it expands and breaks the rock into tinier pieces, thus producing soil. Capillary action is also essential for the movement of water through soil to the roots of plants. It is also essential for the movement of water from the roots to the tops of the plants, even to the tops of the mighty redwood trees,,,
Towering Giants Of Teleological Beauty - October 2010
,,,Capillary action is also essential for the circulation of the blood in our very own capillary blood vessels. Water's melting and boiling point are not where common sense would indicate they should be when we look at its molecular weight. The three sister compounds of water all behave as would be predicted by their molecular weight. Oddly, water just happens to have melting and boiling points that are of optimal biological utility. The other properties of water we measure, like its specific slipperiness (viscosity) and its ability to absorb and release more heat than any other natural substance, have to be as they are in order for life to be possible on earth. Even the oceans have to be the size they are in order to stabilize the temperature of the earth so human life may be possible. On and on through each characteristic we can possibly measure water with, it turns out to be required to be almost exactly as it is or complex life on this earth could not exist. No other liquid in the universe comes anywhere near matching water in its fitness for life (Denton: Nature's Destiny).
Here is a more complete list of the anomalous life enabling properties of water:
Anomalous life enabling properties of water
Water's remarkable capabilities - December 2010 - Peer Reviewed
Excerpt: All these traits are contained in a simple molecule of only three atoms. One of the most difficult tasks for an engineer is to design for multiple criteria at once. ... Satisfying all these criteria in one simple design is an engineering marvel. Also, the design process goes very deep since many characteristics would necessarily be changed if one were to alter fundamental physical properties such as the strong nuclear force or the size of the electron.
Water's quantum weirdness makes life possible - October 2011
Excerpt: They found that the hydrogen-oxygen bonds were slightly longer than the deuterium-oxygen ones, which is what you would expect if quantum uncertainty was affecting water’s structure. “No one has ever really measured that before,” says Benmore.
We are used to the idea that the cosmos’s physical constants are fine-tuned for life. Now it seems water’s quantum forces can be added to this “just right” list.
Water cycle song - music video
Although water is semi-famous for its many mysterious and 'miraculous' characteristics that enable physical life to be possible on earth. This following article goes even deeper than the 'science of water' to reveal many mysterious 'spiritual characteristics' of water found in the Bible that enable a deeper 'spiritual life' to even be possible.
WATER, as a metaphor (in the Bible)
Visible light is also incredibly fine-tuned for life to exist. Though visible light is only a tiny fraction of the total electromagnetic spectrum coming from the sun, it happens to be the "most permitted" portion of the sun's spectrum allowed to filter through the our atmosphere. All the other bands of electromagnetic radiation, directly surrounding visible light, happen to be harmful to organic molecules, and are almost completely absorbed by the atmosphere. The tiny amount of harmful UV radiation, which is not visible light, allowed to filter through the atmosphere is needed to keep various populations of single cell bacteria from over-populating the world (Ross; The size of light's wavelengths and the constraints on the size allowable for the protein molecules of organic life, also seem to be tailor-made for each other. This "tailor-made fit" allows photosynthesis, the miracle of sight, and many other things that are necessary for human life. These specific frequencies of light (that enable plants to manufacture food and astronomers to observe the cosmos) represent less than 1 trillionth of a trillionth (10^-24) of the universe's entire range of electromagnetic emissions. Like water, visible light also appears to be of optimal biological utility (Denton; Nature's Destiny).
Extreme Fine Tuning of Light for Life and Scientific Discovery - video
Fine Tuning Of Universal Constants, Particularly Light - Walter Bradley - video
Fine Tuning Of Light to the Atmosphere, to Biological Life, and to Water - graphs
Intelligent Design - Light and Water - video
Proverbs 3:19
"The Lord by wisdom founded the earth: by understanding He established the heavens;"
The scientific evidence clearly indicates the earth is extremely unique in this universe in its ability to support life. These facts are rigorously investigated and cannot be dismissed out of hand as some sort of glitch in accurate information. Here materialism can offer no competing theory of blind chance which can offset the overwhelming evidence for the earth's apparent intelligent design which enables her to host complex life. A materialist can only assert we are extremely 'lucky'. This is some kind of fantastic luck materialists believe. The odds of another life-supporting earth 'just so happening' in this universe (1 in 10^1054) are not even remotely as good as the odds a blind man would have in finding one pre-selected grain of sand, which has been hidden in all vast expanses of deserts and beaches of the world, with only one try, and then the blind man repeatedly finding the grain of sand, first time every time, several times over! These fantastic odds against another life-supporting world 'just so happening' in this universe have not even been refined to their final upper limits yet. The odds will only get far worse for the atheistic materialist.,,, When faced with such staggering odds against life 'just so happening' elsewhere in the universe, I find the Search for Extra-Terrestrial Intelligence by SETI to be amusing:
SETI - Search For Extra-Terrestrial Intelligence receives message from God,,,,, Almost - video
I find it strange that the SETI (Search for Extra-Terrestrial Intelligence) organization spends millions of dollars vainly searching for signs of extra-terrestrial life in this universe, when all anyone has to do to make solid contact with THE primary 'extra-terrestrial intelligence' of the entire universe is to pray with a sincere heart. God certainly does not hide from those who sincerely seek Him. Actually communicating with the Creator of the universe is certainly a lot more exciting than not communicating with some little green men that in all probability do not even exist, unless of course, God decided to create them!
Isaiah 45:18-19
For thus says the Lord, who created the heavens, who is God, who formed the earth and made it, who established it, who did not create it in vain, who formed it to be inhabited: “I am the Lord, and there is no other. I have not spoken in secret, in a dark place of the earth; I did not say to the seed of Jacob, ‘seek me in vain’; I, the Lord speak righteousness, I declare things that are right.”
“When I was young, I said to God, 'God, tell me the mystery of the universe.' But God answered, 'That knowledge is for me alone.' So I said, 'God, tell me the mystery of the peanut.' Then God said, 'Well George, that's more nearly your size.' And he told me.”
George Washington Carver
Inventors - George Washington Carver
Excerpt: "God gave them to me" he (Carver) would say about his ideas, "How can I sell them to someone else?"
Hearing God – Are We Listening? – video
To answer our third question (What evidence is found for the first life on earth?) we will look at the evidence for the first appearance of life on earth and the chemical activity of the first bacterial life on earth. Once again, the presumption of materialistic blind chance being the only reasonable cause must be dealt with.
First and foremost, we now have evidence for photosynthetic life suddenly appearing on earth, as soon as water appeared on the earth, in the oldest sedimentary rocks ever found on earth.
The Sudden Appearance Of Photosynthetic Life On Earth - video
Team Claims It Has Found Oldest Fossils By NICHOLAS WADE - August 2011
Excerpt: Rocks older than 3.5 billion years have been so thoroughly cooked as to destroy all cellular structures, but chemical traces of life can still be detected. Chemicals indicative of life have been reported in rocks 3.5 billion years old in the Dresser Formation of Western Australia and, with less certainty, in rocks 3.8 billion years old in Greenland.
Earliest (Bacteria) fossils found in Australia, 3.4 bya
Dr. Hugh Ross - Origin Of Life Paradox - video
Archaean Microfossils and the Implications for Intelligent Design - August 2011
Excerpt: This dramatically limits the amount of time, and thus the probablistic resources, available to those who wish to invoke purely unguided and purposeless material processes to explain the origin of life.
Could Impacts Jump-Start the Origin of Life? - Hugh Ross - article
Moreover, the atmosphere is found not to be 'reducing' on the early earth as is commonly taught in materialistic origin of life scenarios;
Time to end speculation about a reducing atmosphere for the early Earth - David Tyler - December 2011
Excerpt: Using zircons dated to almost 4.4 Ga, the researchers have analysed their redox state (a measure of the degree of oxygenation of the mineral).,,, "[In] this issue, Trail et al. report their analysis of the sole mineral survivors of the Hadean, zircon samples more than 4 billion years old. Their findings allowed them to determine the 'fugacity' of oxygen in Hadean magmatic melts, a quantity that acts as a measure of magmatic redox conditions. Unexpectedly, the zircons record oxygen fugacities identical to those in the present-day mantle, leading the authors to conclude that Hadean volcanic gases were as highly oxidized as those emitted today."
Late Heavy Bombardment - graph
Life - Its Sudden Origin and Extreme Complexity - Dr. Fazale Rana - video
The evidence scientists have discovered in the geologic record is stunning in its support of the anthropic hypothesis. The oldest sedimentary rocks on earth, known to science, originated underwater (and thus in relatively cool environs) 3.86 billion years ago. Those sediments, which are exposed at Isua in southwestern Greenland, also contain the earliest chemical evidence (fingerprint) of 'photosynthetic' life [Nov. 7, 1996, Nature]. This evidence had been fought by materialists since it is totally contrary to their evolutionary theory. Yet, Danish scientists were able to bring forth another line of geological evidence to substantiate the primary line of geological evidence for photo-synthetic life in the earth’s earliest sedimentary rocks.
U-rich Archaean sea-floor sediments from Greenland - indications of +3700 Ma oxygenic photosynthesis (2003)
Moreover, evidence for 'sulfate reducing' bacteria has been discovered alongside the evidence for photosynthetic bacteria:
When Did Life First Appear on Earth? - Fazale Rana - December 2010
Excerpt: The primary evidence for 3.8 billion-year-old life consists of carbonaceous deposits, such as graphite, found in rock formations in western Greenland. These deposits display an enrichment of the carbon-12 isotope. Other chemical signatures from these formations that have been interpreted as biological remnants include uranium/thorium fractionation and banded iron formations. Recently, a team from Australia argued that the dolomite in these formations also reflects biological activity, specifically that of sulfate-reducing bacteria.
Thus we now have fairly conclusive evidence for bacterial life in the oldest sedimentary rocks ever found by scientists on earth.
On the third page of this following site there is a illustration that shows some of the interdependent, ‘life-enabling’, biogeochemical complexity of different types of bacterial life on Earth.,,,
Microbial Mat Ecology – Image on page 92 (third page down)
,,,Please note, that if even one type of bacteria group did not exist in this complex cycle of biogeochemical interdependence, that was illustrated on the third page of the preceding site, then all of the different bacteria would soon die out. This essential biogeochemical interdependence, of the most primitive different types of bacteria that we have evidence of on ancient earth, makes the origin of life ‘problem’ for neo-Darwinists that much worse. For now not only do neo-Darwinists have to explain how the ‘miracle of life’ happened once with the origin of photosynthetic bacteria, but they must now also explain how all these different types bacteria, that photosynthetic bacteria are dependent on, in this irreducibly complex biogeochemical web, miraculously arose just in time to supply the necessary nutrients, in their biogeochemical link in the chain, for photosynthetic bacteria to continue to survive. As well, though not clearly illustrated in the illustration on the preceding site, please note that a long term tectonic cycle, of the turnover the Earth’s crustal rocks, must also be fine-tuned to a certain degree with the bacteria and thus plays a important ‘foundational’ role in the overall ecology of the biogeochemical system that must be accounted for as well.
As a side issue to these complex interdependent biogeochemical relationships, of the 'simplest' bacteria on Earth, that provide the foundation for a 'friendly' environment on Earth that is hospitable to higher lifeforms above them to eventually appear on earth, it is interesting to note man's failure to build a miniature, self-enclosed, ecology in which humans could live for any extended periods of time.
Biosphere 2 – What Went Wrong?
Excerpt: Other Problems
Biosphere II’s water systems became polluted with too many nutrients. The crew had to clean their water by running it over mats of algae, which they later dried and stored.
Also, as a symptom of further atmospheric imbalances, the level of dinitrogen oxide became dangerously high. At these levels, there was a risk of brain damage due to a reduction in the synthesis of vitamin B12.
The simplest photosynthetic life on earth is exceedingly complex, too complex to happen by accident even if the primeval oceans had been full of pre-biotic soup.
The Miracle Of Photosynthesis - electron transport - video
Electron transport and ATP synthesis during photosynthesis - Illustration
There is actually a molecular motor, that surpasses man made motors in engineering parameters, that is integral to the photosynthetic process:
Evolution vs ATP Synthase - Molecular Machine - video
The ATP Synthase Enzyme - an exquisite motor necessary for first life - video
The photosynthetic process is clearly a irreducible complex condition:
"There is no question about photosynthesis being Irreducibly Complex. But it’s worse than that from an evolutionary perspective. There are 17 enzymes alone involved in the synthesis of chlorophyll. Are we to believe that all intermediates had selective value? Not when some of them form triplet states that have the same effect as free radicals like O2. In addition if chlorophyll evolved before antenna proteins, whose function is to bind chlorophyll, then chlorophyll would be toxic to cells. Yet the binding function explains the selective value of antenna proteins. Why would such proteins evolve prior to chlorophyll? and if they did not, how would cells survive chlorophyll until they did?" Uncommon Descent Blogger
Evolutionary biology: Out of thin air John F. Allen & William Martin:
The measure of the problem is here: “Oxygenetic photosynthesis involves about 100 proteins that are highly ordered within the photosynthetic membranes of the cell."
Of note: anoxygenic (without oxygen) photosynthesis is even more of a complex chemical pathway than oxygenic photosynthesis is:
"Remarkably, the biosynthetic routes needed to make the key molecular component of anoxygenic photosynthesis are more complex than the pathways that produce the corresponding component required for the oxygenic form."; - Fazale Rana
Also of note: Anaerobic organisms, that live without oxygen, and most viruses are quickly destroyed by direct contact with oxygen.
In what I find to be a very fascinating discovery, it is found that photosynthetic life, which is an absolutely vital link that all higher life on earth is dependent on for food, uses ‘non-local’ quantum mechanical principles to accomplish photosynthesis. Moreover, this is direct evidence that a non-local, beyond space-time mass-energy, cause must be responsible for ‘feeding’ all life on earth, since all higher life on earth is eventually completely dependent on the non-local ‘photosynthetic energy’ in which to live their lives on this earth:
Non-Local Quantum Entanglement In Photosynthesis - video with notes in description
Evidence for wavelike energy transfer through quantum coherence in photosynthetic systems. Gregory S. Engel, Nature (12 April 2007)
Photosynthetic complexes are exquisitely tuned to capture solar light efficiently, and then transmit the excitation energy to reaction centres, where long term energy storage is initiated.,,,, This wavelike characteristic of the energy transfer within the photosynthetic complex can explain its extreme efficiency, in that it allows the complexes to sample vast areas of phase space to find the most efficient path. ---- Conclusion? Obviously Photosynthesis is a brilliant piece of design by "Someone" who even knows how quantum mechanics works.
Quantum Mechanics at Work in Photosynthesis: Algae Familiar With These Processes for Nearly Two Billion Years - Feb. 2010
Excerpt: "We were astonished to find clear evidence of long-lived quantum mechanical states involved in moving the energy. Our result suggests that the energy of absorbed light resides in two places at once -- a quantum superposition state, or coherence -- and such a state lies at the heart of quantum mechanical theory.",,, "It suggests that algae knew about quantum mechanics nearly two billion years before humans," says Scholes.
Life Masters Physics - Feb. 2010
Excerpt: Collini et al.2 report evidence suggesting that a process known as quantum coherence ‘wires’ together distant molecules in the light-harvesting apparatus of marine cryptophyte algae.,,,“Intriguingly, recent work has documented that light-absorbing molecules in some photosynthetic proteins capture and transfer energy according to quantum-mechanical probability laws instead of classical laws at temperatures up to 180 K,”. ,,, “This contrasts with the long-held view that long-range quantum coherence between molecules cannot be sustained in complex biological systems, even at low temperatures.”
Materialists have tried to get around this crushing evidence for the sudden appearance of extremely complex, and elegant, photosynthetic life in the oldest sedimentary rocks ever found on earth by suggesting life could have originated in extreme conditions at hydrothermal vents. Yet, ignoring the fact that hydrothermal vents were themselves submerged in the water that produced the earliest sedimentary rocks that we find evidence for photosynthetic life in, the materialists are once again betrayed by the empirical evidence:
Refutation Of Hyperthermophile Origin Of Life scenario
Excerpt: While life, if appropriately designed, can survive under extreme physical and chemical conditions, it cannot originate under those conditions. High temperatures are especially catastrophic for evolutionary models. The higher the temperature climbs, the shorter the half-life for all the crucial building block molecules,
The origin of life--did it occur at high temperatures?
Excerpt: Prebiotic chemistry points to a low-temperature origin because most biochemicals decompose rather rapidly at temperatures of 100 degrees C (e.g., half-lives are 73 min for ribose, 21 days for cytosine, and 204 days for adenine).
Chemist explores the membranous origins of the first living cell:
Excerpt: Conditions in geothermal springs and similar extreme environments just do not favor membrane formation, which is inhibited or disrupted by acidity, dissolved salts, high temperatures, and calcium, iron, and magnesium ions. Furthermore, mineral surfaces in these clay-lined pools tend to remove phosphates and organic chemicals from the solution. "We have to face up to the biophysical facts of life," Deamer said. "Hot, acidic hydrothermal systems are not conducive to self-assembly processes."
Nick Lane Takes on the Origin of Life and DNA - Jonathan McLatchie - July 2010
Excerpt: numerous problems abound for the hydrothermal vent hypothesis for the origin of life,,,, For example, as Stanley Miller has pointed out, the polymers are "too unstable to exist in a hot prebiotic environment." Miller has also noted that the RNA bases are destroyed very quickly in water when the water boils. Intense heating also has the tendency to degrade amino acids such as serine and threonine. A more damning problem lies in the fact that the homochirality of the amino acids is destroyed by heating.
Of course, accounting for the required building blocks is an interesting problem, but from the vantage of ID proponents, it is only one of many problems facing materialistic accounts of the origin of life. After all, it is the sequential arrangement of the chemical constituents -- whether that happens to be amino acids in proteins, or nucleotides in DNA or RNA -- to form complex specified information (a process which requires the production of specified irregularity), which compellingly points toward the activity of rational deliberation (Intelligence).
Origin-of-Life Theorists Fail to Explain Chemical Signatures in the Cell - Casey Luskin - February 15, 2012
Excerpt: (Nick) Lane also notes that the study has a significant conceptual flaw. "To suggest that the ionic composition of primordial cells should reflect the composition of the oceans is to suggest that cells are in equilibrium with their medium, which is close to saying that they are not alive," Lane says. "Cells require dynamic disequilibrium -- that is what being alive is all about.",,, Our uniform experience affirms that specified information-whether inscribed hieroglyphics, written in a book, encoded in a radio signal, or produced in a simulation experiment-always arises from an intelligent source, from a mind and not a strictly material process.
(Stephen Meyer - Signature in the Cell, p. 347)
Besides hydrothermal vents, it is also commonly, and erroneously, presumed in many grade school textbooks that life slowly arose in a primordial ocean of pre-biotic soup. Yet there are no chemical signatures in the geologic record indicating that a ocean of this pre-biotic soup ever existed. In fact, as stated earlier, the evidence indicates that complex photosynthetic life appeared on earth as soon as water appeared on earth with no chemical signature whatsoever of prebiotic activity.
The Primordial Soup Myth:
Excerpt: "Accordingly, Abelson(1966), Hull(1960), Sillen(1965), and many others have criticized the hypothesis that the primitive ocean, unlike the contemporary ocean, was a "thick soup" containing all of the micromolecules required for the next stage of molecular evolution. The concept of a primitive "thick soup" or "primordial broth" is one of the most persistent ideas at the same time that is most strongly contraindicated by thermodynamic reasoning and by lack of experimental support." - Sidney Fox, Klaus Dose on page 37 in Molecular Evolution and the Origin of Life.
New Research Rejects 80-Year Theory of 'Primordial Soup' as the Origin of Life - Feb. 2010
"Despite bioenergetic and thermodynamic failings the 80-year-old concept of primordial soup remains central to mainstream thinking on the origin of life, But soup has no capacity for producing the energy vital for life."
William Martin - an evolutionary biologist
Moreover, water is considered a 'universal solvent' which is a very thermodynamic obeying and thus origin of life defying fact.
Abiogenic Origin of Life: A Theory in Crisis - Arthur V. Chadwick, Ph.D.
Excerpt: The synthesis of proteins and nucleic acids from small molecule precursors represents one of the most difficult challenges to the model of prebiological evolution. There are many different problems confronted by any proposal. Polymerization is a reaction in which water is a product. Thus it will only be favored in the absence of water. The presence of precursors in an ocean of water favors depolymerization of any molecules that might be formed. Careful experiments done in an aqueous solution with very high concentrations of amino acids demonstrate the impossibility of significant polymerization in this environment. A thermodynamic analysis of a mixture of protein and amino acids in an ocean containing a 1 molar solution of each amino acid (100,000,000 times higher concentration than we inferred to be present in the prebiological ocean) indicates the concentration of a protein containing just 100 peptide bonds (101 amino acids) at equilibrium would be 10^-338 molar. Just to make this number meaningful, our universe may have a volume somewhere in the neighborhood of 10^85 liters. At 10^-338 molar, we would need an ocean with a volume equal to 10^229 universes (100, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000) just to find a single molecule of any protein with 100 peptide bonds. So we must look elsewhere for a mechanism to produce polymers. It will not happen in the ocean.
Professor Arthur E. Wilder-Smith "Any amounts of polypeptide which might be formed will be broken down into their initial components (amino acids) by the excess of water. The ocean is thus practically the last place on this or any other planet where the proteins of life could be formed spontaneously from amino acids. Yet nearly all text-books of biology teach this nonsense to support evolutionary theory and spontaneous biogenesis ... Has materialistic Neo-Darwinian philosophy overwhelmed us to such an extent that we forget or overlook the well-known facts of science and of chemistry in order to support this philosophy? ... Without exception all Miller's amino acids are completely unsuitable for any type of spontaneous biogenesis. And the same applies to all and any randomly formed substances and amino acids which form racemates. This statement is categorical and absolute and cannot be affected by special conditions."
A Substantial Conundrum Confronting The Chemical Origin Of Life - August 2011
Excerpt: 1. Peptide bond formation is an endothermic reaction. This means that the reaction requires the absorption of energy: It does not take place spontaneously.
2. Peptide bond formation is a condensation reaction. It hence involves the net removal of a water molecule. So not only can this reaction not happen spontaneously in an aqueous medium, but, in fact, the presence of water inhibits the reaction.
Sea Salt only adds to this thermodynamic problem:
...even at concentrations seven times weaker than in today’s oceans. The ingredients of sea salt are very effective at dismembering membranes and preventing RNA units (monomers) from forming polymers any longer than two links (dimers). Creation Evolution News - Sept. 2002
The following article and videos have a fairly good overview of the major problems facing any naturalistic Origin Of Life scenario:
On the Origin of Life - The Insurmountable Problems Of Chemistry - Charles Thaxton PhD. - video
Evolution's Fatal Flaw - The Origin Of Life - Chris Ashcraft PhD - video
Evolutionary Assumptions - Life from dead chemicals? - video
"Shut up," Coyne Explained - January 2012
Excerpt: Coyne writes that Kuhn's criticisms of current origin-of-life research are "absurdly funny" -- even though such research (into the origin of life) has not led to the abiotic formation of a single functional protein, much less a living cell.
Though the 1953 Miller-Urey experiment is often touted, by evolutionists, as evidence that life can spontaneously arise on the primitive earth, evolutionists always seem to fail to mention the severe problems that were found with the 1953 Miller-Urey experiment in which, among other severe problems, only a few of the building blocks for proteins, amino-acids, were ever actually produced in minute quantities in an artificial environment:
Miller-Urey Experiment
Excerpt: While successful in trapping some amino acids, this is now recognized as not being analogous to the real natural world - there are no known or even hypothesized protective traps observed in nature. What they made was 85% tar, 13% carboxylic acid add, (both toxic to life) and only 2% amino acids. Problem: mostly only 2 of the 20 different amino acids life needed were produced, and they are much more likely to bond with the tar or acid than they are with each other. Half of the amino acids were right-handed and half were left-handed. This is a problem because all proteins are left-handed and even the smallest proteins have 70-100 amino acids all in the precise order.
Rare Amino Acid Challenge to the Origin of Life
Excerpt: Granted that on early Earth arginine and lysine are either totally missing or available only at such extremely low abundance levels as to be irrelevant, and recognizing that arginine-and-lysine-containing proteins are essential for the crucial protein-DNA interactions, naturalistic explanations for the origin of life are ruled out.
Programming of Life - Amino Acids - video
The problem of 'left handed' homochirality found in the Miller-Urey experiment is of no small concern to any Origin Of Life scenario put forth by evolutionists:
Dr. Charles Garner on the problem of Chirality in nature and Origin of Life Research - audio
Origin Of Life - Problems With Proteins - Homochirality - Charles Thaxton PhD. - video
Homochirality and Darwin - Robert Sheldon - April 2010
Excerpt: there is no abiotic path from a racemic solution to a stereo-active solution of amino acid(s) that doesn't involve a biotic chiral agent, be it chiral beads or Louis Pasteur himself. Like many critiques of ID, the problem with these "Darwinist" solutions is that they always smuggle in some information, in this case, chiral agents.
Homochirality and Darwin: part 2 - Robert Sheldon - May 2010
Excerpt: With regard to the deniers who think homochirality is not much of a problem, I only ask whether a solution requiring multiple massive magnetized black-hole supernovae doesn't imply there is at least a small difficulty to overcome? A difficulty, perhaps, that points to the non-random nature of life in the cosmos?
Left-Handed Amino Acids Explained Naturally? Not by a long shot! - January 2010
The severity of the homochirality problem begins to highlight the number one question facing any Origin Of Life research. Namely, "Where is the specified complexity (information) coming from?" Even this recent 'evolution friendly' article readily admitted the staggering level of 'specified complexity' (information) being dealt with in the first cell:
Was our oldest ancestor a proton-powered rock? - Oct. 2009
Excerpt: “There is no doubt that the progenitor of all life on Earth, the common ancestor, possessed DNA, RNA and proteins, a universal genetic code, ribosomes (the protein-building factories), ATP and a proton-powered enzyme for making ATP. The detailed mechanisms for reading off DNA and converting genes into proteins were also in place. In short, then, the last common ancestor of all life looks pretty much like a modern cell.”
So much for 'simple' life!
Colossians 1:16
I think David Abel, the director of the Gene Emergence Project, does a very good job of highlighting just how crucial 'information' is to Origin of Life research:
Chance and necessity do not explain the origin of life: Trevors JT, Abel DL.
Excerpt: Minimal metabolism would be needed for cells to be capable of growth and division. All known metabolism is cybernetic--that is, it is programmatically and algorithmically organized and controlled.
Does New Scientific Evidence About the Origin of Life Put an End to Darwinian Evolution? - Stephen Meyer - 4 part video
The "simplest" life currently found on the earth that is able to exist outside of a test tube, the parasitic Mycoplasmal, has between a 0.56-1.38 megabase genome which results in drastically reduced biosynthetic capabilities and explains their dependence on a host. Yet even with this 'reduced complexity' we find that even the 'simplest' life on earth exceeds man's ability to produce such complexity in his computer programs or in his machines:
Three Subsets of Sequence Complexity and Their Relevance to Biopolymeric Information - David L. Abel and Jack T. Trevors - Theoretical Biology & Medical Modelling, Vol. 2, 11 August 2005, page 8
Mycoplasma Genitalium - The "Simplest" Life On Earth - video
First-Ever Blueprint of 'Minimal Cell' Is More Complex Than Expected - Nov. 2009
There’s No Such Thing as a ‘Simple’ Organism - November 2009
Excerpt: In short, there was a lot going on in lowly, supposedly simple M. pneumoniae, and much of it is beyond the grasp of what’s now known about cell function.
Simplest Microbes More Complex than Thought - Dec. 2009
Excerpt: PhysOrg reported that a species of Mycoplasma,, “The bacteria appeared to be assembled in a far more complex way than had been thought.” Many molecules were found to have multiple functions: for instance, some enzymes could catalyze unrelated reactions, and some proteins were involved in multiple protein complexes."
On top of the fact that we now know the genetic code of the simplest organism ever found on Earth is a highly advanced 'logic' code, which far surpasses man's ability to devise such codes, we also know for a fact no operation of logic ever performed by a computer will ever increase the algorithmic code inherent in a computer's program, i.e. Bill Gates will never use random number generators and selection software to write more highly advanced computer codes:
The 'simplest' life possible, by experiment, requires several hundred distinct interlocking protein types which further interlock with several hundred distinct genes in the DNA, which are all further interlocked with RNA, and the associated protein machinery, in an irreducibly complex manner that defies all attempts to reduce its complexity further.
William Dembski calls the DNA, RNA, Protein interlock problem :
"Irreducible Complexity on steroids".
Biological function and the genetic code are interdependent: Voie:
Life never ceases to astonish scientists as its secrets are more and more revealed. In particular the origin of life remains a mystery:
Chaos, Solitons and Fractals, 2006, Vol 28(4), 1000-1004.
Journey Inside The Cell - DNA to mRNA to Proteins - Stephen Meyer - Signature In The Cell - video
Recently Craig Venter, of deciphering the human genome fame, created quite a stir, in the public imagination, by claiming to have 'created' synthetic life. The fact is that the claim was a gross exaggeration of what Venter's group had actually accomplished, for the truth is that they did not truly 'create' anything, not even a single protein or gene, they merely copied information that was already present in life before:
Is Craig Venter’s Synthetic Cell Really Life? - July 2010
Excerpt: David Baltimore was closer to the truth when he told the New York Times that the researchers had not created life so much as mimicked it. It might be still more accurate to say that the researchers mimicked one part and borrowed the rest.
Stephen Meyer Discusses Craig Venter's "Synthetic Life" on CBN - video
Aside from the small, but impressive, technical feat of Venter's work, in reality researchers can't even say with 100% certainty what the minimal gene set for a genome is, much less are they anywhere near creating life from scratch in the laboratory:
John I. Glass et al., "Essential Genes of a Minimal Bacterium," PNAS, USA103 (2006): 425-30.
Excerpt: "An earlier study published in 1999 estimated the minimal gene set to fall between 265 and 350. A recent study making use of a more rigorous methodology estimated the essential number of genes at 382.,,, Given the evolutionary path of extreme genome reduction taken by M. genitalium, it is likely that all its 482 protein-coding genes are in some way necessary for effective growth in its natural habitat"
Life’s Minimum Complexity Supports ID - Fazale Rana - November 2011
Excerpt page 16: The Stanford investigators determined that the essential genome of C. crescentus consisted of just over 492,000 base pairs (genetic letters), which is close to 12 percent of the overall genome size. About 480 genes comprise the essential genome, along with nearly 800 sequence elements that play a role in gene regulation.,,, When the researchers compared the C. crescentus essential genome to other essential genomes, they discovered a limited match. For example, 320 genes of this microbe’s basic genome are found in the bacterium E. coli. Yet, of these genes, over one-third are nonessential for E. coli. This finding means that a gene is not intrinsically essential. Instead, it’s the presence or absence of other genes in the genome that determine whether or not a gene is essential.,,
The following study highlights the inherent fallacy in gene deletion/knockout experiments that have led many scientists astray in the past as to underestimating what the minimal genome for life should actually be:
Minimal genome should be twice the size - 2006
Excerpt: “Previous attempts to work out the minimal genome have relied on deleting individual genes in order to infer which genes are essential for maintaining life,” said Professor Laurence Hurst from the Department of Biology and Biochemistry at the University of Bath. “This knock out approach misses the fact that there are alternative genetic routes, or pathways, to the production of the same cellular product. “When you knock out one gene, the genome can compensate by using an alternative gene. “But when you repeat the knock out experiment by deleting the alternative, the genome can revert to the original gene instead. “Using the knock-out approach you could infer that both genes are expendable from the genome because there appears to be no deleterious effect in both experiments.
Mouse Genome Knockout Experiment
Jonathan Wells on Darwinism, Science, and Junk DNA - November 2011
Excerpt: Mice without “junk” DNA. In 2004, Edward Rubin] and a team of scientists at Lawrence Berkeley Laboratory in California reported that they had engineered mice missing over a million base pairs of non-protein-coding (“junk”) DNA—about 1% of the mouse genome—and that they could “see no effect in them.”
But molecular biologist Barbara Knowles (who reported the same month that other regions of non-protein-coding mouse DNA were functional) cautioned that the Lawrence Berkeley study didn’t prove that non-protein-coding DNA has no function. “Those mice were alive, that’s what we know about them,” she said. “We don’t know if they have abnormalities that we don’t test for.”And University of California biomolecular engineer David Haussler said that the deleted non-protein-coding DNA could have effects that the study missed. “Survival in the laboratory for a generation or two is not the same as successful competition in the wild for millions of years,” he argued.
In 2010, Rubin was part of another team of scientists that engineered mice missing a 58,000-base stretch of so-called “junk” DNA. The team found that the DNA-deficient mice appeared normal until they (along with a control group of normal mice) were fed a high-fat, high-cholesterol diet for 20 weeks. By the end of the study, a substantially higher proportion of the DNA-deficient mice had died from heart disease. Clearly, removing so-called “junk” DNA can have effects that appear only later or under other circumstances.
The probabilities against life 'spontaneously' originating are simply overwhelming:
In fact Dean Kenyon, who was a leading Origin Of Life researcher as well as a college textbook author on the subject in the 1970s, admitted after years of extensive research:
"We have not the slightest chance for the chemical evolutionary origin of even the simplest of cells".
Origin Of Life? - Probability Of Protein And The Information Of DNA - Dean Kenyon - video
Probability Of A Protein and First Living Cell - Chris Ashcraft - video (notes in description)
Stephen Meyer - Proteins by Design - Doing The Math - video
Signature in the Cell - Book Review - Ken Peterson
Excerpt: If we assume some minimally complex cell requires 250 different proteins then the probability of this arrangement happening purely by chance is one in 10 to the 164th multiplied by itself 250 times or one in 10 to the 41,000th power.
In fact years ago Fred Hoyle arrived at approximately the same number, one chance in 10^40,000, for life spontaneously arising. From this number, Fred Hoyle compared the random emergence of the simplest bacterium on earth to the likelihood “a tornado sweeping through a junkyard might assemble a Boeing 747 therein”. Fred Hoyle also compared the chance of obtaining just one single functioning protein molecule, by chance combination of amino acids, to a solar system packed full of blind men solving Rubik’s Cube simultaneously.
Professor Harold Morowitz shows the Origin of Life 'problem' escalates dramatically over the 1 in 10^40,000 figure when working from a thermodynamic perspective,:
Dr. Don Johnson lays out some of the probabilities for life in this following video:
Probabilities Of Life - Don Johnson PhD. - 38 minute mark of video
a typical functional protein - 1 part in 10^175
the required enzymes for life - 1 part in 10^40,000
a living self replicating cell - 1 part in 10^340,000,000
Programming of Life - Probability of a Cell Evolving - video
Programming of Life - video playlist:
Also of interest is the information content that is derived in a cell when working from a thermodynamic perspective:
“a one-celled bacterium, e. coli, is estimated to contain the equivalent of 100 million pages of Encyclopedia Britannica. Expressed in information in science jargon, this would be the same as 10^12 bits of information. In comparison, the total writings from classical Greek Civilization is only 10^9 bits, and the largest libraries in the world – The British Museum, Oxford Bodleian Library, New York Public Library, Harvard Widenier Library, and the Moscow Lenin Library – have about 10 million volumes or 10^12 bits.” – R. C. Wysong
'The information content of a simple cell has been estimated as around 10^12 bits, comparable to about a hundred million pages of the Encyclopedia Britannica."
of note: The 10^12 bits of information number for a bacterium is derived from entropic considerations, which is, due to the tightly integrated relationship between information and entropy, considered the most accurate measure of the transcendent quantum information/entanglement constraining a 'simple' life form to be so far out of thermodynamic equilibrium.
"Is there a real connection between entropy in physics and the entropy of information? ....The equations of information theory and the second law are the same, suggesting that the idea of entropy is something fundamental..." Siegfried, Dallas Morning News, 5/14/90, [Quotes Robert W. Lucky, Ex. Director of Research, AT&T, Bell Laboratories & John A. Wheeler, of Princeton & Univ. of TX, Austin]
For calculations, from the thermodynamic perspective, please see the following site:
Moleular Biophysics – Information theory. Relation between information and entropy: - Setlow-Pollard, Ed. Addison Wesley
Excerpt: Linschitz gave the figure 9.3 x 10^12 cal/deg or 9.3 x 10^12 x 4.2 joules/deg for the entropy of a bacterial cell. Using the relation H = S/(k In 2), we find that the information content is 4 x 10^12 bits. Morowitz' deduction from the work of Bayne-Jones and Rhees gives the lower value of 5.6 x 10^11 bits, which is still in the neighborhood of 10^12 bits. Thus two quite different approaches give rather concordant figures.
Further comments:
The Theist holds the Intellectual High-Ground - March 2011
Excerpt: To get a range on the enormous challenges involved in bridging the gaping chasm between non-life and life, consider the following: “The difference between a mixture of simple chemicals and a bacterium, is much more profound than the gulf between a bacterium and an elephant.” (Dr. Robert Shapiro, Professor Emeritus of Chemistry, NYU)
Scientists Prove Again that Life is the Result of Intelligent Design - Rabbi Moshe Averick - August 2011
Excerpt: “To go from bacterium to people is less of a step than to go from a mixture of amino acids to a bacterium.” - Dr. Lynn Margulis
Here is a related article with several more excellent quotes, by leading origin of life researchers, commenting on the 'problem' that the origin of life presents to 'science' (actually it is only a problem for atheists who 'believe' that 'science' equates strictly to their reductive materialistic view of reality):
Faye Flam: Atheist Writer Who is Long on Graciousness, Long on Civility… Short on Reason, Short on Scientific Realities - Rabbi Averick
The following videos give a small glimpse into how the probabilities are calculated for the origin of life:
The Origin of Life - Lecture On Probability - John Walton - Professor Of Chemistry - short video
Entire Video:
Protein Molecules and "Simple" Cells - video
Further comment:
Ilya Prigogine was an eminent chemist and physicist who received two Nobel Prizes in chemistry. Regarding the probability of life originating by accident, he said:
Ilya Prigogine, Gregoire Nicolis, and Agnes Babloyantz, Physics Today 25, pp. 23-28. (Sourced Quote)
Anyone who has debated an evolutionist, over the probability of life spontaneous arising, knows that it can be quite frustrating because eventually the person will realize that many times the evolutionist will not be reasonable on the matter and that he is operating on nothing but blind faith that life can spontaneously arise by unintelligent processes. Here are a few more links relating to the (im)probability of life:
Probability's Nature and Nature's Probability: A Call to Scientific Integrity - Donald E. Johnson
Excerpt: "one should not be able to get away with stating “it is possible that life arose from non-life by ...” or “it’s possible that a different form of life exists elsewhere in the universe” without first demonstrating that it is indeed possible (non-zero probability) using known science. One could, of course, state “it may be speculated that ... ,” but such a statement wouldn’t have the believability that its author intends to convey by the pseudo-scientific pronouncement."
Intelligent Design: Required by Biological Life? K.D. Kalinsky - Pg. 11
Excerpt: It is estimated that the simplest life form would require at least 382 protein-coding genes. Using our estimate in Case Four of 700 bits of functional information required for the average protein, we obtain an estimate of about 267,000 bits for the simplest life form. Again, this is well above Inat and it is about 10^80,000 times more likely that ID (Intelligent Design) could produce the minimal genome than mindless natural processes.
Could Chance Arrange the Code for (Just) One Gene?
"our minds cannot grasp such an extremely small probability as that involved in the accidental arranging of even one gene (10^-236)."
Even the low end 'hypothetical' probability estimate given by a evolutionist, for life spontaneously arising, is fantastically impossible:
General and Special Evidence for Intelligent Design in Biology:
- The requirements for the emergence of a primitive, coupled replication-translation system, which is considered a candidate for the breakthrough stage in this paper, are much greater. At a minimum, spontaneous formation of: - two rRNAs with a total size of at least 1000 nucleotides - ~10 primitive adaptors of ~30 nucleotides each, in total, ~300 nucleotides - at least one RNA encoding a replicase, ~500 nucleotides (low bound) is required. In the above notation, n = 1800, resulting in E < 10^-1018. That is, the chance of life occurring by natural processes is 1 in 10 followed by 1018 zeros. (Koonin's intent was to show that short of postulating a multiverse of an infinite number of universes (Many Worlds), the chance of life occurring on earth is vanishingly small.)
The cosmological model of eternal inflation and the transition from chance to biological evolution in the history of life - Eugene V Koonin
Origin of life both one of the hardest and most important problems in science - November 2011
It should be stressed that Dr. Koonin tries to account for the origination of the massive amount of functional information, required for the Origin of Life, by trying to access an 'unelucidated and undirected' mechanism of Quantum Mechanics called 'Many Worlds in one'(He is trying to invoke a ‘materialistic miracle’). Besides Dr. Koonin ignoring the fact that Quantum Events, on a whole, are strictly restricted to the transcendent universal laws/constants of the universe, including and especially the second law of thermodynamics, for as far back in time in the universe as we can 'observe', it is also fair to note, in criticism to Dr. Koonin’s scenario, that appealing to the undirected infinite probabilistic resource, of the quantum mechanics of the Many Worlds scenario, actually greatly increases the amount of totally chaotic information one would expect to see generated 'randomly' on the earth. In fact the Many Worlds scenario actually greatly increases the likelihood we would witness total chaos, instead of order, surrounding us as the following video points out:
Finely Tuned Big Bang, Elvis In Many Worlds, and the Schroedinger Equation – Granville Sewell – audio
Though Koonin appeals to a 'modern version' of Many Worlds, called 'Many Worlds in One' (Alex Vilenkin), 'Many Worlds' was originally devised because of the inability of materialistic scientists to find adequate causation for quantum wave collapse in the first place (i.e. that is, adequate causation that did not involve God!):
Quantum mechanics
Perhaps some may say Everett’s Many Worlds in not absurd, if so, then in some other parallel universe, where Elvis happens to now be president of the United States, they actually do think that the Many Worlds conjecture is absurd! That type of 'flexible thinking' within science I find to be completely absurd! And that one 'Elvis' example from Many Worlds is just small potatoes to the levels of absurdity that we would actually witness in reality if Many Worlds were actually true.
Though Eugene Koonin is correct to recognize that the infinite probabilistic resource postulated in ‘Many Worlds’ does not absolutely preclude the sudden appearance of massive amounts of functional information on the earth, he is very incorrect to disregard the ‘Logos’ of John 1:1 needed to correctly specify the ‘precisely controlled mechanism of implementation’ for the massive amounts of complex functional and specified information witnessed abruptly and mysteriously appearing in the first life on earth, nor for the mysterious appearing of the subsequent 'sudden' appearances of life on earth. i.e. Koonin must sufficiently account for the 'cause' for the 'effect' he wants to explain. And as I have noted previously, Stephen Meyer clearly points out that the only known cause now in operation, sufficient to explain the generation of the massive amounts of functional information we find in life, is intelligence:
Stephen C. Meyer – What is the origin of the digital information found in DNA? – August 2010 - video
Evolutionist Koonin's estimate of 1 in 10 followed by 1018 zeros, for the probability of the simplest self-replicating molecule 'randomly occurring', is a fantastically large number. The number, 10^1018, if written out in its entirety, would be a 1 with one-thousand-eighteen zeros following to the right! The universe itself is estimated to have only 1 with 80 zeros following to the right particles in it. This is clearly well beyond the 10^150 universal probability bound set by William Dembski and is thus clearly a irreducibly complex condition. Basically Koonin, in appealing to a never before observed 'materialistic miracle' from the 'Many Worlds' hypothesis, clearly illustrates that the materialistic argument essentially appears to be like this:
Premise One: No materialistic cause of specified complex information is known.
Conclusion: Therefore, it must arise from some unknown materialistic cause
On the other hand, Stephen Meyer describes the intelligent design argument as follows:
“Conclusion: Intelligent design constitutes the best, most causally adequate, explanation for the information in the cell.”
There remains one and only one type of cause that has shown itself able to create functional information like we find in cells, books and software programs -- intelligent design. We know this from our uniform experience and from the design filter -- a mathematically rigorous method of detecting design. Both yield the same answer. (William Dembski and Jonathan Witt, Intelligent Design Uncensored: An Easy-to-Understand Guide to the Controversy, p. 90 (InterVarsity Press, 2010).)
Stephen Meyer - The Scientific Basis for the Intelligent Design Inference - video
Though purely material processes have NEVER shown the ability to produce ANY functional information whatsoever (Abel - Null Hypothesis), Darwinists are adamant that material processes produced more information, of a much higher level of integrated complexity than man can produce, than is contained in a very large library:
“Again, this is characteristic of all animal and plant cells. Each nucleus … contains a digitally coded database larger, in information content, than all 30 volumes of the Encyclopaedia Britannica put together. And this figure is for each cell, not all the cells of a body put together. … When you eat a steak, you are shredding the equivalent of more than 100 billion copies of the Encyclopaedia Britannica.”
(Dawkins R., “The Blind Watchmaker [1986], Penguin: London, 1991, reprint, pp.17-18. Emphasis in original)
When faced with the staggering impossibilities of random material processes ever generating any functional information, Evolutionists will sometimes make the claim that infinite monkeys banging away on infinite typewriters could produce the entire works of Shakespeare. Well someone humorously put that 'hypothesis' to the test:
Monkey Theory Proven Wrong:
The following is a very interesting 'origin of first self-replicating molecule' interview with one of the top chemists in America today:
On The Origin Of Life And God - Henry F. Schaefer, III PhD. - video
Further comments:
Intelligent Design or Evolution? Stuart Pullen
The chemical origin of life is the most vexing problem for naturalistic theories of life's origins. Despite an intense 50 years of research, how life can arise from non-life through naturalistic processes is as much a mystery today as it was fifty years ago, if not more.
Szostak on Abiogenesis: Just Add Water - Cornelius Hunter - Aug. 2009
Excerpt: "While Szostak and Ricardo may sound scientific with their summary of the abiogenesis research, the article is firmly planted in the non scientific evolution genre where evolution is dogmatically mandated to be a fact. Consequently, the bar is lowered dramatically as the silliest of stories pass as legitimate science."
Along these lines of 'silliest of stories' passing for rigorous science in origin of life research:
Grandma Gets Sexy Idea for Origin of Life - August 2010
Excerpt: In the video clip, she suggested that it might be possible some day to get good evidence for her ideas on the origin of life, implying that evidence has not yet been a primary concern.
SETI Ignorance Gets Stronger - December 2010
Excerpt: Steve Benner, an origin of life researcher, “used the analogy of a steel chain with a tinfoil link to illustrate that the arsenate ion said to replace phosphate in the bacterium’s DNA forms bonds that are orders of magnitude less stable.”
Pumice and the Origin of Life - October 17, 2011
Excerpt: However, the reactions required are not simple reactions, and the steps involved, even using a substrate such as pumice, are still too numerous and specific to have happened by chance. It appears highly unlikely that pumice is capable of solving the problem of the origin of life.
"In my opinion, there is no basis in known chemistry for the belief that long sequences of reactions can organize spontaneously -- and every reason to believe that they cannot. The problem of achieving sufficient specificity, whether in consisting of or occurring within a water-based system, aqueous solution, or on the surface of a mineral, is so severe that the chance of closing a cycle of reactions as complex as the reverse citric acid cycle, for example, is negligible." Leslie Orgel, 1998
By the way, there is a one million dollar 'Origin-of-Life' prize being offered:
"The Origin-of-Life Prize" ® (hereafter called "the Prize") will be awarded for proposing a highly plausible mechanism for the spontaneous rise of genetic instructions in nature sufficient to give rise to life.
To reiterate, the problem for the origin of life clearly turns out to be explaining where the information came from in the first place:
Origin of life theorist Bernd-Olaf Kuppers in his book "Information and the Origin of Life".
Book Review - Meyer, Stephen C. Signature in the Cell. New York: HarperCollins, 2009.
Excerpt: As early as the 1960s, those who approached the problem of the origin of life from the standpoint of information theory and combinatorics observed that something was terribly amiss. Even if you grant the most generous assumptions: that every elementary particle in the observable universe is a chemical laboratory randomly splicing amino acids into proteins every Planck time for the entire history of the universe, there is a vanishingly small probability that even a single functionally folded protein of 150 amino acids would have been created. Now of course, elementary particles aren't chemical laboratories, nor does peptide synthesis take place where most of the baryonic mass of the universe resides: in stars or interstellar and intergalactic clouds. If you look at the chemistry, it gets even worse—almost indescribably so: the precursor molecules of many of these macromolecular structures cannot form under the same prebiotic conditions—they must be catalysed by enzymes created only by preexisting living cells, and the reactions required to assemble them into the molecules of biology will only go when mediated by other enzymes, assembled in the cell by precisely specified information in the genome.
So, it comes down to this: Where did that information come from? The simplest known free living organism (although you may quibble about this, given that it's a parasite) has a genome of 582,970 base pairs, or about one megabit (assuming two bits of information for each nucleotide, of which there are four possibilities). Now, if you go back to the universe of elementary particle Planck time chemical labs and work the numbers, you find that in the finite time our universe has existed, you could have produced about 500 bits of structured, functional information by random search. Yet here we have a minimal information string which is (if you understand combinatorics) so indescribably improbable to have originated by chance that adjectives fail.
Of Molecules and (Straw) Men: Stephen Meyer Responds to Dennis Venema's Review of Signature in the Cell - Stephen C. Meyer October 9, 2011
Excerpt of Conclusion: The origin-of-life scenarios that Venema cites as alternatives to intelligent design lack biochemical plausibility and do not account for the ultimate origin of biological information.
"Monkeys Typing Shakespeare" Simulation Illustrates Combinatorial Inflation Problem - October 2011
Excerpt: In other words, Darwinian evolution isn't going to be able to produce fundamentally new protein folds. In fact, it probably wouldn't even be able to produce a single 9-character string of nucleotides in DNA, if that string would not be retained by selection until all 9 nucleotides were in place.
Natural selection cannot explain the origin of life
Paul Davies
“The existence of a genome and the genetic code divides the living organisms from nonliving matter. There is nothing in the physico-chemical world that remotely resembles reactions being determined by a sequence and codes between sequences.”,,,"The belief of mechanist-reductionists that the chemical processes in living matter do not differ in principle from those in dead matter is incorrect. There is no trace of messages determining the results of chemical reactions in inanimate matter. If genetical processes were just complicated biochemistry, the laws of mass action and thermodynamics would govern the placement of amino acids in the protein sequences.”
Hubert P. Yockey: Information Theory, Evolution, and the Origin of Life, page 2 and 5
H.P. Yockey also notes in the Journal of Theoretical Biology:
"Self Organization Origin of Life Scenarios and Information Theory," J. Theoret. Biol.
Norbert Weiner - MIT Mathematician - Father of Cybernetics
Programming of Life - October 2010
Excerpt: "Evolutionary biologists have failed to realize that they work with two more or less incommensurable domains: that of information and that of matter... These two domains will never be brought together in any kind of the sense usually implied by the term ‘reductionism.'... Information doesn't have mass or charge or length in millimeters. Likewise, matter doesn't have bytes... This dearth of shared descriptors makes matter and information two separate domains of existence, which have to be discussed separately, in their own terms."
George Williams - Evolutionary Biologist
The simplest, non-parasitic, bacteria ever found on earth is constructed with over a million individual protein molecules divided into hundreds of different protein types. Protein molecules are made from one dimensional sequences of the 20 different L-amino acids that can be used as building blocks for proteins. (there are hundreds of amino acids found in nature, but only 20 are commonly used in life). These one dimensional sequences of amino acids fold into highly complex three-dimensional structures. The proteins vary in length of sequences of amino acids. The average sequence of a typical protein is about 300 to 400 amino acids long. Yet many crucial proteins are thousands of amino acids long. Titin, which helps in the contraction of striated muscle tissues, consists of 34,350 amino acids, and is the largest known protein. Some proteins are now shown to be absolutely irreplaceable in their specific biological/chemical reactions for the first cell:
Without enzyme, biological reaction essential to life takes 2.3 billion years: UNC study:
"Phosphatase speeds up reactions vital for cell signalling by 10^21 times. Allows essential reactions to take place in a hundreth of a second; without it, it would take a trillion years!" Jonathan Sarfati
Programming of Life - Proteins & Enzymes - video
Book Review: Creating Life in the Lab: How New discoveries in Synthetic Biology Make a Case for the Creator - Rich Deem - January 2011
Excerpt: Despite all this "intelligent design," the artificial enzymes were 10,000 to 1,000,000,000 times less efficient than their biological counterparts. Dr. Rana asks the question, "is it reasonable to think that undirected evolutionary processes routinely accomplished this task?"
Research group develops more efficient artificial enzyme - November 2011
Excerpt: Though the artificial enzyme is still many orders of magnitude less efficient than nature’s way of doing things, it is far more efficient than any other artificial process to date, a milestone that gives researchers hope that they will one day equal nature’s abilities.
When they try to heat solutions to get around these prohibitive reaction times they run into the competing problem of product stability:
Is the Origin of Life in Hot Water? - December 2010
Excerpt: Heating a reaction does nothing for product stability. Cooling a reaction makes the reaction rate problems worse.
To reiterate what was quoted before, amino acids don't even have a tendency to chemically bond with each other, despite over fifty years of experimentation trying to get the amino acids to bond naturally. The odds of just one protein of 150 amino acids, overcoming the barriers presented by chemical bonding and forming a functional protein spontaneously, have been calculated at less than 1 in 10^164 (Meyer, Signature In The Cell). On top of the fact that nature cannot 'naturally' produce proteins, the limit to man's ability to 'intelligently' form a single synthetic amino acid chain (a protein), using all his intelligence and lab equipment, is currently severely constrained to about 70-100 amino acids:
Peptide synthesis
"typically peptides and proteins in the range of 70~100 amino acids are pushing the limits of synthetic accessibility. Synthetic difficulty also is sequence dependent; typically amyloid peptides and proteins are difficult to make."(To make larger proteins requires “non-natural” peptide bonds - (Chemical Synthesis Of Proteins - 2005))
On top of that, Doug Axe has shown that only 1 in 10^77 of any proteins 'randomly' formed would perform any beneficial biological function. The rest of the sequences would be totally useless for any meaningful function in the cell. Even a child knows you cannot put any piece of a puzzle anywhere in a puzzle. You must have the required piece in the required place.
Doug Axe Knows His Work Better Than Steve Matheson
Excerpt: Regardless of how the trials are performed, the answer ends up being at least half of the total number of password possibilities, which is the staggering figure of 10^77 (written out as 100, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000). Armed with this calculation, you should be very confident in your skepticism, because a 1 in 10^77 chance of success is, for all practical purposes, no chance of success. My experimentally based estimate of the rarity of functional proteins produced that same figure, making these likewise apparently beyond the reach of chance.
Evolution vs. Functional Proteins - Doug Axe - Video
Dennis Venema, a theistic evolutionist, had tried to challenge Doug Axe's work on the extreme rarity of functional proteins, and here is what one of the very papers said, that Venema tried to use to supposedly refute Axe:
Responding to Venema - Casey Luskin - October 2011
Excerpt: However, these experiments do not really model the evolution that occurs through gradual, step-by-step changes, with all intermediates being fully foldable proteins (Blanco et al., 1999). To create such an evolutionarily relevant path from all-a to all-b domains would be the next challenge for protein designers.
Axe Diagram for finding a functional protein domain out of all sequence space:
The y-axis can be seen as representing enzyme activity, and the x-axis represents all possible amino acid sequences. Enzymes sit at the peak of their fitness landscapes (Point A). There are extremely high levels of complex and specified information in proteins--informational sequences which point to intelligent design.
And how do Darwinists deal with the astronomical improbabilities that are stacked against them for explaining the novel origination of even just one required protein by 'natural' means? At least as far as the public is concerned??? Well by deception of course!
Back to School Part VIII
Excerpt: Amazingly evolutionists think hemoglobin’s special amino acid sequence encoding for this machine is no different than any random list, such (as) a list of birthdays. To be sensible Johnson’s and Losos’ analogy would need the list of birthdays to provide something fantastic, such as the answers to the biology class final exam.
Proteins Did Not Evolve Even According to the Evolutionist’s Own Calculations but so What, Evolution is a Fact - Cornelius Hunter - July 2011
Excerpt: For instance, in one case evolutionists concluded that the number of evolutionary experiments required to evolve their protein (actually it was to evolve only part of a protein and only part of its function) is 10^70 (a one with 70 zeros following it). Yet elsewhere evolutionists computed that the maximum number of evolutionary experiments possible is only 10^43. Even here, giving the evolutionists every advantage, evolution falls short by 27 orders of magnitude.
Kirk Durston has done work on defining how much functional information resides in proteins:
Does God Exist? - Argument From Molecular Biology - Proteins - Kirk Durston - short video
Intelligent Design - Kirk Durston - Lecture video
Measuring the functional sequence complexity of proteins - 2007: Kirk K Durston, David KY Chiu, David L Abel, Jack T Trevors
In this paper, we provide a method to measure functional sequence complexity (in proteins).
Conclusion: This method successfully distinguishes between order, randomness, and biological function (for proteins).
Intelligent Design: Required by Biological Life? K.D. Kalinsky - Pg. 10 - 11
Case Three: an average 300 amino acid protein:
Excerpt: It is reasonable, therefore, to estimate the functional information required for the average 300 amino acid protein to be around 700 bits of information. I(Ex) > Inat and ID (Intelligent Design) is 10^155 times more probable than mindless natural processes to produce the average protein.
"a very rough but conservative result is that if all the sequences that define a particular (protein) structure or fold-set where gathered into an area 1 square meter in area, the next island would be tens of millions of light years away."
Kirk Durston
Axe's work substantiates, and extends, previous work that was done at Massachusetts Institute of Technology (MIT):
Experimental Support for Regarding Functional Classes of Proteins to be Highly Isolated from Each Other: - Michael Behe
"From actual experimental results it can easily be calculated that the odds of finding a folded protein are about 1 in 10 to the 65 power (Sauer, MIT).,,, The odds of finding a marked grain of sand in the Sahara Desert three times in a row are about the same as finding one new functional protein structure. Rather than accept the result as a lucky coincidence, most people would be certain that the game had been fixed.”
Michael J. Behe, The Weekly Standard, June 7, 1999
Even the low end estimate, for functional proteins given by evolutionists (1 in 10^12), is very rare:
Fancy footwork in the sequence space shuffle - 2006
"Estimates for the density of functional proteins in sequence space range anywhere from 1 in 10^12 to 1 in 10^77. No matter how you slice it, proteins are rare. Useful ones are even more rare."
It is interesting to note that the 1 in 10^12 (trillion) estimate for functional proteins (Szostak), though still very rare and of insurmountable difficulty for a materialist to use in any evolutionary scenario, was arrived at by an in-vitro (out of living organism) binding of ANY random proteins to the 'universal' ATP energy molecule.
How Proteins Evolved - Cornelius Hunter - December 2010
Excerpt: Comparing ATP binding with the incredible feats of hemoglobin, for example, is like comparing a tricycle with a jet airplane. And even the one in 10^12 shot, though it pales in comparison to the odds of constructing a more useful protein machine, is no small barrier. If that is what is required to even achieve simple ATP binding, then evolution would need to be incessantly running unsuccessful trials. The machinery to construct, use and benefit from a
potential protein product would have to be in place, while failure after failure results. Evolution would make Thomas Edison appear lazy, running millions of trials after millions of trials before finding even the tiniest of function.
The entire episode of Szostak’s failed attempt to establish the legitimacy of the 1 in 10^12 functional protein number from a randomly generated library of proteins can be read here::
This following paper was the paper that put the final nail in the coffin for Szostak's work:
A Man-Made ATP-Binding Protein Evolved Independent of Nature Causes Abnormal Growth in Bacterial Cells
Excerpt: "Recent advances in de novo protein evolution have made it possible to create synthetic proteins from unbiased libraries that fold into stable tertiary structures with predefined functions. However, it is not known whether such proteins will be functional when expressed inside living cells or how a host organism would respond to an encounter with a non-biological protein. Here, we examine the physiology and morphology of Escherichia coli cells engineered to express a synthetic ATP-binding protein evolved entirely from non-biological origins. We show that this man-made protein disrupts the normal energetic balance of the cell by altering the levels of intracellular ATP. This disruption cascades into a series of events that ultimately limit reproductive competency by inhibiting cell division."
Here is a very interesting comment by Jack Szostak himself:
The Origin of Life on Earth
Dr. Jack Szostak - Nobel Laureate and leading Origin of Life researcher who, despite the evidence he sees first hand, still believes 'life' simply 'emerged' from molecules
Further defence of Dr. Axe's work on the rarity of proteins:
Axe (2004) And The Evolution Of Protein Folds - March 2011
On top of the fact that Origin of Life researcher Jack Szostak, and others, failed to generate any biologically relevant proteins, from a library of trillions of randomly generated proteins, proteins have now been shown to have a ‘Cruise Control’ mechanism, which works to ‘self-correct’ the integrity of the protein structure from any random mutations imposed on them.
Proteins with cruise control provide new perspective:
Cruise Control permeating the whole of the protein structure??? This is an absolutely fascinating discovery. The equations of calculus involved in achieving even a simple process control loop, such as a dynamic cruise control loop, are very complex. In fact it seems readily apparent to me that highly advanced mathematical information must reside 'transcendentally' along the entirety of the protein structure, in order to achieve such control of the overall protein structure. This fact gives us clear evidence that there is far more functional information residing in proteins than meets the eye. Moreover this ‘oneness’ of cruise control, within the protein structure, can only be achieved through quantum computation/entanglement principles, and is inexplicable to the reductive materialistic approach of neo-Darwinism! For a sample of the equations that must be dealt with, to 'engineer' even a simple process control loop like cruise control for a single protein, please see this following site:
PID controller
It is in realizing the staggering level of engineering that must be dealt with to achieve ‘cruise control’ for each individual protein, along the entirety of the protein structure, that it becomes apparent even Axe’s 1 in 10^77 estimate for rarity of finding specific functional proteins within sequence space is far, far too generous. In fact probabilities over various ‘specific’ configurations of material particles simply do not even apply, at all, since the 'cause' of the non-local quantum information does not reside within the material particles in the first place (i.e. falsification of local realism; Alain Aspect). Here is corroborating evidence that 'protein specific' quantum information/entanglement resides in functional proteins:
Quantum states in proteins and protein assemblies:
In fact since quantum entanglement falsified reductive materialism/local realism (Alain Aspect) then finding quantum entanglement/information to be ‘protein specific’ is absolutely shattering to any hope that materialists had in whatever slim probabilities there were, since a ‘transcendent’ cause must be supplied which is specific to each unique protein structure. Materialism is simply at a complete loss to supply such a 'non-local' transcendent cause, whereas Theism has always postulated a transcendent cause for life!
Though the authors of the 'cruise control' paper tried to put a evolution friendly spin on the 'cruise control' evidence, for finding a highly advanced 'Process Control Loop' at such a base molecular level, before natural selection even has a chance to select for any morphological novelty of a protein, this limit to variability is very much to be expected as a Intelligent Design/Genetic Entropy feature, and is in fact a very constraining thing to the amount of variation we should reasonably expect from any 'kind' of species in the first place.
Here are some more articles highlighting the extreme rarity of functional proteins:
The Case Against a Darwinian Origin of Protein Folds - Douglas Axe, Jay Richards - audio
The following site offers a short summary of the 'Darwinian shortcuts' that failed to overcome Axe's finding for the rarity of protein folds:
Shortcuts to new protein folds - October 2010
Excerpt: Axe concludes that all of these putative shortcuts are dead ends. The Darwinian search mechanism is not capable of finding new protein folds by random sampling and all the shortcuts to new folds are dead ends.
Here are articles that clearly illustrate that the protein evidence, no matter how crushing to Darwinism, is always crammed into the Darwinian framework by Evolutionists:
The Hierarchy of Evolutionary Apologetics: Protein Evolution Case Study - Cornelius Hunter - January 2011
Here is a critique of the failed attempt to evolve a 'fit' protein to replace a protein in a virus which had a gene knocked out:
New Genes: Putting the Theory Before the Evidence - January 2011
Excerpt: What they discovered was that the evolutionary process could produce only tiny improvements to the virus’ ability to infect a host. Their evolved sequences showed no similarity to the native sequence which is supposed to have evolved. And the best virus they could produce, even with the vast majority of the virus already intact, was several orders of magnitude weaker than nature’s virus.
The theory, even by the evolutionist’s own reckoning, is unworkable. Evolution fails by a degree that is incomparable in science. Scientific theories often go wrong, but not by 27 orders of magnitude. And that is conservative.
Here is a fairly good defense of the rarity of protein folds, from a blogger called gpuccio, from the best Darwinian objections that could be mustered against it:
Signature In The Cell - Review
Our most advanced supercomputers pale in comparison to this assumption, of a universe full of chemical laboratories, that has been generously granted to evolutionists for 'randomly' finding a functional protein in sequence space:
"SimCell," anyone?
"Unfortunately, Schulten's team won't be able to observe virtual protein synthesis in action. Even the fastest supercomputers can only depict such atomic complexity for a few dozen nanoseconds." - cool cellular animation videos on the site
Instead of us just looking at the probability of finding a single 'simple' functional protein molecule by chance, (a solar system full of blind men solving the Rubik’s Cube simultaneously (Hoyle), let’s also look at the complexity which goes into crafting the shape of just one protein molecule. Complexity will give us a better indication if a protein molecule is indeed the handi-work of an infinitely powerful Creator.
Francis Collins on Making Life
Excerpt: 'We are so woefully ignorant about how biology really works. We still don't understand how a particular DNA sequence—when we just stare at it—codes for a protein that has a particular function. We can't even figure out how that protein would fold—into what kind of three-dimensional shape. And I would defy anybody who is going to tell me that they could, from first principles, predict not only the shape of the protein but also what it does.' - Francis Collins - Former Director of the Human Genome Project
Creating Life in the Lab: How New Discoveries in Synthetic Biology Make a Case for the Creator - Fazale Rana
Excerpt of Review: ‘Another interesting section of Creating Life in the Lab is one on artificial enzymes. Biological enzymes catalyze chemical reactions, often increasing the spontaneous reaction rate by a billion times or more. Scientists have set out to produce artificial enzymes that catalyze chemical reactions not used in biological organisms. Comparing the structure of biological enzymes, scientists used super-computers to calculate the sequences of amino acids in their enzymes that might catalyze the reaction they were interested in. After testing dozens of candidates,, the best ones were chosen and subjected to “in vitro evolution,” which increased the reaction rate up to 200-fold. Despite all this “intelligent design,” the artificial enzymes were 10,000 to 1,000,000,000 times less efficient than their biological counterparts. Dr. Rana asks the question, “is it reasonable to think that undirected evolutionary processes routinely
accomplished this task?”
In the year 2000 IBM announced the development of a new super-computer, called Blue Gene, which was 500 times faster than any supercomputer built up until that time. It took 4-5 years to build. Blue Gene stands about six feet high, and occupies a floor space of 40 feet by 40 feet. It cost $100 million to build. It was built specifically to better enable computer simulations of molecular biology. The computer performs one quadrillion (one million billion) computations per second. Despite its speed, it was estimated to take one entire year for it to analyze the mechanism by which JUST ONE “simple” protein will fold onto itself from its one-dimensional starting point to its final three-dimensional shape.
"Blue Gene's final product, due in four or five years, will be able to "fold" a protein made of 300 amino acids, but that job will take an entire year of full-time computing." Paul Horn, senior vice president of IBM research, September 21, 2000
Networking a few hundred thousand computers together has reduced the time to a few weeks for simulating the folding of a single protein molecule:
A Few Hundred Thousand Computers vs. A Single Protein Molecule - video
Interestingly, there are some (perhaps many?) complex protein folding problems found by scientists that have still refused to be solved by the brute number crunching power of super-computers, but, 'surprisingly', these problems have been solved by the addition of 'human intuition';
So Much For Random Searches - PaV - September 2011
Excerpt: There’s an article in Discover Magazine about how gamers have been able to solve a problem in HIV research in only three weeks (!) that had remained outside of researcher’s powerful computer tools for years. This, until now, unsolvable problem gets solved because: "They used a wide range of strategies, they could pick the best places to begin, and they were better at long-term planning. Human intuition trumped mechanical number-crunching." Here’s what intelligent agents were able to do within the search space of possible solutions:,,, "until now, scientists have only been able to discern the structure of the two halves together. They have spent more than ten years trying to solve structure of a single isolated half, without any success. The Foldit players had no such problems. They came up with several answers, one of which was almost close to perfect. In a few days, Khatib had refined their solution to deduce the protein’s final structure, and he has already spotted features that could make attractive targets for new drugs." Thus,,
Random search by powerful computer: 10 years and No Success
Intelligent Agents guiding powerful computing: 3 weeks and Success.
As well, despite some very optimistic claims, it seems future 'quantum computers' will not fair much better in finding functional proteins in sequence space than even a idealized 'material' supercomputer of today can do:
The Limits of Quantum Computers – March 2008
Excerpt: "Quantum computers would be exceptionally fast at a few specific tasks, but it appears that for most problems they would outclass today’s computers only modestly. This realization may lead to a new fundamental physical principle"
The Limits of Quantum Computers - Scott Aaronson - 2007
Excerpt: In the popular imagination, quantum computers would be almost magical devices, able to “solve impossible problems in an instant” by trying exponentially many solutions in parallel. In this talk, I’ll describe four results in quantum computing theory that directly challenge this view.,,, Second I’ll show that in the “black box” or “oracle” model that we know how to analyze, quantum computers could not solve NP-complete problems in polynomial time, even with the help of nonuniform “quantum advice states”,,,
Here is Scott Aaronson's blog in which refutes recent claims that P=NP (Of note: if P were found to equal NP, then a million dollar prize would be awarded to the mathematician who provided the proof that NP problems could be solved in polynomial time):
Excerpt: Quantum computers are not known to be able to solve NP-complete problems in polynomial time.
Protein folding is found to be a 'intractable NP-complete problem' by several different methods. Thus protein folding will not be able to take advantage of any advances in speed that quantum computation may offer to any other problems of computation that may be solved in polynomial time:
Combinatorial Algorithms for Protein Folding in Lattice
Models: A Survey of Mathematical Results – 2009
Excerpt: Protein Folding: Computational Complexity
NP-completeness: from 10^300 to 2 Amino Acid Types
NP-completeness: Protein Folding in Ad-Hoc Models
NP-completeness: Protein Folding in the HP-Model
Another factor severely complicating man's ability to properly mimic protein folding is that, much contrary to evolutionary thought, many proteins fold differently in different 'molecular' situations:
The Gene Myth, Part II - August 2010
As a sidelight to the complexity found for folding any relatively short amino acid sequence into a 3-D protein, the complexity of computing the actions of even a simple atom, in detail, quickly exceeds the capacity of our most advanced supercomputers of today:
Delayed time zero in photoemission: New record in time measurement accuracy - June 2010
Excerpt: Although they could confirm the effect qualitatively using complicated computations, they came up with a time offset of only five attoseconds. The cause of this discrepancy may lie in the complexity of the neon atom, which consists, in addition to the nucleus, of ten electrons. "The computational effort required to model such a many-electron system exceeds the computational capacity of today's supercomputers," explains Yakovlev.
Also of interest to the extreme difficultly man has in computing the folding of a protein within any reasonable amount of time, it seems water itself, (H2O), was 'designed' with protein folding in mind:
Protein Folding: One Picture Per Millisecond Illuminates The Process - 2008
Excerpt: The RUB-chemists initiated the folding process and then monitored the course of events. It turned out that within less than ten milliseconds, the motions of the water network were altered as well as the protein itself being restructured. “These two processes practically take place simultaneously“, Prof. Havenith-Newen states, “they are strongly correlated.“ These observations support the yet controversial suggestion that water plays a fundamental role in protein folding, and thus in protein function, and does not stay passive.
Water Is 'Designer Fluid' That Helps Proteins Change Shape - 2008
There are overlapping 'chaperone' systems insuring that proteins fold into the precisely correct shape:
Proteins Fold Who Knows How - July 2010
Excerpt: New work published in Cell shows that this “chaperone” device speeds up the proper folding of the polypeptide when it otherwise might get stuck on a “kinetic trap.” A German team likened the assistance to narrowing the entropic funnel. “The capacity to rescue proteins from such folding traps may explain the uniquely essential role of chaperonin cages within the cellular chaperone network,” they said. GroEL+GroES therefore “rescues” protein that otherwise might misfold and cause damage to the cell.,,, “In contrast to all other components of this chaperone network, the chaperonin, GroEL, and its cofactor, GroES, are uniquely essential, forming a specialized nano-compartment for single protein molecules to fold in isolation.”
Nature Review Article Yields Unpleasant Data For Darwinism - August 2011
In real life, the protein folds into its final shape in a fraction of a second! The Blue Gene computer would have to operate at least 33 million times faster to accomplish what the protein does in a fraction of a second. This is the complexity found for folding JUST ONE relatively short 'simple' existing protein molecule. Yet, evolution must account for the origination, and organization, of far, far, more than just one relatively short specifically sequenced protein molecule:
A New Guide to Exploring the Protein Universe
"It is estimated, based on the total number of known life forms on Earth, that there are some 50 billion different types of proteins in existence today, and it is possible that the protein universe could hold many trillions more."
Lynn Yarris - 2005
Shoot no one really has a firm clue as to exactly how many different proteins reside in a single cell much less all of life;
Go to the Cell, Thou Sluggard - March 2011
Excerpt: Calculations indicate that each human cell contains roughly a billion protein molecules.,,, These proteins have a kind of address label, a signal sequence, that specifies what place inside or outside the cell they need to be transported to. This transport must function flawlessly if order is to be maintained in the cell,
Even the most generous of protein classifications, 'folds and superfamilies' yields several thousand completely unique proteins:
SCOP (Structural Classification of Proteins) site - gpuccio
Excerpt: However we group the proteome, we have at present at least 1000 different fundamental folds, 2000 “a little less fundamentally different” folds (the superfamilies), and 6000 totally unrelated groups of primary sequences.
What makes matters much worse for the materialist is that he will try to assert that existing functional proteins of one structure can easily mutate into other functional proteins, of a completely different structure or function, by pure chance. Yet once again the empirical evidence betrays the materialist. The proteins that are found in life are shown to be highly constrained in their ability to evolve into other proteins:
Following the Evidence Where It Leads: Observations on Dembski's Exchange with Shapiro - Ann Gauger - January 2012
Excerpt: So far, our research indicates that genuine innovation, a change to a function not already pre-existent in a protein, is beyond the reach of natural processes, even when the starting proteins are very similar in structure.
Dollo’s law, the symmetry of time, and the edge of evolution - Michael Behe - Oct 2009
Excerpt: Nature has recently published an interesting paper which places severe limits on Darwinian evolution.,,,
A time-symmetric Dollo’s law turns the notion of “pre-adaptation” on its head. The law instead predicts something like “pre-sequestration”, where proteins that are currently being used for one complex purpose are very unlikely to be available for either reversion to past functions or future alternative uses.
Severe Limits to Darwinian Evolution: - Michael Behe - Oct. 2009
Excerpt: The immediate, obvious implication is that the 2009 results render problematic even pretty small changes in structure/function for all proteins — not just the ones he worked on.,,,Thanks to Thornton’s impressive work, we can now see that the limits to Darwinian evolution are more severe than even I had supposed.
Wheel of Fortune: New Work by Thornton's Group Supports Time-Asymmetric Dollo's Law - Michael Behe - October 5, 2011
Excerpt: Darwinian selection will fit a protein to its current task as tightly as it can. In the process, it makes it extremely difficult to adapt to a new task or revert to an old task by random mutation plus selection.
Stability effects of mutations and protein evolvability. October 2009
Excerpt: The accepted paradigm that proteins can tolerate nearly any amino acid substitution has been replaced by the view that the deleterious effects of mutations, and especially their tendency to undermine the thermodynamic and kinetic stability of protein, is a major constraint on protein evolvability,,
The Evolutionary Accessibility of New Enzyme Functions: A Case Study from the Biotin Pathway - Ann K. Gauger and Douglas D. Axe - April 2011
Excerpt: We infer from the mutants examined that successful functional conversion would in this case require seven or more nucleotide substitutions. But evolutionary innovations requiring that many changes would be extraordinarily rare, becoming probable only on timescales much longer than the age of life on earth.
When Theory and Experiment Collide — April 16th, 2011 by Douglas Axe
Excerpt: Based on our experimental observations and on calculations we made using a published population model [3], we estimated that Darwin’s mechanism would need a truly staggering amount of time—a trillion trillion years or more—to accomplish the seemingly subtle change in enzyme function that we studied.
Corticosteroid Receptors in Vertebrates: Luck or Design? - Ann Gauger - October 11, 2011
Excerpt: if merely changing binding preferences is hard, even when you start with the right ancestral form, then converting an enzyme to a new function is completely beyond the reach of unguided evolution, no matter where you start.
“Mutations are rare phenomena, and a simultaneous change of even two amino acid residues in one protein is totally unlikely. One could think, for instance, that by constantly changing amino acids one by one, it will eventually be possible to change the entire sequence substantially… These minor changes, however, are bound to eventually result in a situation in which the enzyme has ceased to perform its previous function but has not yet begun its ‘new duties’. It is at this point it will be destroyed - along with the organism carrying it.” Maxim D. Frank-Kamenetski, Unraveling DNA, 1997, p. 72. (Professor at Brown U. Center for Advanced Biotechnology and Biomedical Engineering)
"A problem with the evolution of proteins having new shapes is that proteins are highly constrained, and producing a functional protein from a functional protein having a significantly different shape would typically require many mutations of the gene producing the protein. All the proteins produced during this transition would not be functional, that is, they would not be beneficial to the organism, or possibly they would still have their original function but not confer any advantage to the organism. It turns out that this scenario has severe mathematical problems that call the theory of evolution into question. Unless these problems can be overcome, the theory of evolution is in trouble."
Problems in Protein Evolution:
Extreme functional sensitivity to conservative amino acid changes on enzyme exteriors - Doug Axe
Darwin's God: Post Synaptic Proteins Intolerant of Change - December 2010
Excerpt: Not only is there scant evidence of intermediate designs leading to the known proteins, but the evidence we do have is that these proteins do not tolerate change.
As well, the 'errors/mutations' that are found to 'naturally' occur in protein sequences are found to be 'designed errors':
Cells Defend Themselves from Viruses, Bacteria With Armor of Protein Errors - Nov. 2009
There are even 'protein police':
GATA-1: A Protein That Regulates Proteins - Feb. 2010
Heat shock proteins:
Excerpt: They play an important role in protein-protein interactions such as folding and assisting in the establishment of proper protein conformation (shape) and prevention of unwanted protein aggregation.
This following paper, and audio interview, shows that there is a severe 'fitness cost' for cells to carry 'transitional' proteins that have not achieved full functionality yet:
Reductive Evolution Can Prevent Populations from Taking Simple Adaptive Paths to High Fitness - May 2010
Excerpt: Despite the theoretical existence of this short adaptive path to high fitness, multiple independent lines grown in tryptophan-limiting liquid culture failed to take it. Instead, cells consistently acquired mutations that reduced expression of the double-mutant trpA gene. Our results show that competition between reductive and constructive paths may significantly decrease the likelihood that a particular constructive path will be taken.
Testing Evolution in the Lab With Biologic Institute's Ann Gauger - audio
In fact the Ribosome, which makes the myriad of different, yet specific, types of proteins found in life, is found to be severely intolerant to any random mutations occurring to proteins.
The Ribosome: Perfectionist Protein-maker Trashes Errors
Excerpt: The enzyme machine that translates a cell's DNA code into the proteins of life is nothing if not an editorial perfectionist...the ribosome exerts far tighter quality control than anyone ever suspected over its precious protein products... To their further surprise, the ribosome lets go of error-laden proteins 10,000 times faster than it would normally release error-free proteins, a rate of destruction that Green says is "shocking" and reveals just how much of a stickler the ribosome is about high-fidelity protein synthesis.
And exactly how is the evolution new life forms suppose to 'randomly' occur if it is prevented from 'randomly' occurring to the proteins in the first place?
As well, the 'protein factory' of the ribosome, which is the only known machine in the universe capable of making proteins of any significant length, is far more complicated than first thought:
Honors to Researchers Who Probed Atomic Structure of Ribosomes - Robert F. Service
Excerpt: "The ribosome’s dance, however, is more like a grand ballet, with dozens of ribosomal proteins and subunits pirouetting with every step while other key biomolecules leap in, carrying other dancers needed to complete the act.”
Moreover, scientists are finding many protein complexes are extremely intolerant to any random mutations:
Warning: Do NOT Mutate This Protein Complex: - June 2009
Excerpt: In each cell of your body there is a complex of 8 or more proteins bound together called the BBSome. This protein complex, discovered in 2007, should not be disturbed. Here’s what happens when it mutates: “A homozygous mutation in any BBSome subunit (except BBIP10) will make you blind, obese and deaf, will obliterate your sense of smell, will make you grow extra digits and toes and cause your kidneys to fail.”... the BBSome is “highly conserved” (i.e., unevolved) in all ciliated organisms from single-celled green algae to humans,..."
Which begs the question, "If this complex of 8 proteins which is found throughout life, is severely intolerant to any mutations happening to it now, how in the world did it come to be in the first photosynthetic life in the first place?
Even if evolution somehow managed to overcome these impossible hurdles for generating novel proteins by totally natural means, evolution would still face the monumental hurdles of generating complimentary protein/protein binding sites, in which the novel proteins would actually interact with each other in order to accomplish the specific tasks needed in a cell (it is estimated that there are least 10,000 different types of protein-protein binding sites in a 'simple' cell; Behe: Edge Of Evolution).
What does the recent hard evidence say about novel protein-protein binding site generation?
Protein-Protein Interactions (PPI) Fine-Tune the Case for Intelligent Design - Article with video - April 2011
Excerpt: The most recent work by the Harvard scientists indicates that the concentration of PPI-participating proteins in the cell is also carefully designed.
"The likelihood of developing two binding sites in a protein complex would be the square of the probability of developing one: a double CCC (chloroquine complexity cluster), 10^20 times 10^20, which is 10^40. There have likely been fewer than 10^40 cells in the entire world in the past 4 billion years, so the odds are against a single event of this variety (just 2 binding sites being generated by accident) in the history of life. It is biologically unreasonable."
Michael J. Behe PhD. (from page 146 of his book "Edge of Evolution")
The Sheer Lack Of Evidence For Macro Evolution - William Lane Craig - video
Nature Paper,, Finds Darwinian Processes Lacking - Michael Behe - Oct. 2009
Excerpt: Now, thanks to the work of Bridgham et al (2009), even such apparently minor switches in structure and function (of a protein to its supposed ancestral form) are shown to be quite problematic. It seems Darwinian processes can’t manage to do even as much as I had thought. (which was 1 in 10^40 for just 2 binding sites)
So, how many protein-protein binding sites are found in life?
Dr. Behe, on the important Table 7.1 on page 143 of Edge Of Evolution, finds that a typical cell might have some 10,000 protein-binding sites. Whereas a conservative estimate for protein-protein binding sites in a multicellular creature is,,,
Largest-Ever Map of Plant Protein Interactions - July 2011
So taking into account that they only covered 2%, of the full protein-protein "interactome", then that gives us a number, for different protein-protein interactions, of 310,000. Thus, from my very rough 'back of the envelope' calculations, we find that this is at least 30 times higher than Dr. Behe's estimate of 10,000 different protein-protein binding sites for a typical single cell (Page 143; Edge of Evolution; Behe). Therefore, at least at first glance from my rough calculations, it certainly seems to be a gargantuan step that evolution must somehow make, by purely unguided processes, to go from a single cell to a multi-cellular creature. To illustrate just how difficult of a step it is, the order of difficulty, of developing a single protein-protein binding site, is put at 10^20 replications of the malarial parasite by Dr. Behe. This number comes from direct empirical observation.
Dr. Behe's empirical research agrees with what is found if scientists try to purposely design a protein-protein binding site:
Viral-Binding Protein Design Makes the Case for Intelligent Design Sick! (as in cool) - Fazale Rana - June 2011
Moreover, there is, 'surprisingly', found to be 'rather low' conservation of Domain-Domain Interactions occurring in Protein-Protein interactions:
Excerpt: Knowledge of specific domain-domain interactions (DDIs) is essential to understand the functional significance of protein interaction networks. Despite the availability of an enormous amount of data on protein-protein interactions (PPIs), very little is known about specific DDIs occurring in them.,,, Our results show that only 23% of these DDIs are conserved in at least two species and only 3.8% in at least 4 species, indicating a rather low conservation across species.,,,
As well, RNA, which codes for the proteins at the ribosome, is found to be intolerant to 'random mutations':
Molecular Typesetting: How Errors Are Corrected (In RNA) While Proteins Are Being Built
Excerpt: Ensuring that proteins are built correctly is essential to the proper functioning of our bodies,,,“Scientists have been puzzled as to how this process makes so few mistakes.,,,“In fact, there is more than one identified mechanism for ensuring that genetic code is copied correctly."
The cell has elaborate ways to safeguard its genetic library by repairing DNA, but now scientists are finding the same enzymes can also repair RNA. RNA methylation damage can be repaired by the same AlkB enzyme that repairs DNA. This is surprising because RNA and proteins were considered more expendable than DNA. (Creation-Evolution Headlines - Feb. 2003)
RNA: Protein Regulators Are Themselves Regulated
“What was formerly conceived of as a direct, straightforward pathway is gradually turning out to be a dense network of regulatory mechanisms: genes are not simply translated into proteins via mRNA (messenger RNA). MicroRNAs control the translation of mRNAs (messenger RNAs) into proteins, and proteins in turn regulate the microRNAs at various levels.”
Researchers Uncover New Kink In Gene Control: - Oct. 2009
Excerpt: a collaborative effort,, has uncovered more than 300 proteins that appear to control genes, a newly discovered function for all of these proteins previously known to play other roles in cells.,,,The team suspects that many more proteins encoded by the human genome might also be moonlighting to control genes,,,
On top of these monumental problems, for just finding any one specific functional protein, or for just finding any protein/protein binding sites, or for accounting for multiple layers of error correction that prevent evolution from happening to proteins in the first place, a materialist must still account for how the DNA code came about in any origin of life scenario he puts forth. These following videos and articles highlight the 'DNA problem':
Programming of Life - DNA - video
A New Design Argument - Charles Thaxton
Excerpt: "There is an identity of structure between DNA (and protein) and written linguistic messages. Since we know by experience that intelligence produces written messages, and no other cause is known, the implication, according to the abductive method, is that intelligent cause produced DNA and protein. The significance of this result lies in the security of it, for it is much stronger than if the structures were merely similar. We are not dealing with anything like a superficial resemblance between DNA and a written text. We are not saying DNA is like a message. Rather, DNA is a message. True design thus returns to biology."
Information Theory, Evolution, and the Origin of Life - Hubert P. Yockey, 2005
The DNA Code - Solid Scientific Proof Of Intelligent Design - Perry Marshall - video
Codes and Axioms are always the result of mental intention, not material processes
A.E. Wilder Smith, DNA, Cactus, and Von Neumann Machines - John MacArthur - audio
Information - The Utter Demise Of Darwinian Evolution - video
"A code system is always the result of a mental process (it requires an intelligent origin or inventor). It should be emphasized that matter as such is unable to generate any code. All experiences indicate that a thinking being voluntarily exercising his own free will, cognition, and creativity, is required. ,,,there is no known law of nature and no known sequence of events which can cause information to originate by itself in matter. Werner Gitt 1997 In The Beginning Was Information pp. 64-67, 79, 107."
(The retired Dr Gitt was a director and professor at the German Federal Institute of Physics and Technology (Physikalisch-Technische Bundesanstalt, Braunschweig), the Head of the Department of Information Technology.)
The Digital Code of DNA - 2003 - Leroy Hood & David Galas
The Digital Code of DNA and the Unimagined Complexity of a ‘Simple’ Bacteria – Rabbi Moshe Averick – video (Notes in Description)
Upright Biped Replies to Dr. Moran on “Information” - December 2011
Excerpt: 'a fair reading suggests that the information transfer in the genome shouldn’t be expected to adhere to the qualities of other forms of information transfer. But as it turns out, it faithfully follows the same physical dynamics as any other form of recorded information.'
Even the leading "New Atheist" in the world, Richard Dawkins, agrees that DNA functions exactly like digital code:
Richard Dawkins Opens Mouth; Inserts Foot - video
i.e. DNA functions exactly as a 'devised code':
Biophysicist Hubert Yockey determined that natural selection would have to explore 1.40 x 10^70 different genetic codes to discover the optimal universal genetic code that is found in nature. The maximum amount of time available for it to originate is 6.3 x 10^15 seconds. Natural selection would have to evaluate roughly 10^55 codes per second to find the one that is optimal. Put simply, natural selection lacks the time necessary to find the optimal universal genetic code we find in nature. (Fazale Rana, -The Cell's Design - 2008 - page 177)
Ode to the Code - Brian Hayes
Evolutionists have long argued that the genetic code is universal for all lifeforms, and maintain that that fact is strong evidence for evolution from a universal common anscestor, yet it appears they were wrong once again:
No Darwin Tree of Life (Craig Venter vs. Richard Dawkins)- video
Venter vs. Dawkins on the Tree of Life - and Another Dawkins Whopper - March 2011
Excerpt:,,, But first, let's look at the reason Dawkins gives for why the code must be universal:
"The reason is interesting. Any mutation in the genetic code itself (as opposed to mutations in the genes that it encodes) would have an instantly catastrophic effect, not just in one place but throughout the whole organism. If any word in the 64-word dictionary changed its meaning, so that it came to specify a different amino acid, just about every protein in the body would instantaneously change, probably in many places along its length. Unlike an ordinary mutation...this would spell disaster." (2009, p. 409-10)
OK. Keep Dawkins' claim of universality in mind, along with his argument for why the code must be universal, and then go here (linked site listing 23 variants of the genetic code).
Simple counting question: does "one or two" equal 23? That's the number of known variant genetic codes compiled by the National Center for Biotechnology Information. By any measure, Dawkins is off by an order of magnitude, times a factor of two.
As well there was a ‘optimality’ found for the 20 amino acid set used in the 'standard' Genetic code when the set was compared to 1 million randomly generated alternative amino acid sets;
Does Life Use a Non-Random Set of Amino Acids? - Jonathan M. - April 2011
Excerpt: The authors compared the coverage of the standard alphabet of 20 amino acids for size, charge, and hydrophobicity with equivalent values calculated for a sample of 1 million alternative sets (each also comprising 20 members) drawn randomly from the pool of 50 plausible prebiotic candidates. The results? The authors noted that: "…the standard alphabet exhibits better coverage (i.e., greater breadth and greater evenness) than any random set for each of size, charge, and hydrophobicity, and for all combinations thereof."
Extreme genetic code optimality from a molecular dynamics calculation of amino acid polar requirement – 2009
Excerpt: A molecular dynamics calculation of the amino acid polar requirement is used to score the canonical genetic code. Monte Carlo simulation shows that this computational polar requirement has been optimized by the canonical genetic code, an order of magnitude more than any previously known measure, effectively ruling out a vertical evolution dynamics.
The Finely Tuned Genetic Code - Jonathan M. - November 2011
Excerpt: Summarizing the state of the art in the study of the code evolution, we cannot escape considerable skepticism. It seems that the two-pronged fundamental question: "why is the genetic code the way it is and how did it come to be?," that was asked over 50 years ago, at the dawn of molecular biology, might remain pertinent even in another 50 years. Our consolation is that we cannot think of a more fundamental problem in biology. - Eugene Koonin and Artem Novozhilov
Moreover the first DNA code of life on earth had to be at least as complex as the current DNA code found in life:
Shannon Information - Channel Capacity - Perry Marshall - video
“Because of Shannon channel capacity that previous (first) codon alphabet had to be at least as complex as the current codon alphabet (DNA code), otherwise transferring the information from the simpler alphabet into the current alphabet would have been mathematically impossible”
Donald E. Johnson – Bioinformatics: The Information in Life
Deciphering Design in the Genetic Code - Fazale Rana
Excerpt: When researchers calculated the error-minimization capacity of one million randomly generated genetic codes, they discovered that the error-minimization values formed a distribution where the naturally occurring genetic code's capacity occurred outside the distribution. Researchers estimate the existence of 10 possible genetic codes possessing the same type and degree of redundancy as the universal genetic code. All of these codes fall within the error-minimization distribution. This finding means that of the 10 possible genetic codes, few, if any, have an error-minimization capacity that approaches the code found universally in nature.
“The genetic code’s error-minimization properties are far more dramatic than these (one in a million) results indicate. When the researchers calculated the error-minimization capacity of the one million randomly generated genetic codes, they discovered that the error-minimization values formed a distribution. Researchers estimate the existence of 10^18 possible genetic codes possessing the same type and degree of redundancy as the universal genetic code. All of these codes fall within the error-minimization distribution. This means of 10^18 codes few, if any have an error-minimization capacity that approaches the code found universally throughout nature.”
Fazale Rana - From page 175; 'The Cell’s Design'
Here is a comment on a study of a 'putative primitive' amino acid set;
DNA - The Genetic Code - Optimal Error Minimization & Parallel Codes - Dr. Fazale Rana - video
Excerpt: It appears then, that the genetic code has been put together in view of minimizing not just the occurence of amino acid substitution mutations, but also the detrimental effects that would result when amino acid substitution mutations do occur.
Though the DNA code is found to be optimal from a error minimization standpoint, it is also now found that the fidelity of the genetic code, of how a specific amino acid is spelled, is far greater than had at first been thought:
Synonymous Codons: Another Gene Expression Regulation Mechanism - September 2010
Excerpt: There are 64 possible triplet codons in the DNA code, but only 20 amino acids they produce. As one can see, some amino acids can be coded by up to six “synonyms” of triplet codons: e.g., the codes AGA, AGG, CGA, CGC, CGG, and CGU will all yield arginine when translated by the ribosome. If the same amino acid results, what difference could the synonymous codons make? The researchers found that alternate spellings might affect the timing of translation in the ribosome tunnel, and slight delays could influence how the polypeptide begins its folding. This, in turn, might affect what chemical tags get put onto the polypeptide in the post-translational process. In the case of actin, the protein that forms transport highways for muscle and other things, the researchers found that synonymous codons produced very different functional roles for the “isoform” proteins that resulted in non-muscle cells,,, In their conclusion, they repeated, “Whatever the exact mechanism, the discovery of Zhang et al. that synonymous codon changes can so profoundly change the role of a protein adds a new level of complexity to how we interpret the genetic code.”,,,
Werner Gitt - In The Beginning Was Information - p. 95
Collective evolution and the genetic code - 2006:
Excerpt: The genetic code could well be optimized to a greater extent than anything else in biology and yet is generally regarded as the biological element least capable of evolving.
Here, we show that the universal genetic code can efficiently carry arbitrary parallel codes much better than the vast majority of other possible genetic codes.... the present findings support the view that protein-coding regions can carry abundant parallel codes.
The data compression of some stretches of human DNA is estimated to be up to 12 codes thick (12 different ways of DNA transcription) (Trifonov, 1989). (This is well beyond the complexity of any computer code ever written by man). John Sanford - Genetic Entropy
The multiple codes of nucleotide sequences. Trifonov EN. - 1989
Excerpt: Nucleotide sequences carry genetic information of many different kinds, not just instructions for protein synthesis (triplet code).
"In the last ten years, at least 20 different natural information codes were discovered in life, each operating to arbitrary conventions (not determined by law or physicality). Examples include protein address codes [Ber08B], acetylation codes [Kni06], RNA codes [Fai07], metabolic codes [Bru07], cytoskeleton codes [Gim08], histone codes [Jen01], and alternative splicing codes [Bar10].
Donald E. Johnson – Programming of Life – pg.51 - 2010
DNA Caught Rock 'N Rollin': On Rare Occasions DNA Dances Itself Into a Different Shape - January 2011
Ends and Means: More on Meyer and Nelson in BIO-Complexity - September 2011
Excerpt: According to Garrett and Grisham's Biochemistry, the aminoacyl tRNA snythetase is a "second genetic code" because it must discriminate among each of the twenty amino acids and then call out the proper tRNA for that amino acid: "Although the primary genetic code is key to understanding the central dogma of molecular biology on how DNA encodes proteins, the second genetic code is just as crucial to the fidelity of information transfer."
Histone Inspectors: Codes and More Codes - Cornelius Hunter - March 2010
Excerpt: By now most people know about the DNA code. A DNA strand consists of a sequence of molecules, or letters, that encodes for proteins. Many people do not realize, however, that there are additional, more nuanced, codes associated with the DNA.
Four More DNA Bases? - August 2011
Excerpt: As technology allows us to delve ever deeper into the inner workings of the cell, we continue to find layer-upon-layer of complexity. DNA, in particular, is an incredibly complex information-bearing molecule that bears the hallmarks of design.
Besides multiple layers of 'classical information' embedded in overlapping layers throughout the DNA, there has now been discovered another layer of 'quantum information' embedded throughout the DNA:
Quantum Information In DNA & Protein Folding - short video
Human DNA is like a computer program but far, far more advanced than any software we've ever created.
Bill Gates, The Road Ahead, 1996, p. 188
The Coding Found In DNA Surpasses Man's Ability To Code - Stephen Meyer - video
Stephen Meyer - Excerpted Clip of CBN interview on problems of Craig Venter's Synthetic Life - DNA - Complexity Of The Cell - Layered Information - video
Genetic Entropy - Dr. John Sanford - Evolution vs. Reality (Super Programming in the Genome that 'dwarfs' our computer programs) - video
DNA - Evolution Vs. Polyfuctionality - video
DNA - Poly-Functional Complexity equals Poly-Constrained Complexity
Do you believe Richard Dawkins exists?
Excerpt: DNA is the best information storage mechanism known to man. A single pinhead of DNA contains as much information as could be stored on 2 million two-terabyte hard drives.
Bill Gates, in recognizing the superiority found in Genetic Coding compared to the best computer coding we now have, has now funded research into this area:
Welcome to CoSBi - (Computational and Systems Biology)
Excerpt: Biological systems are the most parallel systems ever studied and we hope to use our better understanding of how living systems handle information to design new computational paradigms, programming languages and software development environments. The net result would be the design and implementation of better applications firmly grounded on new computational, massively parallel paradigms in many different areas.
How DNA Compares To Human Language - Perry Marshall - video
Yet the DNA code is not even reducible to the laws of physics or chemistry:
The Origin of Life and The Suppression of Truth
Excerpt: 'Many claims have been made that nucleotides of DNA have been produced in such “spark and soup” experiments. However, after a careful review of the scientific literature, evolutionist Robert Shapiro stated that the nucleotides of DNA and RNA, "….have never been reported in any amount in such sources, yet a mythology has emerged that maintains the opposite….I have seen several statements in scientific sources which claim that proteins and nucleic acids themselves have been prepared… These errors reflect the operation of an entire belief system… The facts do not support his belief…Such thoughts may be comforting, but they run far ahead of any experimental validation."
Life’s Irreducible Structure
Excerpt: “Mechanisms, whether man-made or morphological, are boundary conditions harnessing the laws of inanimate nature, being themselves irreducible to those laws. The pattern of organic bases in DNA which functions as a genetic code is a boundary condition irreducible to physics and chemistry." Michael Polanyi - Hungarian polymath - 1968 - Science (Vol. 160. no. 3834, pp. 1308 – 1312)
“an attempt to explain the formation of the genetic code from the chemical components of DNA… is comparable to the assumption that the text of a book originates from the paper molecules on which the sentences appear, and not from any external source of information.”
Dr. Wilder-Smith
The Capabilities of Chaos and Complexity - David L. Abel - 2009
Excerpt: "A monstrous ravine runs through presumed objective reality. It is the great divide between physicality and formalism. On the one side of this Grand Canyon lies everything that can be explained by the chance and necessity of physicodynamics. On the other side lies those phenomena than can only be explained by formal choice contingency and decision theory—the ability to choose with intent what aspects of ontological being will be preferred, pursued, selected, rearranged, integrated, organized, preserved, and used. Physical dynamics includes spontaneous non linear phenomena, but not our formal applied-science called “non linear dynamics”(i.e. language,information).
i.e. There are no physical or chemical forces between the nucleotides along the linear axis of DNA (where the information is) that causes the sequence of nucleotides to exist as they do. In fact as far as the foundational laws of the universe are concerned the DNA molecule doesn’t even have to exist at all.
Judge Rules DNA is Unique (and not patentable) Because it Carries Functional Information - March 2010
“Today the idea that DNA carries genetic information in its long chain of nucleotides is so fundamental to biological thought that it is sometimes difficult to realize the enormous intellectual gap that it filled.... DNA is relatively inert chemically.”
Stephen Meyer is interviewed about the "information problem" in DNA, Signature in the Cell - video
The DNA Enigma - The Ultimate Chicken and Egg Problem - Chris Ashcraft - video
The DNA Enigma - Where Did The Information Come From? - Stephen C. Meyer - video
Believing Life's 'Signature in the Cell' an Interview with Stephen Meyer - CBN video
Every Bit Digital DNA’s Programming Really Bugs Some ID Critics - March 2010
Excerpt: In 2003 renowned biologist Leroy Hood and biotech guru David Galas authored a review article in the world’s leading scientific journal, Nature, titled, “The digital code of DNA.”,,, MIT Professor of Mechanical Engineering Seth Lloyd (no friend of ID) likewise eloquently explains why DNA has a “digital” nature: "It’s been known since the structure of DNA was elucidated that DNA is very digital. There are four possible base pairs per site, two bits per site, three and a half billion sites, seven billion bits of information in the human DNA. There’s a very recognizable digital code of the kind that electrical engineers rediscovered in the 1950s that maps the codes for sequences of DNA onto expressions of proteins."
Stephen C. Meyer - Signature In The Cell:
"DNA functions like a software program," "We know from experience that software comes from programmers. Information--whether inscribed in hieroglyphics, written in a book or encoded in a radio signal--always arises from an intelligent source. So the discovery of digital code in DNA provides evidence that the information in DNA also had an intelligent source."
Extreme Software Design In Cells - Stephen Meyer - video
DNA - The Genetic Code - Optimization, Error Minimization & Parallel Codes - Fazale Rana - video
As well as coding optimization, DNA is also optimized to prevent damage from light:
DNA Optimized for Photostability
Excerpt: These nucleobases maximally absorb UV-radiation at the same wavelengths that are most effectively shielded by ozone. Moreover, the chemical structures of the nucleobases of DNA allow the UV-radiation to be efficiently radiated away after it has been absorbed, restricting the opportunity for damage.
The materialist must also account for the overriding complex architectural organization of DNA:
DNA Wrapping (Histone Protein Wrapping to Cell Division)- video
DNA - Replication, Wrapping & Mitosis - video
Dr. Jerry Bergman, "Divine Engineering: Unraveling DNA's Design":
The DNA packing process is both complex and elegant and is so efficient that it achieves a reduction in length of DNA by a factor of 1 million.
DNA Packaging: Nucleosomes and Chromatin
each of us has enough DNA to go from here to the Sun and back more than 300 times, or around Earth's equator 2.5 million times! How is this possible?
It turns out that DNA is also optimized for 'maximally dense packing' as well:
Comprehensive Mapping of Long-Range Interactions Reveals Folding Principles of the Human Genome - Oct. 2009
3-D Structure Of Human Genome: Fractal Globule Architecture Packs Two Meters Of DNA Into Each Cell - Oct. 2009
Scientists' 3-D View of Genes-at-Work Is Paradigm Shift in Genetics - Dec. 2009
Excerpt: Highly coordinated chromosomal choreography leads genes and the sequences controlling them, which are often positioned huge distances apart on chromosomes, to these 'hot spots'. Once close together within the same transcription factory, genes get switched on (a process called transcription) at an appropriate level at the right time in a specific cell type. This is the first demonstration that genes encoding proteins with related physiological role visit the same factory.
Although evolution depends on 'mutations/errors' to DNA to make evolution plausible, there are multiple layers of error correction in the cell to protect against any "random changes" to DNA from happening in the first place:
The Evolutionary Dynamics of Digital and Nucleotide Codes: A Mutation Protection Perspective - February 2011
Excerpt: "Unbounded random change of nucleotide codes through the accumulation of irreparable, advantageous, code-expanding, inheritable mutations at the level of individual nucleotides, as proposed by evolutionary theory, requires the mutation protection at the level of the individual nucleotides and at the higher levels of the code to be switched off or at least to dysfunction. Dysfunctioning mutation protection, however, is the origin of cancer and hereditary diseases, which reduce the capacity to live and to reproduce. Our mutation protection perspective of the evolutionary dynamics of digital and nucleotide codes thus reveals the presence of a paradox in evolutionary theory between the necessity and the disadvantage of dysfunctioning mutation protection. This mutation protection paradox, which is closely related with the paradox between evolvability and mutational robustness, needs further investigation."
Contradiction in evolutionary theory - video - (The contradiction between extensive DNA repair mechanisms and the necessity of 'random mutations/errors' for Darwinian evolution)
The Darwinism contradiction of repair systems
Excerpt: The bottom line is that repair mechanisms are incompatible with Darwinism in principle. Since sophisticated repair mechanisms do exist in the cell after all, then the thing to discard in the dilemma to avoid the contradiction necessarily is the Darwinist dogma.
Repair mechanisms in DNA include:
A proofreading system that catches almost all errors
A mismatch repair system to back up the proofreading system
Photoreactivation (light repair)
Removal of methyl or ethyl groups by O6 – methylguanine methyltransferase
Base excision repair
Nucleotide excision repair
Double-strand DNA break repair
Recombination repair
Error-prone bypass
Scientists Decipher Missing Piece Of First-responder DNA Repair Machine - Oct. 2009
Quantum Dots Spotlight DNA-Repair Proteins in Motion - March 2010
Excerpt: "How this system works is an important unanswered question in this field," he said. "It has to be able to identify very small mistakes in a 3-dimensional morass of gene strands. It's akin to spotting potholes on every street all over the country and getting them fixed before the next rush hour." Dr. Bennett Van Houten - of note: A bacterium has about 40 team members on its pothole crew. That allows its entire genome to be scanned for errors in 20 minutes, the typical doubling time.,, These smart machines can apparently also interact with other damage control teams if they cannot fix the problem on the spot.
Of note: DNA repair machines ‘Fixing every pothole in America before the next rush hour’ is analogous to the traveling salesman problem. The traveling salesman problem is a NP-hard (read: very hard) problem in computer science; The problem involves finding the shortest possible route between cities, visiting each city only once. ‘Traveling salesman problems’ are notorious for keeping supercomputers busy for days.
NP-hard problem
Since it is obvious that there is not a material CPU (central processing unit) in the DNA, or cell, busily computing answers to this monster logistic problem, in a purely ‘material’ fashion, by crunching bits, then it is readily apparent that this monster ‘traveling salesman problem’, for DNA repair, is somehow being computed by ‘non-local’ quantum computation within the cell and/or within DNA;
Of related interest:
Electric (Quantum) DNA repair - video!
DNA Computer
Excerpt: DNA computers will work through the use of DNA-based logic gates. These logic gates are very much similar to what is used in our computers today with the only difference being the composition of the input and output signals.,,, With the use of DNA logic gates, a DNA computer the size of a teardrop will be more powerful than today’s most powerful supercomputer. A DNA chip less than the size of a dime will have the capacity to perform 10 trillion parallel calculations at one time as well as hold ten terabytes of data. The capacity to perform parallel calculations, much more trillions of parallel calculations, is something silicon-based computers are not able to do. As such, a complex mathematical problem that could take silicon-based computers thousands of years to solve can be done by DNA computers in hours.
further notes:
Researchers discover how key enzyme repairs sun-damaged DNA - July 2010
Excerpt: Ohio State University physicist and chemist Dongping Zhong and his colleagues describe how they were able to observe the enzyme, called photolyase, inject a single electron and proton into an injured strand of DNA. The two subatomic particles healed the damage in a few billionths of a second. "It sounds simple, but those two atomic particles actually initiated a very complex series of chemical reactions," said Zhong,,, "It all happened very fast, and the timing had to be just right."
DNA 'molecular scissors' discovered - July 2010
Excerpt: ' We discovered a new protein, FAN1, which is essential for the repair of DNA breaks and other types of DNA damage.
More DNA Repair Wonders Found - October 2010
Excerpt: This specialized enzyme may attract other repair enzymes to the site, and “speeds up the process by about 100 times.” The enzyme “uses several rod-like helical structures... to grab hold of DNA.”,,, On another DNA-repair front, today’s Nature described a “protein giant” named BRCA2 that is critically involved in DNA repair, specifically targeting the dangerous double-stranded breaks that can lead to serious health consequences
‘How good would each typists have to be, in order to match the DNA’s performance? The answer is almost too ludicrous to express. For what it is worth, every typists would have to have an error rate of about one in a trillion; that is, he would have to be accurate enough to make only a single error in typing the Bible 250,000 times at a stretch. A good secretary in real life has an error rate of about one per page. This is about a billion times the error rate of the histone H4 gene. A line of real life secretaries (without a correcting reference) would degrade the text to 99 percent of its original by the 20th member of the line of 20 billion. By the 10,000 member of the line less than 1 percent would survive. The point near total degradation would be reached before 99.9995% of the typists had even seen it.
Richard Dawkins - The blind watchmaker - Page 123-124
Moreover, the protein machinery that replicates DNA is found to be vastly different in even the most ancient of different single celled organisms:
Did DNA replication evolve twice independently? - Koonin
Excerpt: However, several core components of the bacterial (DNA) replication machinery are unrelated or only distantly related to the functionally equivalent components of the archaeal/eukaryotic (DNA) replication apparatus.
There simply is no smooth 'gradual transition' to be found between these most ancient of life forms, bacteria and archaea, as this following articles and videos clearly point out:
Was our oldest ancestor a proton-powered rock?
Excerpt: In particular, the detailed mechanics of DNA replication would have been quite different. It looks as if DNA replication evolved independently in bacteria and archaea,... Even more baffling, says Martin, neither the cell membranes nor the cell walls have any details in common (between the bacteria and the archaea).
Problems of the RNA World - Did DNA Evolve Twice? - Dr. Fazale Rana - video
An enormous gap exists between prokaryote (bacteria and cyanobacteria) cells and eukaryote (protists, plants and animals) type of cells. A crucial difference between prokaryotes and eukaryotes is the means they use to produce ATP (energy).
Mitochondria - Molecular Machine - Powerhouse Of The Cell - video
On The Non-Evidence For The Endosymbiotic Origin Of The Mitochondria - March 2011
Bacteria Too Complex To Be Primitive Eukaryote Ancestors - July 2010
Excerpt: “Bacteria have long been considered simple relatives of eukaryotes,” wrote Alan Wolfe for his colleagues at Loyola. “Obviously, this misperception must be modified.... There is a whole process going on that we have been blind to.”,,, For one thing, Forterre and Gribaldo revealed serious shortcomings with the popular “endosymbiosis” model – the idea that a prokaryote engulfed an archaea and gave rise to a symbiotic relationship that produced a eukaryote.
On the Origin of Mitochondria: Reasons for Skepticism on the Endosymbiotic Story -
Jonathan M. - January 10, 2012
Materialism simply has no credible answer for how this extreme level of complexity 'accidentally' arose in the first living cell, nor how this extreme integrated complexity found in life randomly evolved to the next 'simple' step of life, and to imagine/believe it can happen by accident, with no compelling evidence to support your position, is not empirical science. In fact, believing in something without any reasonable evidence whatsoever is usually called blind faith.
Even more problematic for evolutionists is that even within the 'bacterial world' there are enormous unexplained gaps of completely unique genes within each different species of bacteria:
ORFan Genes Challenge Common Descent – Paul Nelson – video with references
Because of these insurmountable problems, for generating novel functional proteins, or meaningful DNA, or any meaningfully functional information whatsoever, materialists are trying real hard to sell the 'RNA World' to the general public. Yet, we have absolutely no compelling reason to believe that a hypothetical 'RNA World' will ever start magically generating the massive amounts of complex functional information required for the first life. Here is a sampling of many critiques against the RNA world hypothesis:
Three subsets of sequence complexity and their relevance to biopolymeric information - David L Abel and Jack T Trevors:
Excerpt: Genetic algorithms instruct sophisticated biological organization. Three qualitative kinds of sequence complexity exist: random (RSC), ordered (OSC), and functional (FSC). FSC alone provides algorithmic instruction...No empirical evidence exists of either RSC of OSC ever having produced a single instance of sophisticated biological organization...It is only in researching the pre-RNA world that the problem of single-stranded metabolically functional sequencing of ribonucleotides (or their analogs) becomes acute.
Origin of Life: Claiming Something for Almost Nothing (RNA)
Excerpt: Yarus admitted, “the tiny replicator has not been found, and that its existence will be decided by experiments not yet done, perhaps not yet imagined.” But does this (laboratory) work support a naturalistic origin of life? A key question is whether a (self-replicating) molecule could form under plausible prebiotic conditions. Here’s how the paper described their work in the lab to get this (precursor) molecule:,,(several 'unnatural' complex steps are listed)
Doug Axe humorously dismantled one of the latest ploys by materialists to oversell their meager evidence for a "self-replicating RNA molecule" in this following article:
Biologic Institute Announces First Self-Replicating Motor Vehicle - Doug Axe -
Excerpt: "So, advertising this as “self-replication” is a bit like advertising something as “free” when the actual deal is 1 free for every 1,600 purchased. It’s even worse, though, because you need lots of the pre-made precursors in cozy proximity to a finished RNA in order to kick the process off. That makes the real deal more like n free for every 1,600 n purchased, with the caveats that n must be a very large number and that full payment must be made in advance."
Stephen Meyer points out that intelligence design was clearly required for even this meager result that Dr. Axe spoke about:
Biological Information: The Puzzle of Life that Darwinism Hasn’t Solved - Stephen C. Meyer
Thus, as my book Signature in the Cell shows, Joyce’s experiments not only demonstrate that self-replication itself depends upon information-rich molecules, but they also confirm that intelligent design is the only known means by which information arises.
Stephen Meyer Responds to Fletcher in Times Literary Supplement - Jan. 2010
Excerpt: everything we know about RNA catalysts, including those with partial self-copying capacity, shows that the function of these molecules depends upon the precise arrangement of their information-carrying constituents (i.e., their nucleotide bases). Functional RNA catalysts arise only once RNA bases are specifically-arranged into information-rich sequences—that is, function arises after, not before, the information problem has been solved.
Stephen C. Meyer and Paul A. Nelson
Excerpt: Although Yarus et al. claim that the DRT model undermines an intelligent design explanation for the origin of the genetic code, the model’s many shortcomings in fact illustrate the insufficiency of undirected chemistry to construct the semantic system represented by the code we see today.
Ann Gauger offers a easy to understand outline of Meyer and Nelson's preceding paper:
In BIO-Complexity, Meyer and Nelson Debunk DRT - Ann Gauger
Excerpt: While DNA carries information necessary to build cells, it performs no chemistry and builds no cellular structures by itself. Rather, the information in DNA must be translated into proteins, which then can carry out the various chemical and structural functions of life. But there is no direct way to convert a given DNA sequence into a protein sequence -- no direct chemical association between DNA nucleotides and amino acids. Some sort of decoding mechanism is needed to translate the information encoded in DNA into protein.
That decoding mechanism involves a whole host of enzymes, RNAs and regulatory molecules, all functioning as an elegant, efficient, accurate and complicated system for copying and translating the information in DNA into a usable form.,,, The problem is, this decoding system is self-referential and causally circular. Explaining its origin becomes a chicken and egg problem. As it stands now, you need the machinery that translates DNA into protein in order to make the very same machinery that translates DNA into protein.,,, There is no natural affinity between RNAs, amino acids, and codes. And the origin of life remains inexplicable in materialistic terms.
Materialists have not even created all 4 'letters' of RNA by natural means:
Response to Darrel Falk’s Review of Signature in the Cell - Stephen Meyer - Jan. 2010
Excerpt: Sutherland’s work only partially addresses the first and least severe of these difficulties: the problem of generating the constituent building blocks or monomers in plausible pre-biotic conditions. It does not address the more severe problem of explaining how the bases in nucleic acids (either DNA or RNA) acquired their specific information-rich arrangements.
Stirring the Soup - May 2009
"essentially, the scientists have succeeded in creating a couple of letters of the biological alphabet (in a "thermodynamically uphill" environment). What they need to do now is create the remaining letters, and then show how these letters were able to attach themselves together to form long chains of RNA, and arrange themselves in a specific order to encode information for creating specific proteins, and instructions to assemble the proteins into cells, tissues, organs, systems, and finally, complete phenotypes."
Uncommon Descent - C Bass:
Scientists Say Intelligent Designer Needed for Origin of Life Chemistry
Excerpt: Organic chemist Dr. Charles Garner recently noted in private correspondence that "while this work helps one imagine how RNA might form, it does nothing to address the information content of RNA. So, yes, there was a lot of guidance by an intelligent chemist." Sutherland's research produced only 2 of the 4 RNA nucleobases, and Dr. Garner also explained why, as is often the case, "the basic chemistry itself also required the hand of an intelligent chemist."
Meyer Responds to Stephen Fletcher - Stephen Meyer - March 2010
Excerpt: Nevertheless, this work does nothing to address the much more acute problem of explaining how the nucleotide bases in DNA or RNA acquired their specific information-rich arrangements, which is the central topic of my book (Signature In The Cell). In effect, the Powner (Sutherland) study helps explain the origin of the “letters” in the genetic text, but not their specific arrangement into functional “words” or “sentences.”
Deflating the synthetic proofs of the RNA World - David Tyler - August 2011
Excerpt: There may be a consensus about the RNA World, but it is not a consensus based on evidence. The approach is supported by synthetic proofs drawn from unrealistic laboratory experiments, showing all the signs of a dogmatism that pastes its ideas on to nature.
Here are some more critiques of the 'RNA World' scenario:
Did Life Begin in an RNA World?
Self Replication and Perpetual Motion - The Second Law's Take On The RNA World
Chemistry by Chance: A Formula for Non-Life by Charles McCombs, Ph.D.
Excerpt: The following eight obstacles in chemistry ensure that life by chance is untenable.
1. The Problem of Unreactivity
2. The Problem of Ionization
3. The Problem of Mass Action
4. The Problem of Reactivity
5. The Problem of Selectivity
6. The Problem of Solubility
7. The Problem of Sugar
8. The Problem of Chirality
The RNA World: A Critique - Gordon Mills and Dean Kenyon:
OOL (Origin Of Life) on the Rocks:
New Scientist Weighs in on the Origin of Life - Jonathan M. - August 17, 2011
Excerpt: To conclude, Michael Marshall's New Scientist article does not even come close to demonstrating the feasibility of the RNA world hypothesis, much less the origin of the sequence-specific information necessary for even the simplest of biological systems.
The Origin of Life: An RNA World? - Jonathan M. - August 22, 2011 (Refutation of Nick Matzke)
Excerpt Summary & Conclusion
We have explored just a small handful of the confounding difficulties confronting the chemical origin of life. This is not a god-of-the-gaps argument, as Matzke claims, but rather a positive argument, based on our uniform and repeated experience of cause-and-effect. It is not based on what we don't know, but on what we do know: that intelligence is a necessary and sufficient condition for the production of novel complex and functionally specified information. The design inference is based on sound and conventional scientific methodology. It utilizes the historical or abductive method and infers to the best explanation from multiple competing hypotheses.
Origin of Life: Claiming Something for Almost Nothing - March 2010
Excerpt: A look through the paper, however, shows complex lab procedures that are hard to justify in nature. (intelligence is required for even this meager step),,, the problem of sequencing the nucleotides – the key question – has not been addressed. Where did the genetic code come from? One ribozyme is not a code.
Excerpt: As Stephen Meyer has comprehensively documented in his book, Signature in the Cell, the RNA-world hypothesis is fraught with problems, quite apart from those pertaining to the origin of information. For example, the formation of the first RNA molecule would have required the prior emergence of smaller constituent molecules, including ribose sugar, phosphate molecules, and the four RNA nucleotide bases. However, it turns out that both synthesizing and maintaining these essential RNA building blocks -- especially ribose -- and the nucleotide bases is a very difficult task under origin-of-life conditions.
Since the RNA-World has so many insurmountable problems, some evolutionists have tried the 'metabolism first' scenario to try get past the gargantuan probabilistic hurdles facing the origin of life 'problem'. But yet again the evolutionists have failed miserably:
Lack of evolvability in self-sustaining autocatalytic networks constraints metabolism-first scenarios for the origin of life - Dec. 2009
Excerpt: we demonstrate here that replication of compositional information is so inaccurate that fitter compositional genomes cannot be maintained by selection and, therefore, the system lacks evolvability (i.e., it cannot substantially depart from the asymptotic steady-state solution already built-in in the dynamical equations).
A realistic look at the preceding paper is found here:
Metabolism-First Origin of Life Won’t Work
Excerpt: "“We do not know how the transition to digitally encoded information has happened in the originally inanimate world; that is, we do not know where the RNA world might have come from,"
Douglas Axe also comments on the results of the preceding study here:
Explaining Life by Explaining it Away — February 6th, 2010 by Douglas Axe
Excerpt: Think of it this way. If no conceivable mixture of small molecules provides even a faint hope for the emergence of metabolism catalyzed by genetically encoded enzymes, then whatever these mixtures may or may not do, they can’t explain life as we see it. And as the evidence now stands, one would be hard pressed to argue that there is even a faint hope.
Other leading researchers find the metabolism first scenario wholly implausible:
“Pigs don’t fly”
Excerpt: One of the most devastating critiques of the new “metabolism first” approaches to the origin of life was leveled two years ago by Leslie Orgel right before he died (Nov. 2007)
Even leading 'new atheist' Richard Dawkins admits no one has a clue how the first living cell 'evolved':
Leading Darwinist Richard Dawkins Dodges Debates, Refuses to Defend Evolution - Stephen Meyer
"(Richard) Dawkins says that there is no evidence for intelligent design in life, and yet he also acknowledges that neither he nor anyone else has an evolutionary explanation for the origin of the first living cell. We know now even the simplest forms of life are chock-full of digital code, complex information processing systems and other exquisite forms of nanotechnology."
In realizing the staggering impossibilities presented for any conceivable origin of life scenario, some materialists, including Francis Crick the co-discover of the DNA helix, have, in my opinion, completely left the field of experimental biology and suggested pan-spermia, the theory pre-biotic amino acids, or life itself, came to earth from outer-space on comets, or even delivered by UFO's, to account for this sudden appearance of life on earth.
Richard Dawkins Vs. Ben Stein - The UFO Interview - video
The panspermia hypothesis, which is really born out of sheer desperation rather than any sound reason on the materialist part, has several problems. One problem is astronomers, using spectral analysis, have not found any vast reservoirs of biological molecules anywhere they have looked in the universe (Ross; Creation as Science). Another problem is, even if comets were nothing but pre-biotic amino acid snowballs, how are the amino acids going to molecularly survive the furnace-like temperatures generated when the comet crashes into the earth?
Botching Evolutionary Science - Casey Luskin - April 2009
Excerpt: Of course, the textbook makes no mention of studies which have shown that such impacts would likely vaporize organic molecules carried to earth. (See Edward Anders. “Pre-biotic organic matter from comets and asteroids.” Nature, Vol. 342:255-257 (1989).
Dr. Hugh Ross has surmised delivery by meteorites or comets has now effectively been ruled out because of the homochirality problem of finding only the 'left handed' amino acids needed to build life in the universe somewhere:
"Circularly polarized UV light only produces a 17% excess (of R or L-amino-acids) and such selective destruction of organics require monochromatic light (monochromatic light isn’t known to occur naturally anywhere in the universe). So directed panspermia (life delivered by UFO's) is their last resort." Hugh Ross
I would like to reiterate, materialism postulated a very simple first cell, yet the simplest cell scientists have been able to find on earth, which can't even be seen with the naked eye, is vastly more complex than any machine man has ever made through concerted effort. This is especially true since a cell can self-replicate with seeming ease whereas a machine cannot. This following site has a interactive graph that lets people look into this 'invisible' world of microbes:
CELL TO CARBON ATOM - SIZE AND SCALE - Interactive Graph - Move cursor at the bottom of graph to the right to reduce the size:
Here is a neat little video clip that I wish was a bit longer (they say a longer one is in the works):
The Flow – Resonance Film – video
Description: The Flow, from inside a cell, looks at the supervening layers of reality that we can observe, from quarks to nucleons to atoms and beyond. The deeper we go into the foundations of reality the more it loses its form, eventually becoming a pure mathematical conception.
The smallest cyano-bacterium known to science has hundreds of millions of individual atomic molecules (not counting water molecules), divided into nearly a thousand completely distinct atomic molecule types; and a genome (DNA sequence) of 1.8 million bits, with over a million individual protein molecules which are sub-divided into hundreds of distinct protein classes. Once again, the integrated complexity found in the simplest bacterium known to science easily outclasses the integrated complexity of any machine man has ever made. These following articles and videos make this point clear:
"The manuals needed for building the entire space shuttle and all its components and all its support systems would be truly enormous! Yet the specified complexity (information) of even the simplest form of life - a bacterium - is arguably as great as that of the space shuttle."
J.C. Sanford - Geneticist - Genetic Entropy and the Mystery Of the Genome
'The information content of a simple cell has been estimated as around 10^12 bits, comparable to about a hundred million pages of the Encyclopedia Britannica."Carl Sagan, "Life" in Encyclopedia Britannica: Macropaedia (1974 ed.), pp. 893-894
Ben Stein - EXPELLED - The Staggering Complexity Of The Cell - video
“Although the tiniest living things known to science, bacterial cells, are incredibly small (10^-12 grams), each is a veritable micro-miniaturized factory containing thousands of elegantly designed pieces of intricate molecular machinery, made up altogether of one hundred thousand million atoms, far more complicated than any machine built by man and absolutely without parallel in the non-living world”. Michael Denton, "Evolution: A Theory in Crisis," 1986, p. 250.
The Cell as a Collection of Protein Machines
"We have always underestimated cells. Undoubtedly we still do today,,, Indeed, the entire cell can be viewed as a factory that contains an elaborate network of interlocking assembly lines, each which is composed of a set of large protein machines."
Bruce Alberts: Former President, National Academy of Sciences;
The Cell - A World Of Complexity Darwin Never Dreamed Of - Donald E. Johnson - video
Bioinformatics: The Information in Life - Donald Johnson - video
Programming of Life - February 2012 - podcast
Here is the video that goes with the 'Programming Of Life' book:
Programming of Life - video
Here is Dr. Johnson's Home Page;
Science Integrity - Exposing Unsubstantiated Science Claims
On a slide in the preceding video, entitled 'Information Systems In Life', Dr. Johnson points out that:
* the genetic system is a pre-existing operating system;
* the specific genetic program (genome) is an application;
* the native language has codon-based encryption system;
* codes are decrypted and output to tRNA computers;
Cells Are Like Robust Computational Systems, - June 2009
Nanoelectronic Transistor Combined With Biological Machine Could Lead To Better Electronics: - Aug. 2009
Excerpt: While modern communication devices rely on electric fields and currents to carry the flow of information, biological systems are much more complex. They use an arsenal of membrane receptors, channels and pumps to control signal transduction that is unmatched by even the most powerful computers.
Paramecium caudatum can communicate with neighbors using a non-molecular method, probably photons. The cell populations were separated either with glass allowing photon transmission from 340 nm to longer waves, or quartz being transmittable from 150 nm, i.e. from UVlight to longer waves. Energy uptake, cell division rate and growth correlation were influenced.
Systems biology: Untangling the protein web - July 2009
Excerpt: Vidal thinks that technological improvements — especially in nanotechnology, to generate more data, and microscopy, to explore interaction inside cells, along with increased computer power — are required to push systems biology forward. "Combine all this and you can start to think that maybe some of the information flow can be captured," he says. But when it comes to figuring out the best way to explore information flow in cells, Tyers jokes that it is like comparing different degrees of infinity. "The interesting point coming out of all these studies is how complex these systems are — the different feedback loops and how they cross-regulate each other and adapt to perturbations are only just becoming apparent," he says. "The simple pathway models are a gross oversimplification of what is actually happening."
Simulations reveal new information about the gateway to the cell nucleus
Excerpt: “There are whole machines in living cells that are made of hundreds or thousands of proteins,” says Schulten, “and the nuclear pore is one of those systems. It’s actually one of the most magnificent systems in the cell.”,,,Hundreds to thousands of NPCs are embedded in the nuclear envelope of each cell,"...
Life Leads the Way to Invention - Feb. 2010
Excerpt: a cell is 10,000 times more energy-efficient than a transistor. “ In one second, a cell performs about 10 million energy-consuming chemical reactions, which altogether require about one picowatt (one millionth millionth of a watt) of power.” This and other amazing facts lead to an obvious conclusion: inventors ought to look to life for ideas.,,, Essentially, cells may be viewed as circuits that use molecules, ions, proteins and DNA instead of electrons and transistors. That analogy suggests that it should be possible to build electronic chips – what Sarpeshkar calls “cellular chemical computers” – that mimic chemical reactions very efficiently and on a very fast timescale.
This stunning energy efficiency of a cell is found across all life domains, thus strongly suggesting that all life on earth was Intelligently Design for maximal efficiency instead of accidentally, and gradually, evolved:
Mean mass-specific metabolic rates are strikingly similar across life's major domains: Evidence for life's metabolic optimum
Excerpt: Here, using the largest database to date, for 3,006 species that includes most of the range of biological diversity on the planet—from bacteria to elephants, and algae to sapling trees—we show that metabolism displays a striking degree of homeostasis across all of life.
Also of interest is that a cell apparently seems to be successfully designed along the very stringent guidelines laid out by Landauer's principle of 'reversible computation' in order to achieve such amazing energy efficiency, something man has yet to accomplish in any meaningful way for computers:
Notes on Landauer’s principle, reversible computation, and Maxwell’s Demon - Charles H. Bennett
Excerpt: Of course, in practice, almost all data processing is done on macroscopic apparatus, dissipating macroscopic amounts of energy far in excess of what would be required by Landauer’s principle. Nevertheless, some stages of biomolecular information processing, such as transcription of DNA to RNA, appear to be accomplished by chemical reactions that are reversible not only in principle but in practice.,,,,
Further quotes on the unmatched complexity of the cell:
“Each cell with genetic information, from bacteria to man, consists of artificial languages and their decoding systems, memory banks for information storage and retrieval, elegant control systems regulating the automated assembly of parts and components, error fail-safe and proof-reading devices utilized for quality control, assembly processes involving the principle of prefabrication and modular construction and a capacity not equaled in any of our most advanced machines, for it would be capable of replicating its entire structure within a matter of a few hours" Geneticist Michael Denton PhD. Evolution: A Theory In Crisis pg. 329
"To grasp the reality of life as it has been revealed by molecular biology, we must first magnify a cell a thousand million times until it is 20 kilometers in diameter and resembles a giant airship large enough to cover a great city like London or New York. What we would see then would be an object of unparalleled complexity,...we would find ourselves in a world of supreme technology and bewildering complexity."
Geneticist Michael Denton PhD., Evolution: A Theory In Crisis, pg.328
Building a Cell: Staggering Complexity: - Feb. 2010
Excerpt: “All organisms, from bacteria to humans, face the daunting task of replicating, packaging and segregating up to two metres (about 6 x 10^9 base pairs) of DNA when each cell divides,” “,,,the segregation machinery must function with far greater accuracy than man-made machines and with an exquisitely soft touch to prevent the DNA strands from breaking.” Bloom and Joglekar talked “machine language” over and over. The cell has specialized machines for all kinds of tasks: segregation machines, packaging machines, elaborate machines, streamlined machines, protein translocation machines, DNA-processing machines, DNA-translocation machines, robust macromolecular machines, accurate machines, ratchets, translocation pumps, mitotic spindles, DNA springs, coupling devices, and more. The authors struggle to “understand how these remarkable machines function with such exquisite accuracy.”
Here is a good article that came out in GN magazine in Nov. 2009:
10 Ways Darwin Got It Wrong
• Information processing, storage and retrieval.
• Artificial languages and their decoding systems.
• Error detection, correction and proofreading devices for quality control.
• Digital data-embedding technology.
• Transportation and distribution systems.
• Assembly processes employing pre-fabrication and modular construction.
• Self-reproducing robotic manufacturing plants.
There simply is no "simple life" on earth as materialism had presumed - even the well known single celled amoeba has the complexity of the city of London and reproduces that complexity in only 20 minutes.
Programming of Life - Biological Computers - video
The inner life of a cell - Harvard University - video
User's guide to the video
Dr. Fazale (Fuz) Rana discusses the beauty and elegance of biochemistry - video
Here are some fairly simple overviews of the cell:
How the Body Works: The Cell - video
Programming of Life - Eukaryotic Cell - video
Map Of Major Metabolic Pathways In A Cell - Diagram
Glycolysis and the Citric Acid Cycle: The Control of Proteins and Pathways - Cornelius Hunter - July 2011
Metabolism: A Cascade of Design
Excerpt: A team of biological and chemical engineers wanted to understand just how robust metabolic pathways are. To gain this insight, the researchers compared how far the errors cascade in pathways found in a variety of single-celled organisms with errors in randomly generated metabolic pathways. They learned that when defects occur in the cell’s metabolic pathways, they cascade much shorter distances than when errors occur in random metabolic routes. Thus, it appears that metabolic pathways in nature are highly optimized and unusually robust, demonstrating that metabolic networks in the protoplasm are not haphazardly arranged but highly organized.
Making the Case for Intelligent Design More Robust
Excerpt: ,,, In other words, metabolic pathways are optimized to withstand inevitable concentration changes of metabolites.
Wonders of the Cell - 2008 - Christopher Wayne Ashcraft - video
Primary Cilium As Cellular 'GPS System' Crucial To Wound Repair
Excerpt: The primary cilium, the solitary, antenna-like structure that studs the outer surfaces of virtually all human cells, orient cells to move in the right direction and at the speed needed to heal wounds, much like a Global Positioning System helps ships navigate to their destinations.
"What we are dealing with is a physiological analogy to the GPS system with a coupled autopilot that coordinates air traffic or tankers on open sea,"
Mere Biochemistry: Cell Division Involves Thousands of Complex, Interacting Parts - September 2010
Astonishingly, actual motors, which far surpass man-made motors in 'engineering parameters', are now being found inside 'simple cells'.
Articles and Videos on Molecular Motors
Michael Behe - Life Reeks Of Design - 2010 - video
Macroevolution, Good Science, and Redeeming Mathematics - Kate Deddens - February 2012
Excerpted quote: As obviously designed as a spaceship or a computer…Evolutionary biologists have been able to pretend to know how complex biological systems originated only because they treated them as black boxes. Now that biochemists have opened the black boxes and see what is inside, they know the Darwinian theory is just a story, not a scientific explanation…
(Phillip E. Johnson, Defeating Darwinism, Downers Grove, IL: InterVarsity Press, 1997, 77-78.)
And in spite of the fact of finding molecular motors permeating the simplest of bacterial life, there are no detailed Darwinian accounts for the evolution of even one such motor or system.
"There are no detailed Darwinian accounts for the evolution of any fundamental biochemical or cellular system only a variety of wishful speculations. It is remarkable that Darwinism is accepted as a satisfactory explanation of such a vast subject."
James Shapiro - Molecular Biologist
The following expert doesn't even hide his very unscientific preconceived philosophical bias against intelligent design,,,
‘We should reject, as a matter of principle, the substitution of intelligent design for the dialogue of chance and necessity,,,
Yet at the same time the same expert readily admits that neo-Darwinism has ZERO evidence for the chance and necessity of material processes producing any cellular system whatsoever,,,
Franklin M. Harold,* 2001. The way of the cell: molecules, organisms and the order of life, Oxford University Press, New York, p. 205.
*Professor Emeritus of Biochemistry, Colorado State University, USA
Michael Behe - No Scientific Literature For Evolution of Any Irreducibly Complex Molecular Machines
“The response I have received from repeating Behe's claim about the evolutionary literature, which simply brings out the point being made implicitly by many others, such as Chris Dutton and so on, is that I obviously have not read the right books. There are, I am sure, evolutionists who have described how the transitions in question could have occurred.” And he continues, “When I ask in which books I can find these discussions, however, I either get no answer or else some titles that, upon examination, do not, in fact, contain the promised accounts. That such accounts exist seems to be something that is widely known, but I have yet to encounter anyone who knows where they exist.”
David Ray Griffin - retired professor of philosophy of religion and theology
What I find very persuasive, to the suggestion that the universe was designed with life in mind, is that physicists find many processes in a cell operate at the 'near optimal' capacities allowed in any physical system:
William Bialek - Professor Of Physics - Princeton University:
Excerpt: "A central theme in my research is an appreciation for how well things “work” in biological systems. It is, after all, some notion of functional behavior that distinguishes life from inanimate matter, and it is a challenge to quantify this functionality in a language that parallels our characterization of other physical systems. Strikingly, when we do this (and there are not so many cases where it has been done!), the performance of biological systems often approaches some limits set by basic physical principles. While it is popular to view biological mechanisms as an historical record of evolutionary and developmental compromises, these observations on functional performance point toward a very different view of life as having selected a set of near optimal mechanisms for its most crucial tasks.,,,The idea of performance near the physical limits crosses many levels of biological organization, from single molecules to cells to perception and learning in the brain,,,,"
Physicists Finding Perfection… in Biology — June 1st, 2009 by Biologic Staff
Excerpt: "biological processes tend to be optimal in cases where this can be tested."
Also of note: There is a fairly substantial economic payoff to be had for presupposing superior 'Intelligent Design' in life, as is testified to by the burgeoning field of Biomimicry:
Biomimicry - Superior Designs That Were Found In Life
Also of note; Sometimes evolutionists will point to the Rubisco Enzyme as an example of 'bad design', but it turns out the Rubisco Enzyme is indeed optimal for the purpose to which it was created for supporting higher life forms above it. Higher life forms that the Rubisco is not aware of, nor cares about.
Rubisco is not an example of unintelligent design - David Tyler
Excerpt: Rubisco's ability to capture CO2 increases with increasing CO2 content in the atmosphere, so its efficiency rises in a CO2-rich atmosphere. However, increasing oxygen levels in the atmosphere will reduce Rubisco's ability to capture carbon. So a negative feedback mechanism exists to regulate the relative concentrations of oxygen and carbon dioxide in the atmosphere. This is another example of design affecting the Earth's ecology,,
From 3.8 to .6 billion years ago photosynthetic bacteria, and sulfate-reducing reducing bacteria, dominated the geologic and fossil record (that’s over 80% of the entire time life has existed on earth). The geologic and fossil record also reveals, during this time, a large portion of these very first bacterial life-forms lived in irreducibly complex, symbiotic, mutually beneficial, colonies called Stromatolites. Stromatolites are rock like structures the photo-synthetic bacteria built up over many years, much like coral reefs are slowly built up over many years by the tiny creatures called corals. Although Stromatolites are not nearly as widespread as they once were, they are still around today in a few sparse places like Shark’s Bay Australia.
Michael Denton - Stromatolites Are Extremely Ancient - video
Shark's Bay - Modern Stromatolites - Pictures
Both the oldest Stromatolite fossils, and the oldest bacterium fossils, found on earth demonstrate an extreme conservation of morphology which, very contrary to evolutionary thought, simply means they have not changed and look very similar to Stromatolites and bacteria of today.
Odd Geometry of Bacteria May Provide New Way to Study Earth's Oldest Fossils - May 2010
Excerpt: Known as stromatolites, the layered rock formations are considered to be the oldest fossils on Earth.,,,That the spacing pattern corresponds to the mats' metabolic period -- and is also seen in ancient rocks -- shows that the same basic physical processes of diffusion and competition seen today were happening billions of years ago,,,
Everything new is old again: Photosynthesis from 3.3 billion years ago - July 2011
Excerpt: The most direct evidence yet for ancient photosynthesis has been uncovered in a fossil of a matted carpet of microbes that lived on a beach 3.3 billion years ago.
Excerpt: These (fossilized bacteria) cells are actually very similar to present day cyanobacteria. This is not only true for an isolated case but many living genera of cyanobacteria can be linked to fossil cyanobacteria. The detail noted in the fossils of this group gives indication of extreme conservation of morphology, more extreme than in other organisms.
Static evolution: is pond scum the same now as billions of years ago?
Excerpt: But what intrigues (paleo-biologist) J. William Schopf most is lack of change. Schopf was struck 30 years ago by the apparent similarities between some 1-billion-year-old fossils of blue-green bacteria and their modern microbial microbial. "They surprisingly looked exactly like modern species," Schopf recalls. Now, after comparing data from throughout the world, Schopf and others have concluded that modern pond scum differs little from the ancient blue-greens. "This similarity in morphology is widespread among fossils of [varying] times," says Schopf. As evidence, he cites the 3,000 such fossils found;
Bacteria: Fossil Record - Ancient Compared to Modern - Picture
Contrary to what materialism would expect, these very first photosynthetic bacteria found in the fossil record, and by chemical analysis of the geological record, are shown to have been preparing the earth for more advanced life to appear from the very start of their existence by producing the necessary oxygen for higher life-forms to exist, and by reducing the greenhouse gases of earth’s early atmosphere. Photosynthetic bacteria slowly removed the carbon dioxide, and built the oxygen up, in the earth’s atmosphere primarily by this following photosynthetic chemical reaction:
6H2O + 6CO2 ----------> C6H12O6+ 6O2
The above chemical equation translates as:
Interestingly, the gradual removal of greenhouse gases corresponded to the gradual 15% increase of light and heat coming from the sun during that time (Ross; Creation as Science). This 'lucky' correspondence of the slow increase of heat from the sun with the same perfectly timed slow removal of greenhouse gases from the earth’s atmosphere was necessary to keep the earth from cascading into either a 'greenhouse earth' or 'snowball earth'.
Why Didn't Early Earth Freeze? The Mystery Deepens - April 2010
This following paper offers methane gas as a possible contributing solution to the faint sun paradox:
Methane-Based Greenhouse and Anti-Greenhouse Events Led to Stable Archean Climate
This following paper shows that the Earth's early atmosphere would have been stripped away by the sun if it had not been finely tuned:
Earth’s Primordial Atmosphere Must Be Fine-Tuned - Hugh Ross
Excerpt: The team then produced calculations demonstrating that the only reasonable scenario for explaining why the Sun’s radiation did not remove Earth’s primordial atmosphere was that the early Earth’s atmosphere was at least a hundred times richer in carbon dioxide.
Thus following study shows that the buildup of oxygen in the atmosphere was more gradual than previously thought;
Rise of Atmospheric Oxygen More Complicated (Gradual) Than Previously Thought - December 2011
Excerpt: Oxygen levels gradually crossed the low atmospheric oxygen threshold for pyrite -- an iron sulfur mineral -- oxidation by 2,500 million years ago and the loss of any mass-independently fractionated sulfur by 2,400 million years ago. Then oxygen levels rose at an ever-increasing rate through the Paleoproterozoic, achieving about 1 percent of the present atmospheric level.,, Initially, any oxygen in the atmosphere, produced by the photosynthesis of single-celled organisms, was used up when sulfur, iron and other elements oxidized. When sufficient oxygen accumulated in the atmosphere, it permeated the groundwater and began oxidizing buried organic material, oxidizing carbon to create carbon dioxide.
More interesting still, the byproducts of the complex biogeochemical processes involved in the oxygen production by these early bacteria are (red banded) iron formations, limestone, marble, gypsum, phosphates, sand, and to a lesser extent, coal, oil and natural gas (note; though some coal, oil and natural gas deposits are from this early era of bacterial life, most coal, oil and natural gas deposits originated on earth after the Cambrian explosion of higher life forms some 540 million years ago). The resources produced by these early photosynthetic bacteria are very useful, one could even very well say 'necessary', for the technologically advanced civilizations of humans today to exist.
The following video is good for seeing just how far back the red banded iron formations really go (3.8 billion years ago). But be warned, Dr. Newman operates from a materialistic worldview and makes many unwarranted allusions of the 'magical' power of evolution to produce photosynthetic bacteria. Although to be fair, she does readily acknowledge the staggering level of complexity being dealt with in photosynthesis, as well as admitting that no one really has a clue how photosynthesis 'evolved'.
Exploring the deep connection between bacteria and rocks - Dianne Newman - MIT lecture video
This following papers back up Dr. Newman's assertion of extremely ancient oxygenic photosynthesis with other lines of evidence:
Ancient Microbes Responsible for Breathing Life Into Ocean 'Deserts' - August 2010
Excerpt: Brian Kendall and Ariel Anbar, together with colleagues at other institutions, show that "oxygen oases" in the surface ocean were sites of significant oxygen production long before the breathing gas began to accumulate in the atmosphere..,, What Kendall discovered was a unique relationship of high rhenium and low molybdenum enrichments in the samples from South Africa, pointing to the presence of dissolved oxygen on the seafloor itself.,,, "It was especially satisfying to see two different geochemical methods -- rhenium and molybdenum abundances and Fe chemistry -- independently tell the same story," Kendall noted. Evidence that the atmosphere contained at most minute amounts of oxygen came from measurements of the relative abundances of sulfur (S) isotopes.
Breathing new life into Earth: New research shows evidence of early oxygen on our planet - August 2011
These following articles explore some of the other complex geochemical processes that are also involved in the forming of the red banded iron, and other precious ore, formations on the ancient earth.
Banded Rocks Reveal Early Earth Conditions, Changes
Rich Ore Deposits Linked to Ancient Atmosphere - Nov. 2009
Interestingly, while the photo-synthetic bacteria were reducing greenhouse gases and producing oxygen, and metal, and minerals, which would all be of benefit to modern man, 'sulfate-reducing' bacteria were also producing their own natural resources which would be very useful to modern man. Sulfate-reducing bacteria helped prepare the earth for advanced life by detoxifying the primeval earth and oceans of poisonous levels of heavy metals while depositing them as relatively inert metal ores. Metal ores which are very useful for modern man, as well as fairly easy for man to extract today (mercury, cadmium, zinc, cobalt, arsenic, chromate, tellurium and copper to name a few). To this day, sulfate-reducing bacteria maintain an essential minimal level of these heavy metals in the ecosystem which are high enough so as to be available to the biological systems of the higher life forms that need them yet low enough so as not to be poisonous to those very same higher life forms.
Bacterial Heavy Metal Detoxification and Resistance Systems:
Excerpt: Bacterial plasmids contain genetic determinants for resistance systems for Hg2+ (and organomercurials), Cd2+, AsO2, AsO43-, CrO4 2-, TeO3 2-, Cu2+, Ag+, Co2+, Pb2+, and other metals of environmental concern.,, Recombinant DNA analysis has been applied to mercury, cadmium, zinc, cobalt, arsenic, chromate, tellurium and copper resistance systems.
The role of bacteria in hydrogeochemistry, metal cycling and ore deposit formation:
Textures of sulfide minerals formed by SRB (sulfate-reducing bacteria) during bioremediation (most notably pyrite and sphalerite) have textures reminiscent of those in certain sediment-hosted ores, supporting the concept that SRB may have been directly involved in forming ore minerals.
Researchers Identify Mysterious Life Forms in the Extreme Deep Sea
Excerpt: Xenophyophores are noteworthy for their size, with individual cells often exceeding 10 centimeters (4 inches), their extreme abundance on the seafloor and their role as hosts for a variety of organisms.,,, The researchers spotted the life forms at depths up to 10,641 meters (6.6 miles) within the Sirena Deep of the Mariana Trench.,,, Scientists say xenophyophores are the largest individual cells in existence. Recent studies indicate that by trapping particles from the water, xenophyophores can concentrate high levels of lead, uranium and mercury,,,
Man has only recently caught on to harnessing the ancient detoxification ability of bacteria to cleanup his accidental toxic spills, as well as his toxic waste, from industry:
What is Bioremediation? - video
Metal-mining bacteria are green chemists - Sept. 2010
Further note:
Arsenic removal: research on bioremediation using arsenite-eating bacteria
As a side note to this, recently bacteria surprised scientists by their ability to quickly detoxify the millions of barrels of oil spilled in the Gulf of Mexico:
Mighty oil-eating microbes help clean up the Gulf - July 2010
Excerpt: Where is all the oil? Nearly two weeks after BP finally capped the biggest oil spill in U.S. history, the oil slicks that once spread across thousands of miles of the Gulf of Mexico have largely disappeared. Nor has much oil washed up on the sandy beaches and marshes along the Louisiana coast.,,, The lesson from past spills is that the lion’s share of the cleanup work is done by nature in the form of oil-eating bacteria and fungi. (Thank God)
Deepwater Oil Plume in Gulf Degraded by Microbes, Study Shows
Excerpt: An intensive study by scientists with the Lawrence Berkeley National Laboratory (Berkeley Lab) found that microbial activity degrades oil much faster than anticipated. This degradation appears to take place without a significant level of oxygen depletion.
Methane Gas Concentrations in Gulf of Mexico Quickly Returned to Near-Normal Levels, Surprising Researchers - January 2011
Excerpt: Calling the results "extremely surprising", researchers report that methane gas concentrations in the Gulf of Mexico have returned to near normal levels only months after a massive release occurred following the Deepwater Horizon oil rig explosion.
Microbes Consumed Oil in Gulf Slick at Unexpected Rates, Study Finds - August 2011
Excerpt: "Our study shows that the dynamic microbial community of the Gulf of Mexico supported remarkable rates of oil respiration, despite a dearth of dissolved nutrients," the researchers said. Edwards added that the results suggest "that microbes had the metabolic potential to break down a large portion of hydrocarbons and keep up with the flow rate from the wellhead."
Here are a couple of sites showing the crucial link of a minimal levels of metals to biological life:
Transitional Metals And Cytochrome C oxidase - Michael Denton - Nature's Destiny
Proteins prove their metal - July 2010
Excerpt: ‘Nearly half of all enzymes require metals to function in catalysing biological reactions,’ Kylie Vincent, of Oxford University’s Department of Chemistry tells us. ‘Both the metal and the surrounding protein are crucial in tuning the reactivity of metal catalytic centres in enzymes.' These ‘metal centres’ are hives of industry at a microscopic scale, with metals often held in a special protein environment where they may be assembled into intricate clusters inside proteins.
Your Copper Pipes - November 2011
Excerpt: In the fascinating field of ‘metals in biology’, by virtue of direct interactions with amino acid side-chains within polypeptide chains, metals play unique and critical roles in biology, promoting structures and chemistries that would not otherwise be available to proteins alone.,,, ATP7A is also important for the delivery of copper to nascent proteins in the Golgi apparatus. In mammals, ATP7A is expressed in many tissues except the liver,
As well, in conjunction with bacteria, geological processes helped detoxify the earth of dangerous levels of metal:
The Concentration of Metals for Humanity's Benefit:
Excerpt: They demonstrated that hydrothermal fluid flow could enrich the concentration of metals like zinc, lead, and copper by at least a factor of a thousand. They also showed that ore deposits formed by hydrothermal fluid flows at or above these concentration levels exist throughout Earth's crust. The necessary just-right precipitation conditions needed to yield such high concentrations demand extraordinary fine-tuning. That such ore deposits are common in Earth's crust strongly suggests supernatural design.
And on top of the fact that poisonous heavy metals on the primordial earth were brought into 'life-enabling' balance by complex biogeochemical processes, there was also an explosion of minerals on earth which were a result of that first life, as well as being a result of each subsequent 'Big Bang of life' there afterwards.
The Creation of Minerals:
Excerpt: Thanks to the way life was introduced on Earth, the early 250 mineral species have exploded to the present 4,300 known mineral species. And because of this abundance, humans possessed all the necessary mineral resources to easily launch and sustain global, high-technology civilization.
"Today there are about 4,400 known minerals - more than two-thirds of which came into being only because of the way life changed the planet. Some of them were created exclusively by living organisms" - Bob Hazen - Smithsonian - Oct. 2010, pg. 54
To put it mildly, this minimization of poisonous elements, and 'explosion' of useful minerals, is strong evidence for Intelligently Designed terra-forming of the earth that 'just so happens' to be of great benefit to modern man.
Clearly many, if not all, of these metal ores and minerals laid down by these sulfate-reducing bacteria, as well as laid down by the biogeochemistry of more complex life, as well as laid down by finely-tuned geological conditions throughout the early history of the earth, have many unique properties which are crucial for technologically advanced life, and are thus indispensable to man’s rise above the stone age to the advanced 'space-age' technology of modern civilization.
Minerals and Their Uses
Mineral Uses In Industry
Inventions: Elements and Compounds - video
Bombardment Makes Civilization Possible
What is the common thread among the following items: pacemakers, spark plugs, fountain pens and compass bearings? Give up? All of them currently use (or used in early versions) the two densest elements, osmium and iridium. These two elements play important roles in technological advancements. However, if certain special events hadn't occurred early in Earth's history, no osmium or iridium would exist near the planet's surface.
As well, many types of bacteria in earth's early history lived in what are called cryptogamic colonies on the earth's primeval continents. These colonies dramatically transformed the primeval land into stable nutrient filled soils which were receptive for future advanced vegetation to appear.
Land organisms from Cambrian found in soil layer under the soil - November 2011
Excerpt: Other evidence of life on land includes quilted spheroids (Erytholus globosus gen. et sp. nov.) and thallose impressions (Farghera sp. indet.), which may have been slime moulds and lichens, respectively. These distinctive fossils in Cambrian palaeosols represent communities comparable with modern biological soil crusts.
Cryptobiotic Soils: Holding the Place in Place
Excerpt: Cryptobiotic soil crusts, consisting of soil cyanobacteria, lichens and mosses, play an important ecological roles,,, Cryptobiotic crusts increase the stability of otherwise easily eroded soils, increase water infiltration in regions that receive little precipitation, and increase fertility in soils often limited in essential nutrients such as nitrogen and carbon (Harper and Marble, 1988; Johansen, 1993; Metting, 1991; Belnap and Gardner, 1993; Belnap, 1994; Williams et al., 1995).
Bacterial 'Ropes' Tie Down Shifting Southwest
Excerpt: In the desert, the initial stabilization of topsoil by rope-builders promotes colonization by a multitude of other microbes. From their interwoven relationships arise complex communities known as "biological soil crusts," important ecological components in the fertility and sustainability of arid ecosystems.
Excerpt: When moistened, cyanobacteria become active, moving through the soil and leaving a trail of sticky material behind. The sheath material sticks to surfaces such as rock or soil particles, forming an intricate web of fibers throughout the soil. In this way, loose soil particles are joined together, and an otherwise unstable surface becomes very resistant to both wind and water erosion.
Moreover, worms, in addition to their critical role for soil aeration, are also found to detoxify the soils of poisonous heavy metals:
The worm that turned on heavy metal - December 2010
Excerpt: The team has carried out two feasibility studies on the use of worms in treating waste. The team first used compost produced by worms, vermicompost, as a successful adsorbent substrate for remediation of wastewater contaminated with the metals nickel, chromium, vanadium and lead. The second used earthworms directly for remediation of arsenic and mercury present in landfill soils and demonstrated an efficiency of 42 to 72% in approximately two weeks for arsenic removal and 7.5 to 30.2% for mercury removal in the same time period.
Materialism simply has no coherent answers for why these different bacterial types, biogeochemical processes, and worms etc.., would start working in precise concert with each other preparing the earth for future life to appear from the very start of their first appearance on earth.
In further related note, several different types of bacteria are found to be integral for the nitrogen fixation cycle required for plants:
nitrogen fixation - illustration
nitrogen fixation - video:
Just how crucial, and finely tuned, the nitrogen cycle is is revealed by this following study:
Engineering and Science Magazine - Caltech - March 2010
Excerpt: “Without these microbes, the planet would run out of biologically available nitrogen in less than a month,” Realizations like this are stimulating a flourishing field of “geobiology” – the study of relationships between life and the earth. One member of the Caltech team commented, “If all bacteria and archaea just stopped functioning, life on Earth would come to an abrupt halt.” Microbes are key players in earth’s nutrient cycles. Dr. Orphan added, “...every fifth breath you take, thank a microbe.”
Planet's Nitrogen Cycle Overturned - Oct. 2009
Excerpt: "Ammonia is a waste product that can be toxic to animals.,,, archaea can scavenge nitrogen-containing ammonia in the most barren environments of the deep sea, solving a long-running mystery of how the microorganisms can survive in that environment. Archaea therefore not only play a role, but are central to the planetary nitrogen cycles on which all life depends.,,,the organism can survive on a mere whiff of ammonia – 10 nanomolar concentration, equivalent to a teaspoon of ammonia salt in 10 million gallons of water."
Novel Nitrogen Uptake Design - Oct. 2009
Excerpt: The exceptionality of the snow roots and their nitrogen-capturing machinery, their extraordinarily complex designs, and their optimal efficiency qualifies them as evidence, not for evolution, but rather for supernatural design.
Arbuscular Mycorrhizal Fungi Design
Excerpt: The mutual relationship between vascular plants (flowering plants) and arbuscular mycorrihizal fungi (AMF) is the most prevalent known plant symbiosis. Vascular plants provide sites all along their root systems where colonies of AMF can assemble and feed on the nutrients supplied by the plants. In return, the AMF supply phosphorus, nitrogen, and carbon in molecular forms that the vascular plants can readily assimilate. The (overwhelming) challenge for evolutionary models is how to explain by natural means the simultaneous appearance of both vascular plants and AMF.
Of somewhat related interest to this topic, it is found that colonies of bacteria have some mysterious way of communicating essential information very quickly amongst themselves:
Electrical Communication in Bacteria - August 2010
Excerpt: These responses occurred too quickly for any sort of chemical exchange or molecular process such as osmosis, says Nielsen. The most plausible option, his team reports in the 25 February issue of Nature, is that the bacteria are somehow communicating electrically by transmitting electrons back and forth. How exactly they do this is unclear,
Moreover, the overall principle of long term balanced symbiosis, which is in fact what we have with the overall biogeochemical cycles of the earth, is a very anti-random chance fact which pervades the entire ecology of our planet and points powerfully to the intentional craftsmanship of a Designer:
Intelligent Design - Symbiosis and the Golden Ratio - video
God's Creation - Symbiotic (Cooperative) Relationships - video
Some Trees 'Farm' Bacteria to Help Supply Nutrients - July 2010
Since oxygen readily reacts and bonds with many of the solid elements making up the earth itself, and since the slow process of tectonic activity controls the turnover of the earth's crust, it took photosynthetic bacteria a few billion years before the earth’s crust was saturated with enough oxygen to allow a sufficient level of oxygen to be built up in the atmosphere as to allow higher life:
New Wrinkle In Ancient Ocean Chemistry - Oct. 2009
Excerpt: "Our data point to oxygen-producing photosynthesis long before concentrations of oxygen in the atmosphere were even a tiny fraction of what they are today, suggesting that oxygen-consuming chemical reactions were offsetting much of the production,"
Increases in Oxygen Prepare Earth for Complex Life
Excerpt: We at RTB argue that any mechanism exhibiting complex, integrated actions that bring about a specified outcome is designed. Studies of Earth’s history reveal highly orchestrated interplay between astronomical, geological, biological, atmospheric, and chemical processes that transform the planet from an uninhabitable wasteland to a place teeming with advanced life. The implications of design are overwhelming.
As well, Plate tectonics are also shown to be finely-tuned and thus tied to the 'terra forming', intelligent design, perspective in this following paper:
Evidence of Early Plate Tectonics
Excerpt: Plate tectonics plays a critical role in keeping the Earth’s temperature constant during the Sun’s significant brightness changes. Almost four billion years ago, the Sun was 30 percent dimmer than it is today, and it has steadily increased its light output over the intervening period. This steady increase would have boiled Earth’s oceans away without plate tectonics moderating the greenhouse gas content of the atmosphere.
Once sufficient oxygenation of the earth's mantle and atmosphere was finally accomplished, higher life forms could finally be introduced on earth. Moreover, scientists find the rise in oxygen percentages in the geologic record to correspond exactly to the sudden appearance of large animals in the fossil record that depend on those particular percentages of oxygen to be present. The geologic record shows a 10% oxygen level at the time of the Cambrian explosion of higher life-forms in the fossil record some 540 million years ago. The geologic record also shows a strange and very quick rise from the 17% oxygen level, of 50 million years ago, to a 23% oxygen level 40 million years ago (Falkowski 2005, 2008). This strange rise in oxygen levels corresponds exactly to the abrupt appearance of large mammals in the fossil record who depend on those high oxygen levels. Interestingly, for the last 10 million years the oxygen percentage has been holding steady around 21%. 21% happens to be a 'very comfortable' percentage for humans to exist. If the oxygen level was only a few percentage lower, large mammals would become severely hampered in their ability to metabolize energy; if only a few percentage higher, there would be uncontrollable outbreaks of fire across the land (Denton; Nature's Destiny).
Composition Of Atmosphere - Pie Chart and Percentages:
The interplay of the biogeochemical (life and earth) processes that produce this balanced. life enabling, oxygen rich, atmosphere are very complex:
The Life and Death of Oxygen - 2008
Excerpt: “The balance between burial of organic matter and its oxidation appears to have been tightly controlled over the past 500 million years.” “The presence of O2 in the atmosphere requires an imbalance between oxygenic photosynthesis and aerobic respiration on time scales of millions of years hence, to generate an oxidized atmosphere, more organic matter must be buried (by tectonic activity) than respired.” - Paul Falkowski
The Oxygen and Carbon Dioxide Cycle - video
This following article and video clearly indicate that the life sustaining balanced symbiosis of the atmosphere is far more robust, as to tolerating man's industrial activities, than Global Warming alarmist would have us believe:
Earth's Capacity To Absorb CO2 Much Greater Than Expected: Nov. 2009
Excerpt: New data show that the balance between the airborne and the absorbed fraction of carbon dioxide has stayed approximately constant since 1850, despite emissions of carbon dioxide having risen from about 2 billion tons a year in 1850 to 35 billion tons a year now. This suggests that terrestrial ecosystems and the oceans have a much greater capacity to absorb CO2 than had been previously expected.
A Really Inconvenient Truth!
Global Warming Apocalypse? No! - video
Because of this basic chemical requirement of complex photosynthetic bacterial life establishing and helping maintain the proper oxygen levels necessary for higher life forms on any earth-like planet, this gives us further reason to strongly believe the earth is extremely unique in its ability to support intelligent life in this universe. What is more remarkable is that this balance for the atmosphere is maintained through complex symbiotic relationships with other bacteria, all of which are intertwined in very complex geochemical processes. This is irreducible complexity stacked on top of irreducible complexity!!! All of these studies of early life, and processes, on early earth fall directly in line with the anthropic hypothesis and have no rational explanation, from any materialistic theory based on blind chance, as to why all the first types of bacterial life found in the fossil record would suddenly, from the very start of their appearance on earth, start working in precise harmony with each other, and with geology, to prepare the earth for future life to appear. Nor can materialism explain why once these complex bacterial-geological processes had helped prepare the earth for higher life forms, they continue to work in precise harmony with each other to help maintain the proper balanced conditions that are of primary benefit for the higher life that is above them:
The Microbial Engines That Drive Earth’s Biogeochemical Cycles - Falkowski 2008
Excerpt: Microbial life can easily live without us; we, however, cannot survive without the global catalysis and environmental transformations it provides. - Paul G. Falkowski - Professor Geological Sciences - Rutgers
Biologically mediated cycles for hydrogen, carbon, nitrogen, oxygen, sulfur, and iron - image of interdependent 'biogeochemical' web
Interestingly, when Dr. Ross factors in the probability for 'simple' bacterial life randomly happening in this universe, which is necessary for more advanced life to exist on any planet in the first place, the probability for a planet which can host life explodes into gargantuan proportions:
Does the Probability for ETI = 1?
Excerpt: In another book I wrote with Fuz, Who Was Adam?, we describe calculations done by evolutionary biologist Francisco Ayala and by astrophysicists John Barrow, Brandon Carter, and Frank Tipler for the probability that a bacterium would evolve under ideal natural conditions—given the presumption that the mechanisms for natural biological evolution are both effective and rapid. They determine that probability to be no more than 10-24,000,000.
The bottom line is that rather than the probability for extraterrestrial intelligent life being 1 as Aczel claims, very conservatively from a naturalistic perspective it is much less than 10^500 + 22 -1054 -100,000,000,000 -24,000,000. That is, it is less than 10-100,024,000,532. In longhand notation it would be 0.00 … 001 with 100,024,000,531 zeros (100 billion, 24 million, 5 hundred and thirty-one zeros) between the decimal point and the 1. That longhand notation of the probability would fill over 20,000 complete Bibles. (As far as scientific calculations are concerned, determining how close a probability is to zero, only Penrose's 1 in 10^10^123 calculation, for the initial phase-space of the universe, is closer)
Dr. Ross points out that the extremely long amount of time it took to prepare a suitable place for humans to exist in this universe, for the relatively short period of time that we can exist on this planet, is actually a point of evidence that argues strongly for Theism:
Anthropic Principle: A Precise Plan for Humanity By Hugh Ross
Excerpt: Brandon Carter, the British mathematician who coined the term “anthropic principle” (1974), noted the strange inequity of a universe that spends about 15 billion years “preparing” for the existence of a creature that has the potential to survive no more than 10 million years (optimistically).,, Carter and (later) astrophysicists John Barrow and Frank Tipler demonstrated that the inequality exists for virtually any conceivable intelligent species under any conceivable life-support conditions. Roughly 15 billion years represents a minimum preparation time for advanced life: 11 billion toward formation of a stable planetary system, one with the right chemical and physical conditions for primitive life, and four billion more years toward preparation of a planet within that system, one richly layered with the biodeposits necessary for civilized intelligent life. Even this long time and convergence of “just right” conditions reflect miraculous efficiency.
Moreover the physical and biological conditions necessary to support an intelligent civilized species do not last indefinitely. They are subject to continuous change: the Sun continues to brighten, Earth’s rotation period lengthens, Earth’s plate tectonic activity declines, and Earth’s atmospheric composition varies. In just 10 million years or less, Earth will lose its ability to sustain human life. In fact, this estimate of the human habitability time window may be grossly optimistic. In all likelihood, a nearby supernova eruption, a climatic perturbation, a social or environmental upheaval, or the genetic accumulation of negative mutations will doom the species to extinction sometime sooner than twenty thousand years from now.
At least one scientist is far more pessimistic about the 'natural' future lifespan of the human race than 20,000 years:
Humans will be extinct in 100 years says eminent scientist - June 2010
This following study, of a vital enzyme found in all life, conforms to the notion of 'terraforming' the toxic primordial earth, as well I would argue that the enzyme conforms to the principle of 'Genetic Entropy' since the enzyme was reconstructed from the data of many different 'derived' enzymes;
Enzymes Complex from the Get-go
Excerpt: “Given the ancient origin of the reconstructed thioredoxin enzymes (a vital enzyme found in all living cells), with some of them predating the buildup of atmospheric oxygen, we expected their catalytic chemistry to be simple," said Fernandez. "Instead we found that enzymes that existed in the Precambrian era up to four billion years ago possessed many of the same chemical mechanisms observed in their modern-day relatives.”,, Further examination of the ancient enzymes revealed some striking features: The enzymes were highly resistant to temperature and were active in more acidic conditions. The findings suggest that the species hosting these ancient enzymes thrived in very hot environments that since then have progressively cooled down, and that they lived in oceans that were more acidic than today.
Though it is impossible to reconstruct the DNA of the earliest bacteria fossils, scientists find in the fossil record, and compare them to their descendants of today, there are many ancient bacteria spores recovered and 'revived' from salt crystals and amber crystals which have been compared to their living descendants of today. Some bacterium spores, in salt crystals, dating back as far as 250 million years have been revived, had their DNA sequenced, and compared to their offspring of today (Vreeland RH, 2000 Nature). To the disbelieving shock of many evolutionary scientists, both ancient and modern bacteria were found to have the almost same exact DNA sequence.
The Paradox of the "Ancient" (250 Million Year Old) Bacterium Which Contains "Modern" Protein-Coding Genes:
“Almost without exception, bacteria isolated from ancient material have proven to closely resemble modern bacteria at both morphological and molecular levels.” Heather Maughan*, C. William Birky Jr., Wayne L. Nicholson, William D. Rosenzweig§ and Russell H. Vreeland ;
Evolutionists were so disbelieving at this stunning lack of change, far less change than was expected from the neo-Darwinian view, that they insisted the stunning similarity was due to modern contamination in Vreeland's experiment. Yet the following study laid that objection to rest by verifying that Dr. Vreeland's methodology for extracting ancient DNA was solid and was not introducing contamination because the DNA sequences this time around were completely unique:
World’s Oldest Known DNA Discovered (419 million years old) - Dec. 2009
Excerpt: But the DNA was so similar to that of modern microbes that many scientists believed the samples had been contaminated. Not so this time around. A team of researchers led by Jong Soo Park of Dalhousie University in Halifax, Canada, found six segments of identical DNA that have never been seen before by science. “We went back and collected DNA sequences from all known halophilic bacteria and compared them to what we had,” Russell Vreeland of West Chester University in Pennsylvania said. “These six pieces were unique",,,
These following studies, by Dr. Cano on ancient bacteria, preceded Dr. Vreeland's work:
“Raul J. Cano and Monica K. Borucki discovered the bacteria preserved within the abdomens of insects encased in pieces of amber. In the last 4 years, they have revived more than 1,000 types of bacteria and microorganisms — some dating back as far as 135 million years ago, during the age of the dinosaurs.,,, In October 2000, another research group used many of the techniques developed by Cano’s lab to revive 250-million-year-old bacteria from spores trapped in salt crystals. With this additional evidence, it now seems that the “impossible” is true.”
Dr. Cano and his former graduate student Dr. Monica K. Borucki said that they had found slight but significant differences between the DNA of the ancient, 25-40 million year old amber-sealed Bacillus sphaericus and that of its modern counterpart,(thus ruling out that it is a modern contaminant, yet at the same time confounding materialists, since the change is not nearly as great as evolution's 'genetic drift' theory requires.)
30-Million-Year Sleep: Germ Is Declared Alive
Dr. Cano's work on ancient bacteria came in for intense scrutiny since it did not conform to Darwinian predictions, and since people found it hard to believe you could revive something that was millions of years old. Yet Dr. Cano has been vindicated:
“After the onslaught of publicity and worldwide attention (and scrutiny) after the publication of our discovery in Science, there have been, as expected, a considerable number of challenges to our claims, but in this case, the scientific method has smiled on us. There have been at least three independent verifications of the isolation of a living microorganism from amber."
In reply to a personal e-mail from myself, Dr. Cano commented on the 'Fitness Test' I had asked him about:
Dr. Cano stated: "We performed such a test, a long time ago, using a panel of substrates (the old gram positive biolog panel) on B. sphaericus. From the results we surmised that the putative "ancient" B. sphaericus isolate was capable of utilizing a broader scope of substrates. Additionally, we looked at the fatty acid profile and here, again, the profiles were similar but more diverse in the amber isolate.":
Fitness test which compared ancient bacteria to its modern day descendants, RJ Cano and MK Borucki
Thus, the most solid evidence available for the most ancient DNA scientists are able to find does not support evolution happening on the molecular level of bacteria. In fact, according to the fitness test of Dr. Cano, the change witnessed in bacteria conforms to the exact opposite, Genetic Entropy; a loss of functional information/complexity, since fewer substrates and fatty acids are utilized by the modern strains. Considering the intricate level of protein machinery it takes to utilize individual molecules within a substrate, we are talking an impressive loss of protein complexity, and thus loss of functional information, from the ancient amber sealed bacteria. Here is a revisit to the video of the 'Fitness Test' that evolutionary processes have NEVER passed as for a demonstration of the generation of functional complexity/information above what was already present in a parent species bacteria:
Is Antibiotic Resistance evidence for evolution? - 'Fitness Test' - video
According to prevailing evolutionary dogma, there 'HAS' to be 'major genetic drift' to the DNA of modern bacteria from 250 million years ago, even though the morphology (shape) of the bacteria can be expected to remain exactly the same. In spite of their preconceived materialistic bias, scientists find there is no significant genetic drift from the ancient DNA. In fact recent research, with bacteria which are alive right now, has also severely weakened the 'genetic drift' argument of evolutionists:
The consequences of genetic drift for bacterial genome complexity - Howard Ochman - 2009
Excerpt: The increased availability of sequenced bacterial genomes allows application of an alternative estimator of drift, the genome-wide ratio of replacement to silent substitutions in protein-coding sequences. This ratio, which reflects the action of purifying selection across the entire genome, shows a strong inverse relationship with genome size, indicating that drift promotes genome reduction in bacteria.
I find it interesting that the materialistic theory of evolution expects there to be a significant amount of genetic drift from the DNA of ancient bacteria to its modern descendants, while the morphology can be allowed to remain exactly the same with its descendants. Alas for the atheistic materialist once again, the hard evidence of ancient DNA has fell in line with the anthropic hypothesis.
Many times a materialist will offer what he considers conclusive proof for evolution by showing bacteria that have become resistant to a certain antibiotic such as penicillin. Yet upon close inspection, once again this 'conclusive proof' dissolves away. All observed instances of 'beneficial' adaptations of bacteria to new antibiotics have been shown to be the result of degradation of preexisting molecular abilities:
List Of Degraded Molecular Abilities Of Antibiotic Resistant Bacteria:
Moreover it is shown that nothing new has evolved since ancient bacteria have the very same ability to developed resistance to antibiotics as modern strains do:
Antibiotic resistance is ancient - September 2011
Evolution - Tested And Falsified - Don Patton - video
The following is a reflection on the true implications of the 'evolution' of bacteria becoming resistant to multiple antibiotics that has many people concerned as to its danger:
Superbugs not super after all
Excerpt: It is precisely because the mutations which give rise to resistance are in some form or another defects, that so-called supergerms are not really ‘super’ at all—they are actually rather ‘wimpy’ compared to their close cousins.
MRSA - Supergerms Do they prove evolution?
Are You Too Clean? - New Studies Suggest Getting A Little Dirty May Be Just What The Doctor Ordered - December 2010
For materialists to conclusively prove evolution they would have to violate the principle of Genetic Entropy by clearly demonstrating a gain of functional information bits (Fits) over the parent species (Abel - Null-Hypothesis) in the fitness test which I've listed previously. Materialists have not done so, nor will they ever. The staggering interrelated complexity for the integrated whole of a distinct 'kind' of life-form simply will not allow the generation of complex functional information above the parent species to happen in its genome by chance alone. (Sanford, Genetic Entropy 2005)
This following site highlights the problem that the integrated complexity of a genome presents for Neo-Darwinism mechanism of random mutation:
Poly-Functional Complexity equals Poly-Constrained Complexity
This following quote reiterates the principle that material processes cannot generate functional information:
“There is no known law of nature, no known process and no known sequence of events which can cause information to originate by itself in matter.” Werner Gitt, “In the Beginning was Information”, 1997, p. 106. (Dr. Gitt was the Director at the German Federal Institute of Physics and Technology) His challenge to scientifically falsify this statement has remained unanswered since first published.
Some materialists believe they have conclusive proof for evolution because bacteria can quickly adapt to detoxify new man-made materials, such as nylon, even though it is, once again, just a minor variation within kind, i.e. though the bacteria adapt they still do not demonstrate a gain in fitness over the parent strain once the nylon is consumed (Genetic Entropy). I’m not nearly as impressed with their 'stunning proof' as they think I should be. In fact recent research has shown the correct explanation for the nylon-eating enzyme, produced on the plasmids, seems to be a special mechanism which recombines parts of the genes in the plasmids in a way that is non-random. This is shown by the absence of stop codons, which would be generated if the variation were truly random. The 'clockwork' repeatability of the adaptation clearly indicates a designed mechanism that fits perfectly within the limited 'variation within kind' model of Theism, and stays well within the principle of Genetic Entropy since the parent strain is still more fit for survival once the nylon is consumed from the environment. (Answers In Genesis)
Nylon Degradation – Analysis of Genetic Entropy
Excerpt: At the phenotypic level, the appearance of nylon degrading bacteria would seem to involve “evolution” of new enzymes and transport systems. However, further molecular analysis of the bacterial transformation reveals mutations resulting in degeneration of pre-existing systems.
Why Scientists Should NOT Dismiss Intelligent Design - William Dembski
Excerpt: "the nylonase enzyme seems “pre-designed” in the sense that the original DNA sequence was preadapted for frame-shift mutations to occur without destroying the protein-coding potential of the original gene. Indeed, this protein sequence seems designed to be specifically adaptable to novel functions."
Though Darwinists love to claim this as a 'new' protein. The simple fact is that is the same exact enzyme/protein, esterase, with only a minor variation on its previous enzymatic activity:
“Mutational analysis of 6-aminohexanoate-dimer hydrolase:
Relationship between nylon oligomer hydrolytic and esterolytic activities”
Excerpt: “Based upon the following findings, we propose that the nylon oligomer hydrolase has newly evolved through amino acid substitutions in the catalytic cleft of a pre-existing esterase with the b-lactamase-fold”.
Taku Ohkia, Yoshiaki Wakitania, Masahiro Takeoa, Kengo Yasuhiraa, Naoki Shibatab,
Yoshiki Higuchib, Seiji Negoroa FEBS Letters 580 (2006) 5054–5058
In fact it is now strongly suspected that all changes in the genome, which are deemed to be 'beneficial', are now found to be 'designed' changes that still stay within the overriding principle of Genetic Entropy:
Revisiting The Central Dogma (Of Evolution) In The 21st Century - James Shapiro - 2008
Excerpt: Genetic change is almost always the result of cellular action on the genome (not replication errors). (of interest - 12 methods of 'epigenetic' information transfer in the cell are noted in the paper)
Scientists Discover What Makes The Same Type Of Cells Different - Oct. 2009
Excerpt: Until now, cell variability was simply called “noise”, implying statistical random distribution. However, the results of the study now show that the different reactions are not random, but that certain causes (environmental clues) lead to predictable distribution patterns,,,
Bacteria 'Invest' (Designed) Wisely to Survive Uncertain Times, Scientists Report - Dec. 2009
Excerpt: Essentially, variability of bacterial cells appears to match the variability in the environment, thereby increasing the chances of bacterial survival,
De Novo Genes: - Cornelius Hunter - Nov. 2009
Excerpt: Cells have remarkable adaptation capabilities. They can precisely adjust which segments of the genome are copied for use in the cell. They can edit and regulate those DNA copies according to their needs. And they can even modify the DNA itself, such as with adaptive mutations,,,,One apparent de novo gene is T-urf13 which was found in certain varieties of corn.
The secrets of intelligence lie within a single cell - April 2010
Excerpt: Yet something amazing is happening here: because the damage to the Antithamnion filament is unforeseeable, the organism faces a situation for which it has not been able to adapt, and is therefore unable to call upon inbuilt responses. It has to use some sort of problem-solving ingenuity instead.
This overriding truth of never being able to violate the Genetic Entropy of poly-constrained information by natural means applies to the 'non-living realm' of viruses, such as bird flu and HIV, as well:
Ryan Lucas Kitner, Ph.D. 2006. - Bird Flu
Excerpt: influenza viruses do possess a certain degree of variability; however, the amount of genetic information which a virus can carry is vastly limited, and so are the changes which can be made to its genome before it can no longer function.
As well, the virus is far more complex than many people have ever imagined, as this following video clearly points out:
Virus - Assembly Of A Nano-Machine - video
Though most people think of viruses as being very harmful to humans, the fact is that the Bacteriophage (Bacteria Eater) virus, in the preceding video, is actually a very beneficial virus to man for it is one of the main mechanisms found in nature by which bacteria populations are kept in check so as to keep them from 'overpopulating' the world. If bacteria did not have such mechanisms keeping them in check, the effect on the environment of the earth would soon throw the entire ecology of the planet into chaos, thus making the earth inhospitable for higher life forms.
Michael Behe defends the one 'overlooked' protein/protein binding site generated by the HIV virus, that Abbie Smith and Ian Musgrave had found, by pointing out it is well within the 2 binding site limit he set in "The Edge Of Evolution" on this following site:
Response to Ian Musgrave's "Open Letter to Dr. Michael Behe," Part 4
"Yes, one overlooked protein-protein interaction developed, leading to a leaky cell membrane --- not something to crow about after 10^20 replications and a greatly enhanced mutation rate."
An information-gaining mutation in HIV? NO!
In fact, I followed this debate very closely and it turns out the trivial gain of just one protein-protein binding site being generated for the non-living HIV virus, that the evolutionists were 'crowing' about, came at a staggering loss of complexity for the living host it invaded (People) with just that one trivial gain of a 'leaky cell membrane' in binding site complexity. Thus the 'evolution' of the virus clearly stayed within the principle of Genetic Entropy since far more functional complexity was lost by the living human cells it invaded than was ever gained by the non-living HIV virus. A non-living virus which depends on those human cells to replicate in the first place. Moreover, while learning HIV is a 'mutational powerhouse' which greatly outclasses the 'mutational firepower' of the entire spectrum of higher life-forms combined for millions of years, and about the devastating effect HIV has on humans with just that one trivial binding site being generated, I realized if evolution were actually the truth about how life came to be on Earth then the only 'life' that would be around would be extremely small organisms with the highest replication rate, and with the most mutational firepower, since only they would be the fittest to survive in the dog eat dog world where blind pitiless evolution rules and only the 'fittest' are allowed to survive.
Dr. Meyer makes a interesting comment here about simple self-replicating molecules which got simpler very quickly by neo-Darwinian processes;
In a classic experiment, Spiegelman in 1967 showed what happens to a molecular replicating system in a test tube, without any cellular organization around it. … these initial templates did not stay the same; they were not accurately copied. They got shorter and shorter until they reached the minimal size compatible with the sequence retaining self-copying properties. And as they got shorter, the copying process went faster. - Stephen Meyer - The Nature of Nature: Examining the Role of Naturalism in Science (Wilmington, DE: ISI Books, 2011), p. 313–18.
This following link has a nice overview of the self-replicating experiment in 1967 by Spiegelman in which the replicating molecule got simpler;
Origins of Life – Freeman Dyson – page 74
Here is a defence of Dr. Behe's binding site limit from the T-urf13 gene/protein that was argued, by Darwinists, to be a 'new' gene/protein that refuted Behe's limit:
How Arthur Hunt Fails To Refute Behe (T-URF13)- Jonathan M - February 2011
On the non-evolution of Irreducible Complexity – How Arthur Hunt Fails To Refute Behe
Excerpt: furthermore, T-urf 13 involves a kind of degradation of maize. In the case of the Texas maize–hence the T—the T-urf 13 was located by researchers because it was there that the toxin that decimated the corn grown in Texas in the late 60′s attached itself. So the “manufacturing” of this “de novo” gene proved to make the maize less fit. This is in keeping with Behe’s latest findings.
I would also like to point out scientists have never changed any one type of single cell organism, bacteria, or virus, into any other type of single cell organism, bacteria, or virus, despite years of exhaustive experimentation trying to change them. In fact, it is commonly known the further scientists deviate any particular single cell organism, bacteria, or virus, type from its original state, the more unfit for survival the manipulated population will quickly become (Genetic Entropy). As former president of the French Academy of Sciences Pierre P. Grasse has stated:
“What is the use of their unceasing mutations, if they do not change? In sum, the mutations of bacteria and viruses are merely hereditary fluctuations around a median position; a swing to the right, a swing to the left, but no final evolutionary effect.”
As well, to reiterate what was said in another article I listed previously, bacteria that are resistant to multiple antibiotics (MRSA) are actually superwimps instead of supergerms. This is because the multiple deleterious mutations they have incurred, from their interaction with different antibiotics, make them dramatically less fit for survival in the wild than their non-mutated cousins:
Superbugs not super after all
Excerpt: It is precisely because the mutations which give rise to resistance are in some form or another defects, that so-called supergerms are not really ‘super’ at all—they are actually rather ‘wimpy’ compared to their close cousins. When I was finally discharged from hospital, I still had a strain of supergerm colonizing my body. Nothing had been able to get rid of it, after months in hospital. However, I was told that all I had to do on going home was to ‘get outdoors a lot, occasionally even roll in the dirt, and wait.’ In less than two weeks of this advice, the supergerms were gone. Why? The reason is that supergerms are actually defective in other ways, as explained. Therefore, when they are forced to compete with the ordinary bacteria which normally thrive on our skin, they do not have a chance. They thrive in hospital because all the antibiotics and antiseptics being used there keep wiping out the ordinary bacteria which would normally outcompete, wipe out and otherwise keep in check these ‘superwimps’.
NDM-1 Superbug the Result of Bad Policies, Not Compelling Evidence for Evolution's Creative Powers - Sept. 2010
'Random mutations', though touted as this great engine of creativity by evolutionists, are in fact a pitiful mechanism to explain the generation of the functional information that we find in life, as this following references show:
Unexpectedly small effects of mutations in bacteria bring new perspectives - November 2010
Excerpt: Most mutations in the genes of the Salmonella bacterium have a surprisingly small negative impact on bacterial fitness. And this is the case regardless whether they lead to changes in the bacterial proteins or not.,,, using extremely sensitive growth measurements, doctoral candidate Peter Lind showed that most mutations reduced the rate of growth of bacteria by only 0.500 percent. No mutations completely disabled the function of the proteins, and very few had no impact at all. Even more surprising was the fact that mutations that do not change the protein sequence had negative effects similar to those of mutations that led to substitution of amino acids. A possible explanation is that most mutations may have their negative effect by altering mRNA structure, not proteins, as is commonly assumed.
Random Mutations Destroy Information - Perry Marshall - video
Random Mutations and the Heroics of Evolution
Excerpt: A child once informed his friends his toy bulldozer could dig all the way through the Earth. But wasn’t the Earth too big? No, look at the Grand Canyon—it is proof of what such small shovels can do. Such childish logic, amazingly, shows up repeatedly in evolutionary “theory.”
Michael Behe's Blog - October 2007
Excerpt: As I showed for mutations that help in the human fight against malaria, many beneficial mutations actually are the result of breaking or degrading a gene. Since there are so many ways to break or degrade a gene, those sorts of beneficial mutations can happen relatively quickly. For example, there are hundreds of different mutations that degrade an enzyme abbreviated G6PD, which actually confers some resistance to malaria. Those certainly are beneficial in the circumstances. The big problem for evolution, however, is not to degrade genes (Darwinian random mutations can do that very well!) but to make the coherent, constructive changes needed to build new systems.
Materialists simply do not have any evidence for the truly 'beneficial' mutations they need to make evolution work. The following site has numerous quotes, studies and videos which reveal the overwhelmingly negative mutation rate which has been found in life:
Mutation Studies, Videos, And Quotes
It is also interesting to note that scientists have actually used a mechanism of 'excessive mutations' to help humans in their fight against pathogenic viruses, as the following articles clearly point out:
GM Crops May Face Genetic Meltdown
Quasispecies Theory and the Behavior of RNA Viruses - July 2010
Excerpt: Many predictions of quasispecies theory run counter to traditional views of microbial behavior and evolution and have profound implications for our understanding of viral disease. ,,, it has been termed “mutational meltdown.” It is now clear that many RNA viruses replicate near the error threshold. Early studies with VSV showed that chemical mutagens generally reduced viral infectivity, and studies with poliovirus clearly demonstrated that mutagenic nucleoside analogs push viral populations to extinction [40]–[43]. The effect is dramatic—a 4-fold increase in mutation rate resulted in a 95% reduction in viral titer.,,, While mutation-independent activities have also been identified, it is clear that APOBEC-mediated lethal mutagenesis is a critical cellular defense against RNA viruses. The fact that these pathogens replicate close to the error threshold makes them particularly sensitive to slight increases in mutational load.,,,
In fact, trying to narrow down an actual hard number for the truly beneficial mutation rate, that would actually explain the massively integrated machine-like complexity of proteins we find in life, is what Dr. Behe did in this following book:
"The Edge of Evolution: The Search for the Limits of Darwinism"
The Edge Of Evolution - Michael Behe - Video Lecture
The numbers of Plasmodium and HIV in the last 50 years greatly exceeds the total number of mammals since their supposed evolutionary origin (several hundred million years ago), yet little has been achieved by evolution. This suggests that mammals could have "invented" little in their time frame. Behe: ‘Our experience with HIV gives good reason to think that Darwinism doesn’t do much—even with billions of years and all the cells in that world at its disposal’ (p. 155).
Dr. Behe states in The Edge of Evolution on page 135:
"Generating a single new cellular protein-protein binding site (in other words, generating a truly beneficial mutational event that would actually explain the generation of the complex molecular machinery we see in life) is of the same order of difficulty or worse than the development of chloroquine resistance in the malarial parasite."
Where's the substantiating evidence for neo-Darwinism?
Richard Dawkins’ The Greatest Show on Earth Shies Away from Intelligent Design but Unwittingly Vindicates Michael Behe - Oct. 2009
Excerpt: The rarity of chloroquine resistance is not in question. In fact, Behe’s statistic that it occurs only once in every 10^20 cases was derived from public health statistical data, published by an authority in the Journal of Clinical Investigation. The extreme rareness of chloroquine resistance is not a negotiable data point; it is an observed fact.
Antimalarial drug resistance - Nicholas J. White 1,2
Excerpt: Resistance to chloroquine in P. falciparum has arisen spontaneously less than ten times in the past fifty years (14). This suggests that the per-parasite probability of developing resistance de novo is on the order of 1 in 10^20 parasite multiplications. The single point mutations in the gene encoding cytochrome b (cytB), which confer atovaquone resistance, or in the gene encoding dihydrofolate reductase (dhfr), which confer pyrimethamine resistance, have a per-parasite probability of arising de novo of approximately 1 in 10^12 parasite multiplications (5). To put this in context, an adult with approximately 2% parasitemia has 10^12 parasites in his or her body. But in the laboratory, much higher mutation rates thane 1 in every 10^12 are recorded (12).
An Atheist Interviews Michael Behe About "The Edge Of Evolution" - video
Thus, the actual rate for 'truly' beneficial mutations, which would account for the staggering machine-like complexity we see in life, is far in excess of one-hundred-billion-billion mutational events. So the one in a thousand, to one in a million, number for 'truly' beneficial mutations is actually far, far, too generous for an estimate for evolutionists to use as an estimate in their 'hypothetical' calculations for beneficial mutations.
In fact, from consistent findings such as these, it is increasingly apparent the principle of Genetic Entropy is the overriding foundational rule for all of biology, with no exceptions at all, and belief in 'truly' beneficial mutations is nothing more than wishful speculation on the materialist part which has no foundation in empirical science whatsoever.
Evolution vs. Genetic Entropy - video
The following article has a simple example of how Genetic Entropy plays out, even allowing that some mutations might truly be slightly beneficial in the molecular sense as far as molecular functionality is concerned:
Excerpt: Even if there were several possible pathways by which to construct a gain-of-FCT mutation, or several possible kinds of adaptive gain-of-FCT features, the rate of appearance of an adaptive mutation that would arise from the diminishment or elimination of the activity of a protein is expected to be 100-1000 times the rate of appearance of an adaptive mutation that requires specific changes to a gene.
The sort of loss-of-function examples seen in the Lenski's LTEE (Long Term Evolution Experiment) will never show that natural selection can increase high CSI. To understand why, imagine the following hypothetical situation.
Consider an imaginary order of insects, the Evolutionoptera. Let’s say there are 1 million species of Evolutionoptera, but ecologists find that the extinction rate among Evolutionoptera is 1000 species per millennium. The speciation rate (the rate at which new species arise) during the same period is 1 new species per 1000 years. At these rates, every thousand years 1000 species of Evolutionoptera will die off, while one new species will develop–a net loss of 999 species. If these processes continue, in 1,000,001 years there will be no species of Evolutionoptera left on earth.
More Darwinian Degradation: Much Ado about Yeast - Michael Behe - January 2012
Excerpt: "It seems to me that Richard Lenski, who knows how to get the most publicity out of exceedingly modest laboratory results, has taught his student well. In fact, the results can be regarded as the loss of two pre-existing abilities: 1) the loss of the ability to separate from the mother cell during cell division; and 2) the loss of control of apoptosis. The authors did not analyze the genetic changes that occurred in the cells, but I strongly suspect that if and when they do, they'll discover that functioning genes or regulatory regions were broken or degraded. This would be just one more example of evolution by loss of pre-existing systems, at which we already knew that Darwinian processes excel. The apparently insurmountable problem for Darwinism is to build new systems."
The foundational overriding principle, in life sciences, for explaining the sub-speciation of all species from any particular initial parent species that was designed is Genetic Entropy. Genetic Entropy, a rule which draws its foundation in science from the twin pillars of the Second Law of Thermodynamics and from the Law of Conservation of Information (Dembski, Marks, Abel), and the principle can be stated something like this:
"All beneficial adaptations away from a parent species for a sub-species, which increase fitness to a particular environment, will always come at a loss of the optimal functional information that was originally created in the parent species genome."
Genetic Entropy also fits very well with the theological question that many children ask their teachers, 'Why would a loving God allow pathogenic viruses and bacteria to exist?'
What about parasites? - September 2010
Excerpt: these parasites must have been benign and beneficial in their original form. Perhaps some were independent and free-living, and others had beneficial symbiotic relationships with animals or humans. ,,,These once-harmless creatures degenerated, and became parasitic and harmful.
The following shows that we can actually watch the 'final act' of Genetic Entropy, 'mutational meltdown', in the laboratory for small asexual populations (bacteria, yeast, etc.):
The Mutational Meltdown in Asexual Populations - Lynch
Excerpt: Loss of fitness due to the accumulation of deleterious mutations appears to be inevitable in small, obligately asexual populations, as these are incapable of reconstituting highly fit genotypes by recombination or back mutation. The cumulative buildup of such mutations is expected to lead to an eventual reduction in population size, and this facilitates the chance accumulation of future mutations. This synergistic interaction between population size reduction and mutation accumulation leads to an extinction process known as the mutational meltdown,,,
These following articles refute Richard E. Lenski's 'supposed evolution' of the citrate ability for the E-Coli bacteria after 20,000 generations of the E-Coli from his 'Long Term Evolution Experiment' (LTEE) which has been going on since 1988:
Multiple Mutations Needed for E. Coli - Michael Behe
Excerpt: As Lenski put it, “The only known barrier to aerobic growth on citrate is its inability to transport citrate under oxic conditions.” (1) Other workers (cited by Lenski) in the past several decades have also identified mutant E. coli that could use citrate as a food source. In one instance the mutation wasn’t tracked down. (2) In another instance a protein coded by a gene called citT, which normally transports citrate in the absence of oxygen, was overexpressed. (3) The overexpressed protein allowed E. coli to grow on citrate in the presence of oxygen. It seems likely that Lenski’s mutant will turn out to be either this gene or another of the bacterium’s citrate-using genes, tweaked a bit to allow it to transport citrate in the presence of oxygen. (He hasn’t yet tracked down the mutation.),,, If Lenski’s results are about the best we've seen evolution do, then there's no reason to believe evolution could produce many of the complex biological features we see in the cell.
Michael Behe's Quarterly Review of Biology Paper Critiques Richard Lenski's E. Coli Evolution Experiments - December 2010
Excerpt: After reviewing the results of Lenski's research, Behe concludes that the observed adaptive mutations all entail either loss or modification--but not gain--of Functional Coding ElemenTs (FCTs)
Richard Lenski's Long-Term Evolution Experiments with E. coli and the Origin of New Biological Information - September 2011
Excerpt: The results of future work aside, so far, during the course of the longest, most open-ended, and most extensive laboratory investigation of bacterial evolution, a number of adaptive mutations have been identified that endow the bacterial strain with greater fitness compared to that of the ancestral strain in the particular growth medium. The goal of Lenski's research was not to analyze adaptive mutations in terms of gain or loss of function, as is the focus here, but rather to address other longstanding evolutionary questions. Nonetheless, all of the mutations identified to date can readily be classified as either modification-of-function or loss-of-FCT.
Lenski's e-coli - Analysis of Genetic Entropy
Excerpt: Mutants of E. coli obtained after 20,000 generations at 37°C were less “fit” than the wild-type strain when cultivated at either 20°C or 42°C. Other E. coli mutants obtained after 20,000 generations in medium where glucose was their sole catabolite tended to lose the ability to catabolize other carbohydrates. Such a reduction can be beneficially selected only as long as the organism remains in that constant environment. Ultimately, the genetic effect of these mutations is a loss of a function useful for one type of environment as a trade-off for adaptation to a different environment.
Genetic Entropy Confirmed (in Lenski's e-coli) - June 2011
Excerpt: No increases in adaptation or fitness were observed, and no explanation was offered for how neo-Darwinism could overcome the downward trend in fitness.
Mutations : when benefits level off - June 2011 - (Lenski's e-coli after 50,000 generations)
Excerpt: After having identified the first five beneficial mutations combined successively and spontaneously in the bacterial population, the scientists generated, from the ancestral bacterial strain, 32 mutant strains exhibiting all of the possible combinations of each of these five mutations. They then noted that the benefit linked to the simultaneous presence of five mutations was less than the sum of the individual benefits conferred by each mutation individually.
The preceding experiment was interesting, for they found, after 50,000 generations of e-coli which is equivalent to about 1,000,000 years of 'supposed' human evolution, only 5 'beneficial' mutations. Moreover, these 5 'beneficial' mutations were found to interfere with each other when they were combined in the ancestral population. Needless to say, this is far, far short of the functional complexity we find in life that neo-Darwinism is required to explain the origination of. Even more problematic for neo-Darwinism is when we realize that Michael Behe showed that the 'beneficial' mutations were actually loss or modification of function mutations. i.e. The individual 'beneficial' mutations were never shown to be in the process of building functional complexity at the molecular level in the first place!
Moreover, Lenski's work has also shown that 'convergent evolution' is impossible because his work has shown that evolution is 'historically contingent'. This following video and article make this point clear:
Lenski's Citrate E-Coli - Disproof of Convergent Evolution - Fazale Rana - video (the disproof of convergence starts at the 2:45 minute mark of the video)
The Long Term Evolution Experiment - Analysis
Excerpt: The experiment just goes to show that even with historical contingency and extreme selection pressure, the probability of random mutations causing even a tiny evolutionary improvement in digestion is, in the words of the researchers who did the experiment, “extremely low.” Therefore, it can’t be the explanation for the origin and varieity of all the forms of life on Earth.
The loss of 'convergent evolution', as a argument for molecular sequence similarity in widely divergent species, is a major blow to neo-Darwinian story telling:
Implications of Genetic Convergent Evolution for Common Descent - Casey Luskin - Sept. 2010
Excerpt: When building evolutionary trees, evolutionists assume that functional genetic similarity is the result of inheritance from a common ancestor. Except for when it isn't. And when the data doesn't fit their assumptions, evolutionists explain it away as the result of "convergence." Using this methodology, one can explain virtually any dataset. Is there a way to falsify common descent, even in the face of convergent genetic similarity? If convergent genetic evolution is common, how does one know if their tree is based upon homologous sequences or convergent ones? Critics like me see the logic underlying evolutionary trees to be methodologically inconsistent, unpersuasive, and ultimately arbitrary.
Origin of Hemoglobins: A Repeated Problem for Biological Evolution - 2010
Excerpt: When analyzed from an evolutionary perspective, it appears as if the hemoglobins originated independently in jawless vertebrates and jawed vertebrates.,,, This result fits awkwardly within the evolutionary framework. It also contradicts the results of the Long-term Experimental Evolution (LTEE; Lenski) study, which demonstrated that microevolutionary biochemical changes are historically contingent.
Convergence: Evidence for a Single Creator - Fazale Rana
Excerpt: When critically assessed, the evolutionary paradigm is found to be woefully inadequate when accounting for all the facets of biological convergence. On the other hand, biological convergence is readily explained by an origins model that evokes a single Creator (reusing optimal designs).
Bernard d'Abrera on Butterfly Mimicry and the Faith of the Evolutionist - October 2011
Excerpt: For it to happen in a single species once through chance, is mathematically highly improbable. But when it occurs so often, in so many species, and we are expected to apply mathematical probability yet again, then either mathematics is a useless tool, or we are being criminally blind.,,, Evolutionism (with its two eldest daughters, phylogenetics and cladistics) is the only systematic synthesis in the history of the universe (science) that proposes an Effect without a Final Cause. It is a great fraud, and cannot be taken seriously because it outrageously attempts to defend the philosophically indefensible.
Lenski's work also conforms to the extreme limit found for just two 'coordinated' mutations conferring any 'evolutionary benefit';
Michael Behe on the most recent Richard Lenski “evolvability” paper - April 2011
More from Lenski's Lab, Still Spinning Furiously - Michael Behe - January, 2012
Even more crushing evidence can be gleaned from Lenski's long term evolution experiment on E-coli. Upon even closer inspection, it seems Lenski's 'cuddled' E. coli are actually headed for genetic meltdown instead of evolving into something, anything, better.
New Work by Richard Lenski:
Excerpt: Interestingly, in this paper they report that the E. coli strain became a “mutator.” That means it lost at least some of its ability to repair its DNA, so mutations are accumulating now at a rate about seventy times faster than normal.
Sometimes a materialist will say, "gene duplication is the real engine of evolution" which generates the new functional information in molecular biology. Yet they simply don't have any evidence to support that assertion:
Gene Duplication Quotes and Papers
Michael Behe, The Edge of Evolution, pg. 162 Swine Flu, Viruses, and the Edge of Evolution
"Indeed, the work on malaria and AIDS demonstrates that after all possible unintelligent processes in the cell--both ones we've discovered so far and ones we haven't--at best extremely limited benefit, since no such process was able to do much of anything. It's critical to notice that no artificial limitations were placed on the kinds of mutations or processes the microorganisms could undergo in nature. Nothing--neither point mutation, deletion, insertion, gene duplication, transposition, genome duplication, self-organization nor any other process yet undiscovered--was of much use."
Again I would like to emphasize, I’m not arguing Darwinism cannot make complex functional systems, the data on malaria, and the other examples, are a observation that it does not. In science observation beats theory all the time. So Professor (Richard) Dawkins can speculate about what he thinks Darwinian processes could do, but in nature Darwinian processes have not been shown to do anything in particular.
Michael Behe - 46 minute mark of video lecture on 'The Edge of Evolution' for C-SPAN
Experimental evolution, loss-of-function mutations, and “the first rule of adaptive evolution” - Michael J. Behe - December 2010
Mike Behe on a new journal paper admitting that Darwinian evolution 'can’t do' complex systems - August 2011
Excerpt: 'I don’t mean to be unkind, but I think that the idea seems reasonable only to the extent that it is vague and undeveloped; when examined critically it quickly loses plausibility. The first thing to note about the paper is that it contains absolutely no calculations to support the feasibility of the model. This is inexcusable. - Michael Behe
The following experiment recently confirmed the severe limit for evolution found by Dr Behe:
Reductive Evolution Can Prevent Populations from Taking Simple Adaptive Paths to High Fitness - Ann K. Gauger, Stephanie Ebnet, Pamela F. Fahey, and Ralph Seelke – 2010
Excerpt: When all of these possibilities are left open by the experimental design, the populations consistently take paths that reduce expression of trpAE49V,D60N, making the path to new (restored) function virtually inaccessible. This demonstrates that the cost of expressing genes that provide weak new functions is a significant constraint on the emergence of new functions. In particular, populations with multiple adaptive paths open to them may be much less likely to take an adaptive path to high fitness if that path requires over-expression.
Response from Ralph Seelke to David Hillis Regarding Testimony on Bacterial Evolution Before Texas State Board of Education, January 21, 2009
Excerpt: He has done excellent work showing the capabilities of evolution when it can take one step at a time. I have used a different approach to show the difficulties that evolution encounters when it must take two steps at a time. So while similar, our work has important differences, and Dr. Bull’s research has not contradicted or refuted my own.
Epistasis between Beneficial Mutations - July 2011
Excerpt: We found that epistatic interactions between beneficial mutations were all antagonistic—the effects of the double mutations were less than the sums of the effects of their component single mutations. We found a number of cases of decompensatory interactions, an extreme form of antagonistic epistasis in which the second mutation is actually deleterious in the presence of the first. In the vast majority of cases, recombination uniting two beneficial mutations into the same genome would not be favored by selection, as the recombinant could not outcompete its constituent single mutations.
Behe and Snoke go even further, addressing the severe problems with the Gene Duplication scenario in this following study:
Simulating evolution by gene duplication of protein features that require multiple amino acid residues: Michael J. Behe and David W. Snoke
Interestingly Fred Hoyle arrived at the same conclusion, of a 2 amino acid limit, years earlier from a 'mathematical' angle:
The Limits of Complex Adaptation: An Analysis Based on a Simple Model of Structured Bacterial Populations - Douglas D. Axe* - December 2010
quote of note: ,, the most significant implication comes not from how the two cases contrast but rather how they cohere—both showing severe limitations to complex adaptation. To appreciate this, consider the tremendous number of cells needed to achieve adaptations of such limited complexity. As a basis for calculation, we have assumed a bacterial population that maintained an effective size of 10^9 individuals through 10^3 generations each year for billions of years. This amounts to well over a billion trillion (10^21) opportunities (in the form of individuals whose lines were not destined to expire imminently) for evolutionary experimentation. Yet what these enormous resources are expected to have accomplished, in terms of combined base changes, can be counted on the fingers.
This following paper clearly reveals that there is a 'cost' to duplicate genes that further precludes the scenario from being plausible:
Experimental Evolution of Gene Duplicates in a Bacterial Plasmid Model
Excerpt: In a striking contradiction to our model, no such conditions were found. The fitness cost of carrying both plasmids increased dramatically as antibiotic levels were raised, and either the wild-type plasmid was lost or the cells did not grow. This study highlights the importance of the cost of duplicate genes and the quantitative nature of the tradeoff in the evolution of gene duplication through functional divergence.
This recent paper also found the gene duplication scenario to be highly implausible:
The Extinction Dynamics of Bacterial Pseudogenes - Kuo and Ochman - August 2010
Excerpt: "Because all bacterial groups, as well as those Archaea examined, display a mutational pattern that is biased towards deletions and their haploid genomes would be more susceptible to dominant-negative effects that pseudogenes might impart, it is likely that the process of adaptive removal of pseudogenes is pervasive among prokaryotes."
Many times evolutionists are very deceptive in saying that evolutionary processes can generate functional information, as with the duplicate gene scenario, when in fact no one has ever experimentally demonstrated a gain in functional information, above a parent species, that would violate the principle of genetic entropy. These following articles reveal some of the many elaborate ploys evolutionists have used in the past to try to deceive, yes deceive!, the public into thinking evolutionary processes can easily generate functional information:
Assessing the NCSE’s Citation Bluffs on the Evolution of New Genetic Information - Feb. 2010
Casey Luskin, in response to a comment from Nick Matzke claiming that there are many examples of neo-Darwinian processes creating functional information in life, points out the 'scientifically' vacuous nature of the vast majority of neo-Darwinian claims for the origin of new biological information here:
Excerpt: many papers which Nick (Matzke) would probably claim show the “origin of new genetic information” invoked natural selection, but then:
did not identify a stepwise mutational pathway,
did not identify what advantages might be gained at each step
did not calculate the plausibility of this pathway evolving under known population sizes, mutation rates, and other relevant probabilistic resources, and in many cases
Is it persuasive to invoke natural selection as the cause of new genetic information when you don’t even know what function is being selected? This is why I said that in many cases, natural selection is used as a “magic wand.” It’s just asserted, even though no one really knows what was going on.
How to Play the Gene Evolution Game - Casey Luskin - Feb. 2010
The NCSE, Judge Jones, and Bluffs About the Origin of New Functional Genetic Information - Casey Luskin - March 2010
To answer our fourth question (What evidence is found for the appearance of all species of life on earth, and is man the last species to appear on earth?) we come to the evidence found for the amazing variety of complex life on earth.
Cambrian Explosion thru Refutation Of Human Evolution
Anonymous said...
great blog! thanks!
Ilíon said...
Quoting Behe: "There is now considerable evidence that genes alone do not control development. For example when an egg's genes (DNA) are removed and replaced with genes (DNA) from another type of animal, development follows the pattern of the original egg until the embryo dies from lack of the right proteins. (The rare exceptions to this rule involve animals that could normally mate to produce hybrids.) ..."
I had long wondered about that. I had, in fact, strongly suspected that that is how it would be.
Anonymous said...
Useful material. Thanks. Do you have time to organize it?
kuartus said...
Hey BA, I think you might find this article interesting:
Direct measurement of the quantum wavefunction
Anonymous said...
outstanding post! I hate commenting and i dont usually do it but because i enjoyed this, what the heck! Thanks alot!:)
Anonymous said...
What i don't understood is if truth be told how you're now not really much more smartly-preferred than you might be right now.
You are very intelligent. You know thus considerably on the subject
of this subject, produced me for my part believe it from
numerous numerous angles. Its like women and men aren't interested except it is something to do with Girl gaga! Your individual stuffs great. Always maintain it up!
My blog ; bacalao |
2427773c072a2c33 | Hamilton-Jacobi equation
From Scholarpedia
Robert L. Warnock (2010), Scholarpedia, 5(7):8330. doi:10.4249/scholarpedia.8330 revision #91338 [link to/cite this article]
Jump to: navigation, search
Post-publication activity
Curator: Robert L. Warnock
The Hamilton-Jacobi Equation is a first-order nonlinear partial differential equation of the form \( H(x,u_x(x,\alpha,t),t)+u_t(x,\alpha,t)=K(\alpha,t)\) with independent variables \((x,t)\in {\mathbb R}^n\times{\mathbb R}\) and parameters \( \alpha\in {\mathbb R}^n\ .\) It has wide applications in optics, mechanics, and semi-classical quantum theory. Its solutions determine infinite families of solutions of Hamilton's ordinary differential equations, which are the equations of motion of a mechanical system or an optical system in the ray approximation.
Sir William Rowan Hamilton (1805-1865) carried out one of the earliest studies of geometrical optics in an arbitrary medium with varying index of refraction (Hamilton (1830-1832), Synge (1937), Carathéodory (1937)). He found a powerful expression of the topic in a characteristic function, which is the optical path length of a ray, regarded as a function of initial and final positions and times of the ray. This and related functions satisfy partial differential equations, and directly determine infinite families of rays. Following an analogy between rays and trajectories of a mechanical system, Hamilton soon extended his concepts to mechanics, incorporating ideas of Lagrange and others concerning generalized coordinates. The resulting Hamiltonian mechanics, notable for its invariance under coordinate transformations, is a cornerstone of theoretical physics.
With an emphasis on mechanics, Carl Gustav Jacob Jacobi (1804-1851) sharpened Hamilton's formulation, clarified mathematical issues, and made significant applications (Jacobi (1842-1843)). The resultant Hamilton-Jacobi theory and later developments are presented in several famous texts: Arnol'd (1974), Landau & Lifshitz (1969), Gantmacher (1970), Born & Wolf (1965), Lanczos (1949), Carathéodory (1982), Courant & Hilbert (1962). For studies using modern PDE theory see Lions (1982), Evans (2008), and Benton (1977). The theory embodies a wave-particle duality, which figured in the advent of the de Broglie - Schrödinger wave mechanics (Jammer (1966)). Hamilton-Jacobi theory also played an important role in development of the theory of first order partial differential equations and the calculus of variations (Courant & Hilbert (1962), Carathéodory (1982)).
In a view broader than that of the original work, a solution of the Hamilton-Jacobi equation is the generator of a canonical transformation, a symplectic change of variables intended to simplify the equations of motion. In this framework (as applied to mechanics) there are solutions of a type different from that of Hamilton, which determine not only orbits but also invariant tori in phase space on which the orbits lie. These solutions, which are known to exist only under special circumstances, are the subject of the celebrated work of Kolmogorov, Arnol'd, and Moser; see Gallavotti (1983). Even approximate invariants, constructed by approximate solutions of the Hamilton-Jacobi equation, have implications for stability of motion over finite times (Nekhoroshev (1977), Warnock & Ruth (1992)). Approximate invariants also find applications in the Einstein-Brillouin-Keller quantization of semi-classical quantum theory (Keller (1958), Percival (1977), Chapman et al. (1976), Martens & Ezra (1987)). Various forms and generalizations of the Hamilton-Jacobi equation occur widely in contemporary applied mathematics, for instance in optimal control theory (Fleming & Rishel (1975)).
Canonical Transformations
Canonical transformations (equivalently, symplectic transformations) are of crucial importance in classical mechanics, as they are the chief means of solving a mechanical system or clarifying the structure of the system when it cannot be solved. The Hamilton-Jacobi equation is used to generate particular canonical transformations that simplify the equations of motion.
A mechanical system with \(n\) degrees of freedom is described by generalized coordinates \(q=(q_1,\cdots, q_n)\) and corresponding generalized momenta \( p=(p_1,\cdots,p_n)\ ;\) we write \(z=(q,p)\ .\) The motion of the system is governed by Hamilton's canonical equations of motion, i.e., the ordinary differential equations
\[\tag{1} \dot q= H_p(z,t)\ ,\quad \dot p=-H_q(z,t)\ , \]
where \( \dot{}\ \) denotes the time derivative and subscripts indicate vectors of partial derivatives; thus \(H_q=(\partial H/\partial q_1,\cdots,\partial H/\partial q_n)\ .\) The Hamiltonian function \(H:\mathbb{R}^{2n}\times\mathbb{R}\rightarrow\mathbb{R}\) is here assumed to be \(C^2\) in \(z\) and continuous in \(t\ .\) The solution of the initial value problem (or flow) for the Hamiltonian system (1) is denoted by \({\mathbf z}(t,z_0)=({\mathbf q}(t,z_0),{\mathbf p}(t,z_0))\) for initial value \(z_0={\mathbf z}(0,z_0)\ .\) This solution, denoted by the bold faced letter \(\mathbf z\) to distinguish it from a general point \(z\) in phase space, will be called an orbit. If \(H\) depends on the time, specification of an orbit requires the initial time \(t_0\) (not just the elapsed time) as well as the initial condition \(z_0\ ;\) for convenience the origin of time is chosen so that \(t_0=0\ .\)
One seeks a transformation of coordinates, \(Z=(Q,P)=\Phi(z,t)=(\Phi_1(z,t),\Phi_2(z,t))\ ,\) so that the equations of motion retain their form but with a new Hamiltonian \(K\ ,\) namely
\[\tag{2} \dot Q= K_P(Z,t)\ ,\quad \dot P=- K_Q(Z,t)\ . \]
If \(K\) can be made independent of \(Q\ ,\) then \(\mathbf P\) is constant and the solution of (2) is given simply as
\[\tag{3} {\mathbf Q}(t,Z_0)=Q_0+\int_0^t K_P(P_0,\tau)d\tau\ ,\quad {\mathbf P}(t,Z_0)=P_0\ . \]
The solution of (1) is retrieved by the inverse transformation \(z=\Psi(Z,t) \equiv \Phi^{-1}(Z,t)\ .\)
Write \({\mathbf Z}(t,Z_0)=({\mathbf Q}(t,Z_0),{\mathbf P}(t,Z_0))=\Phi({\mathbf z}(t,z_0),t)\) for an orbit in the new coordinates, where \(Z_0=\Phi(z_0,0)\ .\) Reference to initial conditions will often be suppressed. A canonical transformation will be implicitly determined through the equation
\[\tag{4} {\mathbf p}(t)\cdot\dot{\mathbf q}(t)-H({\mathbf z}(t),t)=-{\mathbf Q}(t)\cdot\dot{\mathbf P}(t)-K({\mathbf Z}(t),t) +\frac{d}{dt}F({\mathbf q}(t),{\mathbf P}(t),t)\ , \]
where \(\cdot\) indicates the scalar product and the given function \(F(q,P,t)\) is \(C^2\) in its first two arguments, \(C^1\) in \(t\ ,\) and such that
\[\tag{5} \det F_{qP}=\det\{\partial^2 F/\partial q_i\partial P_j\}\ne 0\ , \]
in some open region \(\Omega\subset \mathbb{R}^{2n+1}\) of \((q,P,t)\)-space. This function \(F\) is called the generator or generating function of the transformation. By writing out \(dF/dt\ ,\) one sees that (4) is satisfied if
\[\tag{6} {\mathbf p}(t)=F_q({\mathbf q}(t),{\mathbf P}(t),t)\ , \ :\]
\[\tag{7} {\mathbf Q}(t)=F_P({\mathbf q}(t),{\mathbf P}(t),t)\ , \ :\]
\[\tag{8} K({\mathbf Z}(t),t)=H({\mathbf z}(t),t)+ F_t({\mathbf q}(t),{\mathbf P}(t),t)\ . \]
This suggests defining the canonical transformation by the equations
\[\tag{9} p=F_q(q,P,t)\ , \ :\]
\[\tag{10} Q=F_P(q,P,t)\ . \]
Owing to condition (5) and the inverse function theorem, (9) can be solved for \(P=\Phi_2(z,t)\) (at least locally in \(\Omega\)). Substitution of the solution in (10) gives \(Q=\Phi_1(z,t)\) as well. To get the inverse transformation \(z=\Psi(Z,t)\ ,\) solve (10) for \(q=\Psi_1(Z,t)\ ,\) then substitute in (9) to find \(p=\Psi_2(Z,t)\ .\) Then the new Hamiltonian is defined by
\[\tag{11} K(Z,t)=H(z,t)+F_t(q,P,t)=H(\Psi(Z,t),t)+F_t(\Psi_1(Z,t),P,t)\ . \]
Textbooks usually apply a variational principle to show that the equations of motion are invariant in form under the transformation just defined. The advantage of the variational argument lies in its geometrical foundation, which provides motivation for the starting equation (4), but is too long a story for this brief account; see Arnol'd (1974) for the geometric viewpoint. By generalizing an idea in Jacobi's 20th lecture (Jacobi (1842-1843), pp.158-159), the proof may be carried out instead by direct calculation. Substitution of (9) and (10) in (11) gives
\[\tag{12} H(q,F_q(q,P,t),t)+F_t(q,P,t)=K(F_P(q,P,t),P,t)\ . \]
Take \(\partial/\partial P\) of (12), evaluate along orbits, and then subtract \(d/dt\) of (7). Similarly, take \(\partial/\partial q\) of (12), evaluate on orbits, and add \(d/dt\) of (6). This leads to the informative equations
\[\tag{13} F_{qP}(\dot{\mathbf q}-H_p)-(\dot{\mathbf Q}-K_P)+F_{PP}(\dot{\mathbf P}+K_Q)=0\ , \ :\]
\[\tag{14} F_{qP}(\dot{\mathbf P}+K_Q)-(\dot{\mathbf p}+H_q)+F_{qq}(\dot{\mathbf q}-H_p)=0\ . \]
In view of (5), this shows that (1) implies (2) and vice versa, as long as \((q,Q,t)\) lies in \(\Omega\ .\)
There are other possible choices of the old and new variables on which the generator may depend. In general the condition (5) on \(F(q,P,t)\) will not hold globally, in which case one might alternatively try to use a function \(F_1(q,Q,t)\) with \(\det F_{1qQ}\ne 0\ .\) Then the equations analogous to (4), (9), (10), and (11) are
\[\tag{15} p\dot q-H=P\dot Q-K+dF_1/dt\ , \qquad p=F_{1q}\ ,\qquad P=-F_{1Q}\ ,\qquad H+F_{1t}=K\ . \]
A frequently used notation follows Goldstein (1981), who writes \(F_2(q,P,t)\) for the first \(F\) discussed above, and gives equations for the four functions \(F_1(q,Q,t), F_2(q,P,t), F_3(p,Q,t), F_4(p,P,t)\ .\) These are far from being the only possible choices; see Feng et al. (1989) and Erdelyi & Berz (2001) for a broader view. According to a theorem in Section 48B of Arnol'd (1974) there is always a generator that can represent locally a given canonical transformation. It may depend on \(q=(q_1,\cdots,q_n)\) and \(n\) new variables \((P_{i_1},\cdots, P_{i_k},Q_{j_1},\cdots,Q_{j_{n-k}})\ .\)
One can show that the transformation induced by any generator with requisite smoothness is symplectic, which means that its Jacobian matrix \(M=\{ \partial \Phi_i(z,t)/\partial z_j \}\) is symplectic for all \(z\ .\) Written in terms of \(n\times n\) blocks this condition is
\[\tag{16} MJM^T=J\ ,\quad M=\begin{bmatrix}\partial Q/\partial q&\partial Q/\partial p\\ \partial P/\partial q&\partial P/\partial p\end{bmatrix}\ ,\quad J=\begin{bmatrix}0& -I\\I& 0\end{bmatrix}\ , \]
where \(T\) denotes transpose. For \(n=1\) the symplectic condition reduces to \(\det M=1\ .\) A symplectic transformation preserves volumes in phase space and areas on appropriate surfaces of even dimension. The conserved quantities are known as Poincaré invariants (Arnol'd, 1974, Gantmacher, 1970).
To prove (16) for the transformation induced by \(F\ ,\) differentiate (9) and (10) with respect to \(q\) and \(p\ .\) Thanks to (5), the resulting equations can be solved for \(M\ ;\) some calculation then shows that the solution obeys (16). An alternative viewpoint is to take symplecticity as the defining property of a canonical transformation (Meyer et al., 2008).
Hamilton-Jacobi Equation and Invariant Tori
To produce a useful transformation the generator \(F\) must be determined so that \(K\) is indeed independent of \(Q\ ,\) thus giving (3) as the solution of the transformed equations. With this form of \(K\ ,\) substitution of (9) in (11) yields
\[\tag{17} H(q, F_q(q,P,t),t)+ F_t(q,P,t)=K(P,t)\ , \]
which is the Hamilton-Jacobi equation for the type-2 generator. Here \(P\) is regarded as a (vector) parameter; the independent variables of the PDE are \(q\) and \(t\ .\) A solution of (17) depending on \(n\) parameters \(P_i\) and such that \(\det F_{qP}\ne 0\) was called a complete solution (Vollständige Lösung) by Jacobi. As was shown above, it determines a canonical transformation.
A case of great interest, that considered by Hamilton and Jacobi, is obtained by requiring \(K=0\ .\) Then \(Q\) and \(P\) are constant and the orbit \(z(t)\) determined by a complete solution through (9) and (10) satisfies the original Hamilton equations (1). That is seen from (13) and (14) which now reduce to
\[\tag{18} F_{qP}(\dot{\mathbf q}-H_p)=0\ ,\quad \dot{\mathbf p}+H_q-F_{qq}(\dot{\mathbf q}-H_p)=0\ . \]
By completeness the first equation implies \(\dot{\mathbf q}=H_p\ ,\) then the second equation gives \(\dot{\mathbf p}=-H_q,\ .\) Thus we have determined an infinite family of orbits through a complete solution of the Hamilton-Jacobi equation with \(K=0\ ,\) the initial conditions for each orbit being fixed by a choice of the \(2n\) parameters \(P_i,Q_i\ .\) The parameters need not be interpreted as new momenta and coordinates (and in fact were not in Jacobi's original treatment), and the \(P_i\) may enter the solution \(F\) in any way, perhaps through an Ansatz for the form of the solution. This will be illustrated below for the case of a central potential. A frequently used notation, due to Jacobi, is \(P=\alpha,\ Q=\beta\ .\)
In the following section it will be shown that a knowledge of the orbits is sufficient to construct explicitly a complete solution of the Hamilton-Jacobi equation with \(K=0\ .\) Thus the question of existence of the complete solution can be referred to the standard existence theory for ordinary differential equations. The situation is far different with \(K\ne 0\ ,\) in which case solutions of (17) exist only under special circumstances and are not continuous in \(P\ .\) A complete discussion of this case is too much for a short article, so the goal will be to give some idea of the character of the problem, and a method of solution for a truncated version of the problem. In particular, it will be seen how the construction of \(K\) must be part of the solution procedure.
To illustrate the situation with non-zero \(K\ ,\) take the case of a time-independent Hamiltonian \(H(z)\) and look for a solution in which \(K\) and \(F\) are also time-independent. Take polar coordinates \((q,p)=(\phi,I), \ \ (Q,P)=(\psi,J)\) where \( \phi,\psi\in [0,2\pi],\ \ I,J\in [0,\infty)\ .\) Also, define \(G\) so that \(F(\phi,J)=\phi\cdot J+G(\phi,J)\ ,\) where the first term on the right gives the identity transformation. Then the Hamilton-Jacobi equation to solve for \(G\) is
\[\tag{19} H(\phi,J+G_\phi(\phi,J))=K(J)\ , \]
and the equations (9) and (10) defining the transformation are
\[\tag{20} I=J+G_\phi(\phi,J)\ , \ :\]
\[\tag{21} \psi=\phi+G_J(\phi,J)\ . \]
If \(G\) satisfies (19) for some function \(K(J)\ ,\) then \(J\) is constant and (20) represents an invariant torus in phase space. The new angle variable \(\psi\) advances linearly in time, according to (3).
Now consider a perturbed integrable system with Hamiltonian
\[\tag{22} H(\phi,I)=H_0(I)+\epsilon V(\phi,I)\ , \]
which satisfies a condition of non-degeneracy
\[\tag{23} \det\ \nu_I(I)\ne 0\ ,\quad \nu(I)= H_{0I}(I)\ . \]
Next rearrange (19) to subtract the first terms of the Taylor series of \(H_0(J+G_\phi)\ :\)
\[\tag{24} -\nu(J)\cdot G_\phi=\epsilon V(\phi,J+G_\phi)+\big[ H_0(J+G_\phi)-H_0(J)-\nu(J)\cdot G_\phi\big] +\big[ H_0(J)-K(J)\big] \ . \]
The sum of the terms in the first square bracket is \(\mathcal{O}(G_\phi^2)\) and therefore small if the transformation induced by (20),(21) is close to the identity. Introduce the (multiple) Fourier series
\[\tag{25} G(\phi,J)= \sum_{m\in Z^n} g_m(J)\exp(im\cdot\phi)\ \]
so that
\[\tag{26} G_\phi(\phi,J)= \sum_{m\in Z^n} im\ g_m(J)\exp(im\cdot\phi)\ , \]
and take the Fourier transform of (24) to obtain
\[\tag{27} g_m(J)=\frac{i}{m\cdot\nu(J)}\frac{1}{(2\pi)^n}\int_{T^n} \exp(-im\cdot\phi)\big[\epsilon V(\phi,J+G_\phi) + H_0(J+G_\phi)-H_0(J)-\nu(J)\cdot G_\phi\big]d\phi,\quad m\ne {\mathbf 0}\ . \]
Since \(G_\phi\) does not contain the zero mode, the set of equations (26) and (27) is a closed system for the Fourier coefficients \(g_m,\ m\ne{\mathbf 0}\ .\) If a solution of this system is known for some \(J\ ,\) then the projection of (19) onto every mode except the zero mode has been solved. The zero mode projection is solved as well simply by defining \(K\) as the average of the left-hand side:
\[\tag{28} K(J)=\frac{1}{(2\pi)^n}\int_{T^n}d\phi\big[H_0(J+G_\phi)+ \epsilon V(\phi,J+G_\phi)\big]\ . \]
This gives some understanding of how the PDE (19) could be solved without prior knowledge of its right-hand side. The zero mode amplitude \(g_{\mathbf 0}\) can be chosen arbitrarily, for instance put equal to zero.
At first sight Eq.(27) would seem to be a straightforward fixed point problem that might be solved for the \(g_m\) by some kind of iteration, provided that the divisor \(m\cdot\nu(J)\) could be bounded away from zero through an appropriate choice of \(J\ .\) Thanks to (23) the value of \(\nu\) can be controlled by varying \(J\ .\) The iteration might be started by keeping only the term \(\epsilon V\ ,\) which gives lowest order perturbation theory. If the series (25) is truncated, then the problem can indeed be approached in that way, and (27) provides a practical method for computing approximate invariant tori (Warnock & Ruth (1987)). The exact problem requires the refined method of KAM theory to control small divisors \(m\cdot\nu \) for large \(m\) (Gallavotti (1983), Pöschel (1982)). The theory ensures the existence of invariant tori for sufficiently small \(\epsilon\ ,\) but they are not continuous functions of \(J\ .\) Rather, they exist only on a Cantor set in \(J\)-space, and the concept of complete solution does not apply in the classical sense (it is nevertheless possible to construct a smooth function which solves the Hamilton-Jacobi equation on the above-mentioned Cantor set; see Pöschel (1982)).
Action as a Solution of the Hamilton-Jacobi Equation
The following discussion is mostly an interpretation of Jacobi's 19th lecture. For a geometric approach see Arnol'd (1974), Section 46C. The goal is to solve the Hamilton-Jacobi equation for a Type-1 generator with the new Hamiltonian \( K = 0\ .\) Write \(Q=q_0\) so that the equation is
\[\tag{29} H(q,F_{1q}(q,q_0,t),t)+F_{1t}(q,q_0,t)=0\ . \]
Using the method of characteristics, suppose that the characteristic (orbit) \({\mathbf z}(t,z_0)=({\mathbf q}(t,z_0),{\mathbf p}(t,z_0))\) that solves (1) is known. Let us try to determine \(F_1(q,q_0,t)\) from its values for \(q={\mathbf q}(t,z_0)\) by means of an ODE for \(g(t)=F_1({\mathbf q}(t,z_0),q_0,t)\ .\) Since \(\dot g=F_{1q}\dot q+F_{1t}\ ,\) equations (15) and (29) suggest putting
\[\tag{30} \dot g(t)={\mathbf p}(t,z_0)\cdot \dot{\mathbf q}(t,z_0)-H({\mathbf z}(t,z_0),t)\ , \]
whence by integration the proposal
\[\tag{31} F_1({\mathbf q}(t,z_0),q_0,t) = \int_0^t\big[{\mathbf p}(\tau,z_0)\cdot \dot {\mathbf q}(\tau,z_0)-H({\mathbf z} (\tau,z_0),\tau)\big]d\tau\ \equiv S(q_0,p_0,t)\ . \]
From this one would like to get \(F_1(q,q_0,t)\) for general \(q\ ,\) but that can be done only if \(p_0\) can be deduced from the \(2n+1\) numbers \((q,q_0,t)\ .\) In general this is not possible for all \(t\ ;\) since orbits projected onto \(q\) space can cross, there can be more than one \(z_0\) giving the same \({\mathbf q}(t,z_0)\ .\) The locus of such crossings is called a caustic. To rule out caustics, the equation \(q={\mathbf q}(t,q_0,p_0)\) must be solvable uniquely for \(p_0=\mathcal{P}_0(q,q_0,t)\ .\) To ensure this, suppose \(t>0\) and
\[\tag{32} \det\bigg[\frac{\partial{\mathbf q }(t,q_0,p_0)}{\partial p_0}\bigg]\ne 0\ . \]
Under these conditions the proposed generator is defined through (31) as
\[\tag{33} F_1(q,q_0,t)=S(q_0,\mathcal{P}_0(q,q_0,t),t)\ . \]
This was Hamilton's essential idea, to view the action (integral of the Lagrangian) as a function of initial and final coordinates and times.
To show that \(F_1\) satisfies (29), first make a variation of the orbit, \({\mathbf z}(t,z_0)\rightarrow \tilde{\mathbf z}(t,\epsilon)={\mathbf z}(t,z_0)+\epsilon\delta{\mathbf z}(t)\ ,\) where \(\delta{\mathbf z}\) is an arbitrary \(C^1\) function. After integration by parts the corresponding variation of (31) is
\[\tag{34} \delta F_1({\mathbf q}(t,z_0),q_0,t) \equiv \bigg[\frac{d}{d\epsilon}\int_0^t\big[\tilde{\mathbf p}(\tau,\epsilon) \cdot\frac{d}{d\tau}\tilde{\mathbf q}(\tau,\epsilon)-H(\tilde{\mathbf z}(\tau,\epsilon),\tau)\big]d\tau \bigg]_{\epsilon=0} \ :\]
\[ =\int_0^t\big[(\dot{\mathbf q}-H_p)\cdot\delta {\mathbf p}-(\dot{\mathbf p}+H_q)\cdot\delta{\mathbf q} \big]d\tau+{\mathbf p}(\tau,z_0)\cdot\delta{\mathbf q}(\tau)\bigg|_0^t\ . \]
Since the integral is zero by (1), it follows that
\[\tag{35} \delta F_1({\mathbf q}(t,z_0),q_0,t)=F_{1q}({\mathbf q}(t,z_0),q_0,t)\cdot\delta{\mathbf q}(t,z_0)+F_{1q_0}({\mathbf q}(t,z_0),q_0,t)\cdot\delta q_0= {\mathbf p}(t,z_0)\cdot\delta{\mathbf q}(t,z_0)-p_0\cdot\delta q_0\ , \]
and since the variations are arbitrary
\[\tag{36} {\mathbf p}(t,z_0)=F_{1q}({\mathbf q}(t,z_0),q_0,t)\ , \ :\]
\[\tag{37} p_0=-F_{1q_0}({\mathbf q}(t,z_0),q_0,t)\ . \]
Next take \(d/dt\) of (31) and apply (36) to obtain
\[\tag{38} H({\mathbf q}(t,z_0),F_{1q}({\mathbf q}(t,z_0),q_0,t),t)+F_{1t}({\mathbf q}(t,z_0),q_0,t)=0\ . \]
Now this shows that \(F_1\) satisfies the Hamilton-Jacobi equation (29) since for any \(q, q_0\) there is a \(p_0\) such that \(q={\mathbf q}(t,z_0)\ ,\) by condition (32).
Recalling the equations (15) that define the canonical transformation, it is seen from (36) and (37) that the transformation from new to old variables is just the time evolution \(z={\mathbf z}(t,z_0)\ ,\) with the new variables being just the initial conditions \(z_0\ ,\) which are constant because the new Hamiltonian is zero. The condition \(\det(F_{1qq_0})\ne 0\)is implied by (32) and (37), as may be seen by differentiating the latter with respect to \(p_0\ ,\) then taking determinants.
If it is not possible to solve \(q={\mathbf q}(t,q_0,p_0)\) for \(p_0\ ,\) it may instead be possible to solve for \(q_0=\mathcal{Q}_0(q,p_0,t)\ .\) Then we can use a generator of Type 2, easily constructed by a Legendre transformation of \(F_1\) (Goldstein (1981)). Namely,
\[\tag{39} F_2(q,p_0,t)=F_1(q,q_0,t)+q_0\cdot p_0 \equiv F_1(q,\mathcal{Q}_0(q,p_0,t),t)+ \mathcal{Q}_0(q,p_0,t)\cdot p_0=S(\mathcal{Q}_0(q,p_0,t),p_0,t)+\mathcal{Q}_0(q,p_0,t)\cdot p_0\ . \]
By again applying the variational argument, it is easy to check that \(F_2\) satisfies all the required equations.
The discussion above proves existence of a solution of (29) in terms of the more elementary existence theory for (1), and also suggests methods of numerical solution of (29).
Solution of Classical Problems by Separation of Variables
Hamilton's principal function (33) solves the Hamilton-Jacobi equation (29) and determines an infinite family of orbits, but in order to construct it one needs to know this family of orbits at the start. Gantmacher (1970, Chap.4, Sect.26) refers to this as a ``vicious circle", and states that ``Jacobi's contribution consists in the fact that he continued Hamilton's investigation and broke the vicious circle". He showed that any solution \(F(q,t,\alpha)\) of
\[\tag{40} H(q,F_q)+F_t=0\ , \]
depending on real parameters \(\alpha= (\alpha_1,\cdots,\alpha_n)\) and complete in the sense that \(\det F_{q\alpha}\ne 0\ ,\) determines the orbits of the system. This is just the story recounted above, with the notation \(P=\alpha\ .\)
It is instructive to illustrate Jacobi's program in a soluble example. Gantmacher (1970, Chap.4, Sect.27) describes three structures of the Hamiltonian, labeled \( 1^o, 2^o, 3^o \) for which (29) is explicitly soluble, each embodying the idea of ``separation of variables". Type \(2^o\) includes some basic systems. For this case in two degrees of freedom the Hamiltonian has the form
\[\tag{41} H(z)=H_2(H_1(z_1),z_2)\ ,\quad z_i=(q_i,p_i)\ , \]
and similarly for \(n\) degrees of freedom,
\[\tag{42} H(z)=H_n(\cdots H_3(H_2(H_1(z_1),z_2),z_3)\cdots,z_n)\ . \]
Each \(H_n\) is required to be \(C^1\) in all arguments and to satisfy
\[\tag{43} \frac{\partial H_n(\cdots,q_n,p_n)}{\partial p_n}\ne 0\ . \]
Considering now two degrees of freedom, notice that because of (43) there exist functions \(G_1,G_2\) such that
\[\tag{44} H_1(q_1,G_1(q_1,\alpha_1))=\alpha_1\ ,\quad H_2(\alpha_1,q_2,G_2(q_2,\alpha_2,\alpha_1))=\alpha_2\ , \]
for any constants \(\alpha_1,\alpha_2\ .\) Identification of \(G_i\) with \(F_{q_i}\) gives a solution of (40) in the form
\[\tag{45} F(q,t,\alpha)=\int_{q_{10}}^{q_1}G_1(q_1^\prime,\alpha_1)dq_1^\prime + \int_{q_{20}}^{q_2}G_2(q_2^\prime,\alpha_2,\alpha_1)dq_2^\prime-\alpha_2 t\ . \]
Indeed, after substitution of this function and application of (44) the l.h.s. of (40) reads \(\alpha_2-\alpha_2\ .\) Moreover, this solution is complete, as is seen by differentiating the first equation of (44) with respect to \(\alpha_1\) and the second with respect to \(\alpha_2\ .\) Because of (43), that shows that \(D_2G_1\ne 0\) and \(D_3G_2\ne 0\ ,\) hence \(\det F_{q\alpha} =D_1G_1D_3G_2\ne0\ ,\) where \(D_i\) means partial derivative with respect to the \(i\)-th argument.
An example is planar motion in a central potential \(V(r)\) (Goldstein, 1981). In polar coordinates the Hamiltonian for a particle of mass \(m\) is
\[\tag{46} H=\frac{1}{2m}\bigg[p_r^2+\frac{p_\phi^2}{r^2}\bigg]+V(r)\ , \quad (q_1,p_1)=(\phi,p_\phi)\ ,\quad (q_2,p_2)=(r,p_r)\ . \]
Since \(H\) is independent of \(\phi\ ,\) the conjugate momentum \(p_\phi\) is constant in time; it is the conserved angular momentum. To apply the above scheme put
\[\tag{47} H_1(q_1,p_1)=p_1=\alpha_1\ ,\quad H_2(\alpha_1,q_2,p_2)=\frac{1}{2m} \bigg[p_2^2+\frac{\alpha_1^2}{q_2^2}\bigg]+V(q_2)=\alpha_2\ . \]
Notice that \(H_2\) satisfies (43) if and only if \(p_2\ne0\ .\) Now the \(G_i\) defined by (44) are
\[\tag{48} G_1(q_1,\alpha_1)=\alpha_1\ , \ :\]
\[\tag{49} G_2(q_2,\alpha_2,\alpha_1)=\pm\Pi(q_2,\alpha_2,\alpha_1)\ ,\quad \Pi= \bigg[2m\big(\alpha_2-V(q_2)\big)-(\alpha_1/q_2)^2\bigg]^{1/2}\ge 0\ . \]
In physicist's notation \( \alpha_1=L=\) angular momentum, \( \alpha_2=E= \) energy, and the formula (45) reads
\[\tag{50} F(\phi,r,L,E,t)=L(\phi-\phi_0)-Et\pm\int_{r_0}^r\Pi(\rho,E,L)d\rho\ ,\quad \Pi(\rho,E,L)=\big[2m(E-V(\rho))-(L/\rho)^2\big]^{1/2}\ . \]
Here \( r_0, r\) must be such that the argument of the square root is non-negative in the region of integration. The motion \( \phi(t),\ r(t) \) is obtained by solving (9) and (10) with \(P=\alpha\) and \(Q=\beta\ .\) To that end compute
\[\tag{51} \beta_1=F_L=\phi-\phi_0\mp L\int_{r_0}^r\Pi(\rho,E,L)^{-1}d\rho/\rho^2\ , \ :\]
\[\tag{52} \beta_2=F_E=-t\pm 2m\int_{r_0}^r\Pi(\rho,E,L)^{-1}d\rho\ . \]
For initial conditions \( \phi_0,\ p_{\phi_0},\ r_0,\ p_{r_0}\) the parameters are \( \beta_1=0,\ \beta_2=0,\ \alpha_1=p_{\phi_0}=L,\ \alpha_2= (p_{r0}^2+(\alpha_1/r_0)^2)/2m+V(r_0)=E \ .\) Now (52) gives \(t(r)\ ,\) which must be inverted to give \(r(t)\ ;\) then (51) gives \(\phi(t)\ .\) This is the standard solution derived less elegantly in elementary treatments without the Hamilton-Jacobi method. The choice of sign in front of the integrals depends on \(t\) and initial conditions. Suppose that the potential is attractive and \(E\) is such that there is oscillatory motion in the effective one-dimensional potential with \(r_0\le r\le r_1\) (Goldstein, 1981). During the first half-period \(T/2\) the integral (52) runs from \(r_0\) to \(r\le r_1\ ,\) with the plus sign. During the second half-period the integral is defined as \(T/2\) plus the integral from \(r_1\) to \(r\ge r_0\) with the minus sign, and so on. Within any half-period the integral is monotonic in \(r\) so that the inversion of \(t(r)\) is always possible.
A list of other classical problems for which the Hamilton-Jacobi equation is separable in appropriate coordinates includes the Kepler two-body problem, planar motion in the Coulomb field of two fixed charges, planar motion in the Coulomb field of one fixed charge plus a constant electric field, and free motion of a particle constrained to an ellipsoid (Landau & Lifshitz (1969), Arnol'd (1974), Goldstein (1981), Jacobi (1842-1843)). All of these problems were treated by Jacobi. For a comprehensive study of integrable problems from a geometric perspective, which uses separation of variables and many other techniques, see Perelomov (1990).
Wave-Particle Duality and the Classical Limit of Quantum Theory
In the important case of a time-independent Hamiltonian, one may seek solutions of (17) with \(K=0\) in the form
\[\tag{53} F(q,P,t)=W(q,P)-P_1t\ , \]
where \(W\) is to be determined by the time-independent Hamilton-Jacobi equation
\[\tag{54} H(q,W(q,P))=E\ ,\quad E=P_1\ . \]
Here \(E\) is identified with the energy of the system, the value of \(H\) on an orbit. Now suppose that \(F\) is a complete solution and consider a family of orbits generated through (9) and (10), the members of the family corresponding to various values of \(Q\) with fixed \(P\ ,\) thus an \(n\)-dimensional family. (Recall that \(Q,P\) determine initial conditions of the orbits.) Now let us view this family in \((q,t)\)-space, supposing that \(t\) is sufficiently small to prevent caustics; that is, different curves \(q(t,Q,P)\) shall not intersect.
It is interesting to consider surfaces of constant \(F\) determined by the equation
\[\tag{55} F(q,P,t)=W(q,P)-Et=c(P)\ . \]
At any \(t\) the normal to the surface in \(q\)-space is in the direction of \(p=W_q(q,t)\ .\) Assuming for convenience that coordinates are Cartesian, it follows that the particles are moving normal to the surface in \(q\)-space at each \(t\ .\) A representative point on the surface defined by (55) is denoted by \((q_s(t),t)\ ,\) and the velocity of such a point, projected onto the unit normal \(n(q_s)\ ,\) is obtained by differentiating (55) as follows:
\[\tag{56} W_q(q_s,P)\cdot \frac{dq_s}{dt}=E\ ,\quad n(q_s)\cdot\frac{dq_s}{dt}=E/|W_q(q_s,P)|\ . \]
This velocity \(dq_s/dt\) might be called the phase velocity or wave front velocity of a "wave" defined by (55). It is not to be confused with the particle velocity; certainly not, since by (56) it is in a different direction and its projection onto the particle's direction is inversely proportional to the particle's velocity. The slower the particles, the faster the wave front moves.
This wave front description can be connected with quantum mechanics. The connection was of great importance in the development of de Broglie - Schrödinger wave mechanics. For simplicity take the Schrödinger equation for one particle with Cartesian coordinates \(q=(q_1,q_2,q_3)\ ,\)
\[\tag{57} -\frac{\hbar^2}{2m}\triangle_q\psi+V(q)\psi=i\hbar\psi_t\ . \]
The following story can be generalized to many interacting particles. Write the wave function in phase-amplitude form
\[\tag{58} \psi(q,t)=A(q,t)\exp\bigg[\frac{i}{\hbar}F(q,t)\bigg]\ , \]
where \(A\) and \(F\) are real. After substituting in (57) and separating real and imaginary parts one finds a pair of equations entirely equivalent to the Schrödinger equation,
\[\tag{59} \frac{1}{2m}\big(F_q\big)^2+V+F_t=\frac{\hbar^2}{2m}\frac{\triangle_qA}{A}\ , \ :\]
\[\tag{60} mA_t+A_q\cdot F_q+\frac{1}{2}A\triangle_qF=0\ . \]
The second equation can be recognized as the continuity equation of quantum mechanics, when stated in terms of \(\rho\ ,\) the probability density for finding a particle at \(q\ ,\) and \(\mathbf J\ ,\) the probability flux, where
\[\tag{61} \rho=|\psi|^2=A^2\ ,\quad {\mathbf J}={\rm Re}\bigg[\frac{\hbar}{im}\psi^*\psi_q\bigg]=\frac{1}{m}\rho F_q\ . \]
Then multiplication of (60) by \(2A/m\) yields the continuity equation
\[\tag{62} \rho_t+\nabla_q\cdot{\mathbf J}=0\ . \]
One hopes to retrieve some features of classical physics, if not all, by regarding Planck's constant as small. In the small-\(\hbar \) limit the right hand side of (59) is zero, and \(F\) satisfies the classical Hamilton-Jacobi equation
\[\tag{63} H(q,F_q)+F_t=0\ ,\quad H(q,p)=\frac{p^2}{2m}+V(q)\ . \]
Since \(H\) is time independent a solution is sought in the form \(F(q,P,t)=W(q,P)-Et\ ,\) with parameters \(P=(E,P_2,P_3)\ .\) Correspondingly, \(A(q,P)\) is assumed to be time-independent, so that the equations for \(W\) and \(A\) are
\[\tag{64} H(q,W_q)=E\ , \ :\]
\[\tag{65} \nabla_q\cdot(A^2W_q)=0\ . \]
Given a complete solution of (64) one must then solve (65), a linear PDE for \(A^2\) with variable coefficients. Thus one finds the zeroth-order semi-classical wave function
\[\tag{66} \psi_0(q,t)=A(q,P)\exp\bigg[\frac{i}{\hbar}(\ W(q,P)-Et\ )\bigg]\ . \]
It now appears that the general concept of phase velocity introduced above is just the conventional phase velocity for the matter wave of (66). Note also the beautiful expression \({\mathbf J}=\rho{\mathbf v}\) where \({\mathbf v}={\mathbf p}/m=W_q/m\) is the classical velocity.
In the case of one degree of freedom the construction of (66) can be worked out explicitly. The result is the lowest order WKB approximation presented in standard textbooks (Messiah, 1999). A striking feature is that imaginary solutions \(W\) are relevant, corresponding to quantum mechanical tunneling into classically forbidden regions of coordinate space. Also, for bound states the energy turns out to be quantized. Since important quantal features are retained even in the small-\(\hbar\) limit, the semi-classical theory should be carried beyond the one-dimensional case.
General semi-classical theory considers \(n\) degrees of freedom and non-separable systems, aiming for results in the sense of rigorous asymptotics for \(\hbar\rightarrow 0\ .\) Starting with a seminal paper of Keller (1958), the question of multi-dimensional quantization was reexamined (Percival (1977)), and the basic problem of how to do asymptotics in the neighborhood of caustics was attacked (Ludwig (1966)). In this endeavor the work of Maslov (1965, 1981) was prominent, along with that of other leading mathematicians (see Guillemin & Sternberg (1977) and citations therein). Similar asymptotic analysis applies to the wave equation for an inhomogeneous medium, in which case the limit is for small wavelength (geometrical optics), and the Hamilton-Jacobi equation is the eikonal equation as in Hamilton's original work (Born & Wolf, 1965).
Applications of semi-classical theory have been pursued extensively by physical chemists and physicists (Marcus, Miller, Martens, Ezra, Heller, Delos, Littlejohn, Gutzwiller, Klauder, Berry, Percival, et al.). A topic of great interest is highly excited states of atoms and molecules, for which semi-classical theory might be the best calculational approach. The influence of chaotic regions in classical motion is of course an interesting topic, much studied but perhaps still not fully understood (Gutzwiller (1991), Percival (1977)).
Numerical Methods
Numerical solution of the Hamilton-Jacobi equation is a powerful tool to attack complex problems in theoretical physics and engineering. There is a large literature on numerical methods, often applied to the case of eikonal equations, and ranging from classical approaches to generalized solutions of viscosity type (Lions (1982), Evans (2008)). It is perhaps fair to say that methods based on the method of characteristics are the most efficient and widely applicable. In such methods a large but finite set of orbits is computed, corresponding to various initial conditions. Some kind of interpolation procedure is then used to approximate the desired solution. For instance, to find a solution with \(K=0\) one could calculate Hamilton's principal function \(F_1(q,q_0,t)\) on available orbits, for \((q,q_0)\) on a finite mesh \(\{ q_i,q_{0j} \}\ .\) A \(C^2\) interpolation would then be used to define \(F_1\) at off-mesh points. In some important applications one has to do this for only one large value \(T\) of the time \(t\ ,\) since it is enough to follow the intersection of orbits with a surface of section encountered with period \(T\ .\) Such a construction is being pursued for the case of full-turn symplectic maps for circular particle accelerators, following earlier successes with methods in the same spirit (Warnock & Berg (1997), Warnock & Cai (2009)). In this example the Hamiltonian is very complicated, having thousands of terms, and there is a big cost advantage for stability studies in using a full turn map defined by a generator in place of computing separate orbits.
It is also possible to calculate invariant tori (solutions with \(K\ne 0\)) by interpolating data from single non-resonant orbits (Warnock & Ruth (1992)). This is done by fitting the formula (20) to a Fourier series, using values of \(I\) at the values of \(\phi\) where the orbit hits a surface of section. This proves to be much faster than more classical methods that hark back to perturbation theory (Warnock & Ruth (1987), Chapman, Garrett, & Miller (1976)), but it gives no direct control of the frequencies (winding numbers) of the torus constructed.
In most cases the method of characteristics requires a good symplectic integrator to follow orbits (Leimkuhler & Reich (2004)). The Hamilton-Jacobi equation is often used to derive such integrators, especially the implicit variety (Feng et al. (1989), Scovel & Channel (1990)).
Arnol'd, V. I., "Mathematical Methods of Classical Mechanics", (Springer, New York, 1974).
Benton, S. H., "The Hamilton-Jacobi Equation: A Global Approach", (Academic Press, New York, 1977).
Born, M. and Wolf, E., "Principles of Optics", (Pergamon Press, Oxford, 1965).
Carathéodory, C., "Geometrische Optik", (Springer, Berlin, 1937).
Carathéodory, C., "Calculus of Variations and Partial Differential Equations of the First Order", (Chelsea, New York, 1982).
Chapman, S., Garrett, B. C., and Miller, W. H., "Semiclassical Eigenvalues for Nonseparable Systems: Nonperturbative Solution of the Hamilton-Jacobi Equation in Action-Angle Variables", J. Chem. Phys. 64, 502-509 (1976).
Courant, R. and Hilbert, D., "Methods of Mathematical Physics, Vol. II", (Interscience, New York, 1962).
Erdelyi, B. and Berz, M., "Optimal Symplectic Approximation of Hamiltonian Flows", Phys. Rev. Lett. 87, 114302 (2001).
Evans, L., "Weak KAM theory and partial differential equations", in Calculus of Variations and Nonlinear Partial Differential Equations, pp. 123-154, Lecture Notes in Math. 1927 (Springer, Berlin, 2008).
Feng, K., Wu, H., Quin, M., and Wang, D., J. Comp. Math. 7, 71 (1989).
Fleming, W. H. and Rishel, R. "Deterministic and Stochastic Optimal Control", (Springer, Berlin, 1975).
Gallavotti, G., "The Elements of Mechanics", (Springer, New York, 1983).
Gantmacher, F., "Lectures in Analytical Mechanics", (MIR Publishers, Moscow, 1970).
Goldstein, H., "Classical Mechanics", (Addison-Wesley, Menlo Park, 1981).
Guillemin, V., and Sternberg, S., "Geometric Asymptotics", (Amer. Math. Soc., Providence, 1977).
Gutzwiller, M. C., "Chaos in Classical Mechanics", (Springer, New York, 1991).
Hamilton, W. R., "The Mathematical Papers of William Rowan Hamilton, Vol. I, Geometrical Optics, Vol.II, Dynamics" (Cambridge University Press, Cambridge, 1931), especially three Supplements (1830-1832) to the "Theory of Systems of Rays" (1827) in Vol.I, and "On a General Method in Dynamics" (1832) in Vol.II.
Jammer, M., "The Conceptual Development of Quantum Mechanics", (McGraw-Hill, New York, 1966).
Jacobi, C. G. J., "Vorlesungen über Dynamik", Königsberg lectures of 1842-1843, (reprinted by Chelsea Publishing Co., New York, 1969).
Keller, J., "Corrected Bohr-Sommerfeld Quantum Conditions for Nonseparable Systems", Ann. Physics 4, 100-188 (1958).
Leimkuhler, B. and Reich, S., "Simulating Hamiltonian Dynamics", (Cambridge U. Press, Cambridge, 2004).
Ludwig, D., "Uniform Asymptotic Expansions at a Caustic", Comm. Pure Appl. Math., 19, 215-250 (1966).
Lanczos, C., "The Variational Principles of Mechanics", (U. Toronto Press, Toronto, 1949).
Landau, L. D. and Lifshitz, E. M., "Mechanics", (Pergamon Press, Oxford, 1969).
Lions, P.-L., "Generalized Solutions of Hamilton-Jacobi Equations", (Pitman, Boston, 1982).
Martens, C. C. and Ezra, G. S., "Semi-classical Mechanics of Strongly Resonant Systems: a Fourier Transform Approach", J. Chem. Phys. 86, 279-307 (1987).
Maslov, V. P., "Perturbation Theory and Asymptotic Methods", (Moscow State U., Moscow, 1965).
Maslov, V. P. and Fedoriuk, M. V., "Semi-classical Approximation in Quantum Mechanics", (Reidel, Dordrecht, 1981).
Meyer, K., Hall, G., and Offin, D., "Introduction to Hamiltonian Dynamical Systems and the N-Body Problem", (Springer, New York, 2008).
Nekhoroshev, N. N., "An Exponential Estimate of the Time of Stability of Nearly Integrable Hamiltonian Systems", Russ. Math. Surveys 32, 6, 1-65 (1977).
Percival, I. C., "Semiclassical Theory of Bound States", in Advances in Chemical Physics 36 (Wiley, New York, 1977).
Perelomov, A. M., "Integrable Systems of Classical Mechanics and Lie Algebras" (Birkhäuser, Basel, 1990).
Pöschel, J., "Integrability of Hamiltonian Systems on Cantor Sets", Comm. Pure Appl. Math. 35, 653-695 (1982).
Scovel, C. and Channel, P., "Symplectic Integration of Hamiltonian Systems", Nonlinearity 3, 231-259 (1990).
Synge, J. L., "Geometrical Optics, an Introduction to Hamilton's Method", (Cambridge University Press, 1937).
Warnock, R. and Ruth, R. D., "Long Term Bounds on Nonlinear Hamiltonian Motion", Physica D 56 188-215 (1992).
Warnock, R. and Ruth, R. D., "Invariant Tori through Direct Solution of the Hamilton-Jacobi Equation", Physica D 26, 1-36 (1987).
Warnock, R. and Berg, J. S., "Fast Symplectic Mapping and Long-term Stability Near Broad Resonances", AIP Conf. Proc. 395 (Amer. Inst. Phys., 1997).
Warnock, R. and Cai, Y., "Construction of Large Period Symplectic Maps by Interpolative Methods", SLAC National Accelerator Laboratory report SLAC-PUB-13867 (2009), to be published in Proc. 10th International Computational Accelerator Physics Conference.
See also
Principle of least action
Personal tools
Focal areas |
ca16743945b13cfe | Common Compounds
Exam Guide
Construction Kits
Companion Notes
Just Ask Antoine!
Slide Index
Tutorial Index
Companion Notes
Atoms & ions
Chemical change
The mole
Energy & change
The quantum theory
Electrons in atoms
The periodic table
Home :Companion NotesPrint | Comment
Electrons in atoms
Learning Objectives
Lecture Notes
Internet sites and paper references for further exploration.
Frequently Asked Questions
Find an answer, or ask a question.
Learning objectives
• Explain the difference between a continuous spectrum and a line spectrum.
• Explain the difference between an emission and an absorption spectrum.
• Use the concept of quantized energy states to explain atomic line spectra.
• Given an energy level diagram, predict wavelengths in the line spectrum, and vice versa.
• Define and distinguish between shells, subshells, and orbitals.
• Explain the relationships between the quantum numbers.
• Use quantum numbers to label electrons in atoms.
• Describe and compare atomic orbitals given the n and ell quantum numbers.
• List a set of subshells in order of increasing energy.
• Write electron configurations* for atoms in either the subshell or orbital box notations.
• Write electron configurations of ions.
• Use electron configurations to predict the magnetic properties of atoms.
Lecture outline
The quantum theory was used to show how the wavelike behavior of electrons leads to quantized energy states when the electrons are bound or trapped. In this section, we'll use the quantum theory to explain the origin of spectral lines and to describe the electronic structure of atoms.
Emission Spectra
• experimental key to atomic structure: analyze light emitted by high temperature gaseous elements
• experimental setup: spectroscopy
• atoms emit a characteristic set of discrete wavelengths- not a continuous spectrum!
• atomic spectrum can be used as a "fingerprint" for an element
• hypothesis: if atoms emit only discrete wavelengths, maybe atoms can have only discrete energies
• an analogy
A turtle sitting on a ramp can have any height above the ground- and so, any potential energy
A turtle sitting on a staircase can take on only certain discrete energies
• energy is required to move the turtle up the steps (absorption)
• energy is released when the turtle moves down the steps (emission)
• only discrete amounts of energy are absorbed or released (energy is said to be quantized)
• energy staircase diagram for atomic hydrogen
• bottom step is called the ground state
• higher steps are called excited states
• computing line wavelengths using the energy staircase diagram
• computing energy steps from wavelengths in the line spectrum
• summary: line spectra arise from transitions between discrete (quantized) energy states
The quantum mechanical atom
• Electrons in atoms have quantized energies
• Electrons in atoms are bound to the nucleus by electrostatic attraction
• Electron waves are standing matter waves
• standing matter waves have quantized energies, as with the "electron on a wire" model
• Electron standing matter waves are 3 dimensional
• The electron on a wire model was one dimensional; one quantum number was required to describe the state of the electron
• A 3D model requires three quantum numbers
• A three-dimensional standing matter wave that describes the state of an electron in an atom is called an atomic orbital
• The energies and mathematical forms of the orbitals can be computed using the Schrödinger equation
• quantization isn't assumed; it arises naturally in solution of the equation
• every electron adds 3 variables (x, y, z) to the equation; it's very hard to solve equations with lots of variables.
• energy-level separations computed with the Schrödinger equation agree very closely with those computed from atomic spectral lines
Quantum numbers
• Think of the quantum numbers as addresses for electrons
• the principal quantum number, n
• determines the size of an orbital (bigger n = bigger orbitals)
• largely determines the energy of the orbital (bigger n = higher energy)
• can take on integer values n = 1, 2, 3, ...,
• all electrons in an atom with the same value of n are said to belong to the same shell
• spectroscopists use the following names for shells
Spectroscopist's notation for shells*.
nshell name nshell name
1K 5O
2L 6P
3M 7Q
• the azimuthal quantum number, ell
• designates the overall shape of the orbital within a shell
• affects orbital energies (bigger ell = higher energy)
• all electrons in an atom with the same value of ell are said to belong to the same subshell
• only integer values between 0 and n-1 are allowed
• sometimes called the orbital angular momentum quantum number
• spectroscopists use the following notation for subshells
Spectroscopist's notation for subshells*.
ellsubshell name
• the magnetic quantum number, mell
• determines the orientation of orbitals within a subshell
• does not affect orbital energy (except in magnetic fields!)
• only integer values between -ell and +ell are allowed
• the number of mell values within a subshell is the number of orbitals within a subshell
The number of possible mell values determines the number of orbitals* in a subshell.
ell possible values of mell number of orbitals in this subshell
0 0 1
1 -1, 0, +1 3
2 -2, -1, 0, +1, +2 5
• the spin quantum number, ms
• several experimental observations can be explained by treating the electron as though it were spinning
• spin makes the electron behave like a tiny magnet
• spin can be clockwise or counterclockwise
• spin quantum number can have values of +1/2 or -1/2
Electron configurations of atoms
• a list showing how many electrons are in each orbital or subshell in an atom or ion
• subshell notation: list subshells of increasing energy, with number of electrons in each subshell as a superscript
• examples
• 1s2 2s2 2p5 means "2 electrons in the 1s subshell, 2 electrons in the 2s subshell, and 5 electrons in the 2p subshell"
• 1s2 2s2 2p6 3s2 3p3 is an electron configuration with 15 electrons total; 2 electrons have n=1 (in the 1s subshell); 8 electrons have n=2 (2 in the 2s subshell, and 6 in the 2p subshell); and 5 electrons have n=3 (2 in the 3s subshell, and 3 in the 3p subshell).
• ground state* configurations fill the lowest energy orbitals first
Electron configurations of the first 11 elements, in subshell notation. Notice how configurations can be built by adding one electron at a time.
atomZground state electronic configuration
H 1 1s1
He 2 1s2
Li 3 1s2 2s1
Be 4 1s2 2s2
B 5 1s2 2s2 2p1
C 6 1s2 2s2 2p2
N 7 1s2 2s2 2p3
O 8 1s2 2s2 2p4
F 9 1s2 2s2 2p5
Ne 10 1s2 2s2 2p6
Na 11 1s2 2s2 2p6 3s1
Writing electron configurations
• strategy: start with hydrogen, and build the configuration one electron at a time (the Aufbau principle*)
1. fill subshells in order by counting across periods, from hydrogen up to the element of interest:
Filling order of subshells from the periodic table
2. rearrange subshells (if necessary) in order of increasing n & l
• examples: Give the ground state electronic configurations for:
• Al
• Fe
• Ba
• Hg
• watch out for d & f block elements; orbital interactions cause exceptions to the Aufbau principle
• half-filled and completely filled d and f subshells have extra stability
Know these exceptions to the Aufbau principle in the 4th period. (There are many others at the bottom of the table, but don't worry about them now.)
exception configuration predicted by the Aufbau principle true ground state configuration
Cr 1s2 2s2 2p6 3s2 3p6 3d4 4s2 1s2 2s2 2p6 3s2 3p6 3d5 4s1
Cu 1s2 2s2 2p6 3s2 3p6 3d9 4s2 1s2 2s2 2p6 3s2 3p6 3d10 4s1
Electron configurations including spin
• unpaired electrons give atoms (and molecules) special magnetic and chemical properties
• when spin is of interest, count unpaired electrons using orbital box diagrams
Examples of ground state electron configurations in the orbital box notation that shows electron spins.
atomorbital box diagram
• drawing orbital box diagrams
1. write the electron configuration in subshell notation
2. draw a box for each orbital.
• Remember that s, p, d, and f subshells contain 1, 3, 5, and 7 degenerate* orbitals, respectively.
• Remember that an orbital can hold 0, 1, or 2 electrons only, and if there are two electrons in the orbital, they must have opposite (paired) spins (Pauli principle*)
3. within a subshell (depicted as a group of boxes), spread the electrons out and line up their spins as much as possible (Hund's rule*)
• the number of unpaired electrons can be counted experimentally
• configurations with unpaired electrons are attracted to magnetic fields (paramagnetism*)
• configurations with only paired electrons are weakly repelled by magnetic fields (diamagnetism*)
Core and valence electrons
• chemistry involves mostly the shell* with the highest value of principal quantum number*, n, called the valence shell*
• the noble gas core* under the valence shell is chemically inert
• simplify the notation for electron configurations by replacing the core with a noble gas symbol in square brackets:
Examples of electron configurations written with the core/valence notation
atom full configuration core valence configuration full configuration using core/valence notation
O 1s2 2s2 2p4 He 2s2 2p4 [He] 2s2 2p4
Cl 1s2 2s2 2p6 3s2 3p5 Ne 3s2 3p5 [Ne] 3s2 3p5
Al 1s2 2s2 2p6 3s2 3p1 Ne 3s2 3p1 [Ne] 3s2 3p1
• electrons in d and f subshells outside the noble gas core are called pseudocore electrons
Examples of electron configurations containing pseudocore electrons
atom core pseudocore valence full configuration
Fe Ar 3d6 4s2 [Ar] 3d6 4s2
Sn Kr 4d10 5s2 5p2 [Kr] 4d10 5s2 5p2
Hg Xe 4f14 5d10 6s2 [Xe] 4f14 5d10 6s2
Pu Rn 5f6 7s2 [Rn] 5f6 7s2
Sign up for a free monthly
newsletter describing updates,
new features, and changes
on this site.
General Chemistry Online! Electrons in atoms
Copyright © 1997-2005 by Fred Senese
Comments & questions to fsenese@frostburg.edu |
39a2c631c803a2b8 | Density matrix
From Wikipedia, the free encyclopedia
(Redirected from Von Neumann equation)
Jump to: navigation, search
A density matrix is a matrix that describes a quantum system in a mixed state, a statistical ensemble of several quantum states. This should be contrasted with a single state vector that describes a quantum system in a pure state. The density matrix is the quantum-mechanical analogue to a phase-space probability measure (probability distribution of position and momentum) in classical statistical mechanics.
Mixed states arise in situations where the experimenter does not know which particular states are being manipulated. Examples include a system in thermal equilibrium (or additionally chemical equilibrium) or a system with an uncertain or randomly varying preparation history (so one does not know which pure state the system is in). Also, if a quantum system has two or more subsystems that are entangled, then each subsystem must be treated as a mixed state even if the complete system is in a pure state.[1] The density matrix is also a crucial tool in quantum decoherence theory.
The density matrix is a representation of a linear operator called the density operator. The density matrix is obtained from the density operator by choice of basis in the underlying space. In practice, the terms density matrix and density operator are often used interchangeably. Both matrix and operator are self-adjoint (or Hermitian), positive semi-definite, of trace one, and may be infinite-dimensional.[2]
The formalism of density operators and matrices was introduced by John von Neumann[3] in 1927 and independently, but less systematically by Lev Landau[4] and Felix Bloch[5] in 1927 and 1946 respectively.
Pure and mixed states[edit]
In quantum mechanics, the state of a quantum system is represented by a state vector (or ket) . A quantum system with a state vector is called a pure state. However, it is also possible for a system to be in a statistical ensemble of different state vectors: For example, there may be a 50% probability that the state vector is and a 50% chance that the state vector is . This system would be in a mixed state. The density matrix is especially useful for mixed states, because any state, pure or mixed, can be characterized by a single density matrix.
A mixed state is different from a quantum superposition. The probabilities in a mixed state are classical probabilities (as in the probabilities one learns in classic probability theory / statistics), unlike the quantum probabilites in a quantum superposition. In fact, a quantum superposition of pure states is another pure state, for example . In this case, the coefficients are not probabilities, but rather probability amplitudes.
Example: Light polarization[edit]
The incandescent light bulb (1) emits completely random polarized photons (2) with mixed state density matrix
After passing through vertical plane polarizer (3), the remaining photons are all vertically polarized (4) and have pure state density matrix
An example of pure and mixed states is light polarization. Photons can have two helicities, corresponding to two orthogonal quantum states, (right circular polarization) and (left circular polarization). A photon can also be in a superposition state, such as (vertical polarization) or (horizontal polarization). More generally, it can be in any state (with ), corresponding to linear, circular, or elliptical polarization. If we pass polarized light through a circular polarizer which allows either only polarized light, or only polarized light, intensity would be reduced by half in both cases. This may make it seem like half of the photons are in state and the other half in state . But this is not correct: Both and photons are partly absorbed by a vertical linear polarizer, but the light will pass through that polarizer with no absorption whatsoever.
However, unpolarized light (such as the light from an incandescent light bulb) is different from any state like (linear, circular, or elliptical polarization). Unlike linearly or elliptically polarized light, it passes through a polarizer with 50% intensity loss whatever the orientation of the polarizer; and unlike circularly polarized light, it cannot be made linearly polarized with any wave plate because randomly oriented polarization will emerge from a wave plate with random orientation. Indeed, unpolarized light cannot be described as any state of the form in a definite sense. However, unpolarized light can be described with ensemble averages, e.g. that each photon is either with 50% probability or with 50% probability. The same behavior would occur if each photon was either vertically polarized with 50% probability or horizontally polarized with 50% probability.
Therefore, unpolarized light cannot be described by any pure state, but can be described as a statistical ensemble of pure states in at least two ways (the ensemble of half left and half right circularly polarized, or the ensemble of half vertically and half horizontally linearly polarized). These two ensembles are completely indistinguishable experimentally, and therefore they are considered the same mixed state. One of the advantages of the density matrix is that there is just one density matrix for each mixed state, whereas there are many statistical ensembles of pure states for each mixed state. Nevertheless, the density matrix contains all the information necessary to calculate any measurable property of the mixed state.
Where do mixed states come from? To answer that, consider how to generate unpolarized light. One way is to use a system in thermal equilibrium, a statistical mixture of enormous numbers of microstates, each with a certain probability (the Boltzmann factor), switching rapidly from one to the next due to thermal fluctuations. Thermal randomness explains why an incandescent light bulb, for example, emits unpolarized light. A second way to generate unpolarized light is to introduce uncertainty in the preparation of the system, for example, passing it through a birefringent crystal with a rough surface, so that slightly different parts of the beam acquire different polarizations. A third way to generate unpolarized light uses an EPR setup: A radioactive decay can emit two photons traveling in opposite directions, in the quantum state . The two photons together are in a pure state, but if you only look at one of the photons and ignore the other, the photon behaves just like unpolarized light.
More generally, mixed states commonly arise from a statistical mixture of the starting state (such as in thermal equilibrium), from uncertainty in the preparation procedure (such as slightly different paths that a photon can travel), or from looking at a subsystem entangled with something else.
Mathematical description[edit]
The state vector of a pure state completely determines the statistical behavior of a measurement. For concreteness, take an observable quantity, and let A be the associated observable operator that has a representation on the Hilbert space of the quantum system. For any real-valued, analytical function F defined on the real numbers,[6] suppose that F(A) is the result of applying F to the outcome of a measurement. The expectation value of F(A) is
Now consider a mixed state prepared by statistically combining two different pure states and , with the associated probabilities p and 1 − p, respectively. The associated probabilities mean that the preparation process for the quantum system ends in the state with probability p and in the state with probability 1 − p.
It is not hard to show that the statistical properties of the observable for the system prepared in such a mixed state are completely determined. However, there is no state vector which determines this statistical behaviour in the sense that the expectation value of F(A) is
Nevertheless, there is a unique operator ρ such that the expectation value of F(A) can be written as
where the operator ρ is the density operator of the mixed system. A simple calculation shows that the operator ρ for the above discussion is given by
For the above example of unpolarized light, the density operator is
For a finite-dimensional function space, the most general density operator is of the form
where the coefficients pj are non-negative and add up to one. This represents a statistical mixture of pure states. If the given system is closed, then one can think of a mixed state as representing a single system with an uncertain preparation history, as explicitly detailed above; or we can regard the mixed state as representing an ensemble of systems, i.e. a large number of copies of the system in question, where pj is the proportion of the ensemble being in the state . An ensemble is described by a pure state if every copy of the system in that ensemble is in the same state, i.e. it is a pure ensemble. If the system is not closed, however, then it is simply not correct to claim that it has some definite but unknown state vector, as the density operator may record physical entanglements to other systems.
Consider a quantum ensemble of size N with occupancy numbers n1, n2,...,nk corresponding to the orthonormal states , respectively, where n1+...+nk = N, and, thus, the coefficients pj = nj /N. For a pure ensemble, where all N particles are in state , we have nj = 0, for all ji, from which we recover the corresponding density operator . However, the density operator of a mixed state does not capture all the information about the ingredients that went into the mixture; in particular, the coefficients pj and the kets ψj are not recoverable from the operator ρ without additional information. This non-uniqueness implies that different ensembles or mixtures may correspond to the same density operator. Such equivalent ensembles or mixtures cannot be distinguished by measurement of observables alone. This equivalence can be characterized precisely. Two ensembles ψ, ψ' define the same density operator if and only if there is a matrix U with
i.e., U is unitary and such that
This is simply a restatement of the following fact from linear algebra: for two square matrices M and N, M M* = N N* if and only if M = NU for some unitary U. (See square root of a matrix for more details.) Thus there is a unitary freedom in the ket mixture or ensemble that gives the same density operator. However, if the kets making up the mixture are restricted to be orthonormal, then the original probabilities pj are recoverable as the eigenvalues of the density matrix.
In operator language, a density operator is a positive semidefinite, hermitian operator of trace 1 acting on the state space.[7] A density operator describes a pure state if it is a rank one projection. Equivalently, a density operator ρ describes a pure state if and only if
i.e. the state is idempotent. This is true regardless of whether H is finite-dimensional or not.
Geometrically, when the state is not expressible as a convex combination of other states, it is a pure state.[8] The family of mixed states is a convex set and a state is pure if it is an extremal point of that set.
It follows from the spectral theorem for compact self-adjoint operators that every mixed state is a countable convex combination of pure states. This representation is not unique. Furthermore, a theorem of Andrew Gleason states that certain functions defined on the family of projections and taking values in [0,1] (which can be regarded as quantum analogues of probability measures) are determined by unique mixed states. See quantum logic for more details.
Let A be an observable of the system, and suppose the ensemble is in a mixed state such that each of the pure states occurs with probability pj. Then the corresponding density operator is:
The expectation value of the measurement can be calculated by extending from the case of pure states (see Measurement in quantum mechanics):
where denotes trace. Moreover, if A has spectral resolution
where , the corresponding density operator after the measurement is given by:
Note that the above density operator describes the full ensemble after measurement. The sub-ensemble for which the measurement result was the particular value ai is described by the different density operator
This is true assuming that is the only eigenket (up to phase) with eigenvalue ai; more generally, Pi in this expression would be replaced by the projection operator into the eigenspace corresponding to eigenvalue ai.
The von Neumann entropy of a mixture can be expressed in terms of the eigenvalues of or in terms of the trace and logarithm of the density operator . Since is a positive semi-definite operator, it has a spectral decomposition such that where are orthonormal vectors, and . Then the entropy of a quantum system with density matrix is
Also it can be shown that
when have orthogonal support, where is the Shannon entropy. This entropy can increase but never decrease with a projective measurement, however generalised measurements can decrease entropy.[9][10] The entropy of a pure state is zero, while that of a proper mixture always greater than zero. Therefore, a pure state may be converted into a mixture by a measurement, but a proper mixture can never be converted into a pure state. Thus the act of measurement induces a fundamental irreversible change on the density matrix; this is analogous to the "collapse" of the state vector, or wavefunction collapse. Perhaps counterintuitively, the measurement actually decreases information by erasing quantum interference in the composite system—cf. quantum entanglement, einselection, and quantum decoherence.
(A subsystem of a larger system can be turned from a mixed to a pure state, but only by increasing the von Neumann entropy elsewhere in the system. This is analogous to how the entropy of an object can be lowered by putting it in a refrigerator: The air outside the refrigerator's heat-exchanger warms up, gaining even more entropy than was lost by the object in the refrigerator. See second law of thermodynamics. See Entropy in thermodynamics and information theory.)
The von Neumann equation for time evolution[edit]
Just as the Schrödinger equation describes how pure states evolve in time, the von Neumann equation (also known as the Liouville–von Neumann equation) describes how a density operator evolves in time (in fact, the two equations are equivalent, in the sense that either can be derived from the other.) The von Neumann equation dictates that[11][12]
where the brackets denote a commutator.
Note that this equation only holds when the density operator is taken to be in the Schrödinger picture, even though this equation seems at first look to emulate the Heisenberg equation of motion in the Heisenberg picture, with a crucial sign difference:
where is some Heisenberg picture operator; but in this picture the density matrix is not time-dependent, and the relative sign ensures that the time derivative of the expected value comes out the same as in the Schrödinger picture.
Taking the density operator to be in the Schrödinger picture makes sense, since it is composed of 'Schrödinger' kets and bras evolved in time, as per the Schrödinger picture. If the Hamiltonian is time-independent, this differential equation can be easily solved to yield
For a more general Hamiltonian, if is the wavefunction propagator over some interval, then the time evolution of the density matrix over that same interval is given by
"Quantum Liouville", Moyal's equation[edit]
The density matrix operator may also be realized in phase space. Under the Wigner map, the density matrix transforms into the equivalent Wigner function,
The equation for the time-evolution of the Wigner function is then the Wigner-transform of the above von Neumann equation,
where H(q,p) is the Hamiltonian, and { { •,• } } is the Moyal bracket, the transform of the quantum commutator.
The evolution equation for the Wigner function is then analogous to that of its classical limit, the Liouville equation of classical physics. In the limit of vanishing Planck's constant ħ, W(q,p,t) reduces to the classical Liouville probability density function in phase space.
The classical Liouville equation can be solved using the method of characteristics for partial differential equations, the characteristic equations being Hamilton's equations. The Moyal equation in quantum mechanics similarly admits formal solutions in terms of quantum characteristics, predicated on the ∗−product of phase space, although, in actual practice, solution-seeking follows different methods.
Composite systems[edit]
The joint density matrix of a composite system of two systems A and B is described by . Then the subsystems are described by their reduced density operator.
is called partial trace over system B. If A and B are two distinct and independent systems then which is a product state.
C*-algebraic formulation of states[edit]
It is now generally accepted that the description of quantum mechanics in which all self-adjoint operators represent observables is untenable.[13][14] For this reason, observables are identified with elements of an abstract C*-algebra A (that is one without a distinguished representation as an algebra of operators) and states are positive linear functionals on A. However, by using the GNS construction, we can recover Hilbert spaces which realize A as a subalgebra of operators.
Geometrically, a pure state on a C*-algebra A is a state which is an extreme point of the set of all states on A. By properties of the GNS construction these states correspond to irreducible representations of A.
The states of the C*-algebra of compact operators K(H) correspond exactly to the density operators, and therefore the pure states of K(H) are exactly the pure states in the sense of quantum mechanics.
The C*-algebraic formulation can be seen to include both classical and quantum systems. When the system is classical, the algebra of observables become an abelian C*-algebra. In that case the states become probability measures, as noted in the introduction.
See also[edit]
Notes and references[edit]
1. ^ Hall, B.C. (2013), Quantum Theory for Mathematicians, Graduate Texts in Mathematics, 267, Springer, p. 419, ISBN 978-1-4614-7115-8, doi:10.1007/978-1-4614-7116-5
2. ^ Fano, Ugo (1957), "Description of States in Quantum Mechanics by Density Matrix and Operator Techniques", Reviews of Modern Physics, 29: 74–93, Bibcode:1957RvMP...29...74F, doi:10.1103/RevModPhys.29.74.
3. ^ von Neumann, John (1927), "Wahrscheinlichkeitstheoretischer Aufbau der Quantenmechanik", Göttinger Nachrichten, 1: 245–272
4. ^ Schlüter, Michael and Lu Jeu Sham (1982), "Density functional theory", Physics Today, 35 (2): 36, Bibcode:1982PhT....35b..36S, doi:10.1063/1.2914933
6. ^ Technically, F must be a Borel function
7. ^ Hall, B.C. (2013), Quantum Theory for Mathematicians, Graduate Texts in Mathematics, 267, Springer, p. 423, ISBN 978-1-4614-7115-8, doi:10.1007/978-1-4614-7116-5
8. ^ Hall, B.C. (2013), Quantum Theory for Mathematicians, Graduate Texts in Mathematics, 267, Springer, p. 439, ISBN 978-1-4614-7115-8, doi:10.1007/978-1-4614-7116-5
9. ^ Nielsen, Michael; Chuang, Isaac (2000), Quantum Computation and Quantum Information, Cambridge University Press, ISBN 978-0-521-63503-5 . Chapter 11: Entropy and information, Theorem 11.9, "Projective measurements cannot decrease entropy"
10. ^ Everett, Hugh (1973), "The Theory of the Universal Wavefunction (1956) Appendix I. "Monotone decrease of information for stochastic processes"", The Many-Worlds Interpretation of Quantum Mechanics, Princeton Series in Physics, Princeton University Press, pp. 128–129, ISBN 978-0-691-08131-1
11. ^ Breuer, Heinz; Petruccione, Francesco (2002), The theory of open quantum systems, p. 110, ISBN 978-0-19-852063-4
12. ^ Schwabl, Franz (2002), Statistical mechanics, p. 16, ISBN 978-3-540-43163-3
13. ^ See appendix, Mackey, George Whitelaw (1963), Mathematical Foundations of Quantum Mechanics, Dover Books on Mathematics, New York: Dover Publications, ISBN 978-0-486-43517-6
14. ^ Emch, Gerard G. (1972), Algebraic methods in statistical mechanics and quantum field theory, Wiley-Interscience, ISBN 978-0-471-23900-0 |
d9e2609a9ff80961 | Saturday, November 14, 2015
Microtubules and Consciousness
Roger Penrose, PhD, OM, FRS1, and Stuart Hameroff, MD2
1Emeritus Rouse Ball Professor, Mathematical Institute, Emeritus Fellow, Wadham College,
University of Oxford, Oxford, UK
2Professor, Anesthesiology and Psychology, Director, Center for Consciousness Studies, The University of Arizona, Tucson, Arizona, USA
The nature of consciousness, its occurrence in the brain, and its ultimate place in the universe are unknown. We proposed in the mid 1990's that consciousness depends on biologically 'orchestrated' quantum computations in collections of microtubules within brain neurons, that these quantum computations correlate with and regulate neuronal activity, and that the continuous Schrödinger evolution of each quantum computation terminates in accordance with the specific Diósi–Penrose (DP) scheme of 'objective reduction' of the quantum state (OR). This orchestrated OR activity (Orch OR) is taken to result in a moment of conscious awareness and/or choice. This particular (DP) form of OR is taken to be a quantum-gravity process related to the fundamentals of spacetime geometry, so Orch OR suggests a connection between brain biomolecular processes and fine-scale structure of the universe. Here we review and update Orch OR in light of criticisms and developments in quantum biology, neuroscience, physics and cosmology. We conclude that consciousness plays an intrinsic role in the universe. KEY WORDS: Consciousness, microtubules, OR, Orch OR, quantum computation, quantum gravity
1. Introduction: Consciousness, Brain and Evolution
Consciousness implies awareness: subjective experience of internal and external phenomenal worlds. Consciousness is central also to understanding, meaning and volitional choice with the experience of free will. Our views of reality, of the universe, of ourselves depend on consciousness. Consciousness defines our existence.
(A) Consciousness is not an independent quality but arose as a natural evolutionary consequence of the biological adaptation of brains and nervous systems. The most popular scientific view is that consciousness emerged as a property of complex biological computation during the course of evolution. Opinions vary as to when, where and how consciousness appeared, e.g. only recently in humans, or earlier in lower organisms. Consciousness as evolutionary adaptation is commonly assumed to be epiphenomenal (i.e. a secondary effect without independent influence), though it is frequently argued to confer beneficial advantages to conscious species (Dennett, 1991; 1995; Wegner, 2002).
(B) Consciousness is a quality that has always been in the universe. Spiritual and religious approaches assume consciousness has been in the universe all along, e.g. as the 'ground of being', 'creator' or component of an omnipresent 'God'. Panpsychists attribute consciousness to all matter. Idealists contend consciousness is all that exists, the material world an illusion (Kant, 1781).
(C) Precursors of consciousness have always been in the universe; biology evolved a mechanism to convert conscious precursors to actual consciousness. This is the view implied by Whitehead (1929; 1933) and taken in the Penrose-Hameroff theory of 'orchestrated objective reduction' ('Orch OR'). Precursors of consciousness, presumably with proto-experiential qualities, are proposed to exist as the potential ingredients of actual consciousness, the physical basis of these proto-conscious elements not necessarily being part of our current theories of the laws of the universe (Penrose and Hameroff, 1995; Hameroff and Penrose, 1996a; 1996b).
2. Ideas for how consciousness arises from brain action
How does the brain produce consciousness? An enormous amount of detailed knowledge about brain function has accrued; however the mechanism by which the brain produces consciousness remains mysterious (Koch, 2004). The prevalent scientific view is that consciousness somehow emerges from complex computation among simple neurons which each receive and integrate synaptic inputs to a threshold for bit-like firing. The brain as a network of 1011 'integrate-and-fire' neurons computing by bit-like firing and variable-strength chemical synapses is the standard model for computer simulations of brain function, e.g. in the field of artificial intelligence ('AI').
The brain-as-computer view can account for non-conscious cognitive functions including much of our mental processing and control of behavior. Such non-conscious cognitive processes are deemed 'zombie modes', 'auto-pilot', or 'easy problems'. The 'hard problem' (Chalmers, 1996) is the question of how cognitive processes are accompanied or driven by phenomenal conscious experience and subjective feelings, referred to by philosophers as 'qualia'. Other issues also suggest the brain-as-computer view may be incomplete, and that other approaches are required. The conventional brain-as-computer view fails to account for:
The 'hard problem' Distinctions between conscious and non-conscious processes are not addressed; consciousness is assumed to emerge at a critical level (neither specified nor testable) of computational complexity mediating otherwise non-conscious processes.
'Non-computable' thought and understanding, e.g. as shown by Gödel's theorem (Penrose, 1989; 1994).
'Binding and synchrony', the problem of how disparate neuronal activities are bound into unified conscious experience, and how neuronal synchrony, e.g. gamma synchrony EEG (30 to 90 Hz), the best measurable correlate of consciousness does not derive from neuronal firings.
Causal efficacy of consciousness and any semblance of free will. Because measurable brain activity corresponding to a stimulus often occurs after we've responded (seemingly consciously) to that stimulus, the brain-as-computer view depicts consciousness as epiphenomenal illusion (Dennett, 1991; 1995; Wegner, 2002).
Cognitive behaviors of single cell organisms. Protozoans like Paramecium can swim, find food and mates, learn, remember and have sex, all without synaptic computation (Sherrington, 1957).
In the 1980s Penrose and Hameroff (separately) began to address these issues, each against the grain of mainstream views. 3. Microtubules as Biomolecular Computers
Hameroff had been intrigued by seemingly intelligent, organized activities inside cells, accomplished by protein polymers called microtubules (Hameroff and Watt, 1982; Hameroff, 1987). Major components of the cell's structural cytoskeleton, microtubules also accounted for precise separation of chromosomes in cell division, complex behavior of Paramecium, and regulation of synapses within brain neurons (Figure 1). The intelligent function and periodic lattice structure of microtubules suggested they might function as some type of biomolecular computer.
Microtubules are self-assembling polymers of the peanut-shaped protein dimer tubulin, each tubulin dimer (110,000 atomic mass units) being composed of an alpha and beta monomer (Figure 2). Thirteen linear tubulin chains ('protofilaments') align side-to-side to form hollow microtubule cylinders (25 nanometers diameter) with two types of hexagonal lattices. The A-lattice has multiple winding patterns which intersect on protofilaments at specific intervals matching the Fibonacci series found widely in nature and possessing a helical symmetry (Section 9), suggestively sympathetic to large-scale quantum processes.
Figure 1. Schematic of portions of two neurons. A terminal axon (left) forms a synapse with a dendritic spine of a second neuron (right). Interiors of both neurons show cytoskeletal structures including microtubules, actin and microtubule-associated proteins (MAPs). Dendritic microtubules are arrayed in mixed polarity local networks, interconnected by MAPs. Synaptic inputs are conveyed to dendritic microtubules by ion flux, actin filaments, second messengers (e.g. CaMKII, see Hameroff et al, 2010) and MAPs. Along with actin and other cytoskeletal structures, microtubules establish cell shape, direct growth and organize function of cells including brain neurons. Various types of microtubule-associated proteins ('MAPs') bind at specific lattice sites and bridge to other microtubules, defining cell architecture like girders and beams in a building. One such MAP is tau, whose displacement from microtubules results in neurofibrillary tangles and the cognitive dysfunction of Alzheimer's disease (Brunden et al, 2011). Motor proteins (dynein, kinesin) move rapidly along microtubules, transporting cargo molecules to specific locations.
Figure 2. Left: Portion of single microtubule composed of tubulin dimer proteins (black and white) in A-lattice configuration. Right, top: According to pre-Orch OR microtubule automata theory (e.g. Hameroff and Watt, 1982; Rasmussen et al, 1990), each tubulin in a microtubule lattice switches between alternate (black and white) 'bit' states, coupled to electron cloud dipole London forces in internal hydrophobic pocket. Right, bottom: According to Orch OR, each tubulin can also exist as quantum superposition (quantum bit, or 'qubit') of both states, coupled to superposition of London force dipoles in hydrophobic pocket.
Microtubules also fuse side-by-side in doublets or triplets. Nine such doublets or triplets then align to form barrel-shaped mega-cylinders called cilia, flagella and centrioles, organelles responsible for locomotion, sensation and cell division. Either individually or in these larger arrays, microtubules are responsible for cellular and intra-cellular movements requiring intelligent spatiotemporal organization. Microtubules have a lattice structure comparable to computational systems. Could microtubules process information?
The notion that microtubules process information was suggested in general terms by Sherrington (1957) and Atema (1973). With physicist colleagues through the 1980s, Hameroff developed models of microtubules as information processing devices, specifically molecular ('cellular') automata, self-organizing computational devices (Figure 3). Cellular automata are computational systems in which fundamental units, or 'cells' in a grid or lattice can each exist in specific states, e.g. 1 or 0, at a given time (Wolfram, 2002). Each cell interacts with its neighbor cells at discrete, synchronized time steps, the state of each cell at any particular time step determined by its state and its neighbor cell states at the previous time step, and rules governing the interactions. In such ways, using simple neighbor interactions in simple lattice grids, cellular automata can perform complex computation and generate complex patterns.
Cells in cellular automata are meant to imply fundamental units. But biological cells are not necessarily simple, as illustrated by the clever Paramecium. Molecular automata are cellular automata in which the fundamental units, bits or cells are states of molecules, much smaller than biological cells. A dynamic, interactive molecular grid or lattice is required.
Microtubules are lattices of tubulin dimers which Hameroff and colleagues modeled as molecular automata. Discrete states of tubulin were suggested to act as bits, switching between states, and interacting (via dipole-dipole coupling) with neighbor tubulin bit states in 'molecular automata' computation (Hameroff and Watt, 1982; Rasmussen et al., 1990; Tuszynski et al., 1995). The mechanism for bit-like switching at the level of each tubulin was proposed to depend on the van der Waals–London force in non-polar, water-excluding regions ('hydrophobic pockets') within each tubulin.
Proteins are largely heterogeneous arrays of amino acid residues, including both water-soluble polar and water-insoluble non-polar groups, the latter including phenylalanine and tryptophan with electron resonance clouds (e.g. phenyl and indole rings). Such non-polar groups coalesce during protein folding to form homogeneous water-excluding 'hydrophobic' pockets within which instantaneous dipole couplings between nearby electron clouds operate. These are London forces which are extremely weak but numerous and able to act collectively in hydrophobic regions to influence and determine protein state (Voet and Voet, 1995).
London forces in hydrophobic pockets of various neuronal proteins are the mechanisms by which anesthetic gases selectively erase consciousness (Franks and Lieb, 1984). Anesthetics bind by their own London force attractions with electron clouds of the hydrophobic pocket, presumably impairing normally-occurring London forces governing protein switching required for consciousness (Hameroff, 2006).
In Figure 2, and as previously used in Orch OR, London forces are illustrated in cartoon fashion. A single hydrophobic pocket is depicted in tubulin, with portions of two electron resonance rings in the pocket. Single electrons in each ring repel each other, as their electron cloud net dipole flips (London force oscillation). London forces in hydrophobic pockets were used as the switching mechanism to distinguish discrete states for each tubulin in microtubule automata. In recent years tubulin hydrophobic regions and switching in the Orch OR proposal that we describe below have been clarified and updated (see Section 8).
To synchronize discrete time steps in microtubule automata, tubulins in microtubules were assumed to oscillate synchronously in a manner proposed by Fröhlich for biological coherence. Biophysicist Herbert Fröhlich (1968; 1970; 1975) had suggested that biomolecular dipoles constrained in a common geometry and voltage field would oscillate coherently, coupling, or condensing to a common vibrational mode. He proposed that biomolecular dipole lattices could convert ambient energy to coherent, synchronized dipole excitations, e.g. in the gigahertz (109 s−1) frequency range. Fröhlich coherence or condensation can be either quantum coherence (e.g. Bose-Einstein condensation) or classical synchrony (Reimers et al., 2009).
In recent years coherent excitations have been found in living cells emanating from microtubules at 8 megahertz (Pokorny et al., 2001; 2004). Bandyopadhyay (2011) has found a series of coherence resonance peaks in single microtubules ranging from 12 kilohertz to 8 megahertz.
Figure 3. Microtubule automata (Rasmussen et al, 1990). Top: 4 time steps (e.g. at 8 megahertz, Pokorny et al, 2001) showing propagation of information states and patterns ('gliders' in cellular automata parlance). Bottom: At different dipole coupling parameter, bi-directional pattern movement and computation occur. Rasmussen et al (1990) applied Fröhlich synchrony (in classical mode) as a clocking mechanism for computational time steps in simulated microtubule automata. Based on dipole couplings between neighboring tubulins in the microtubule lattice geometry, they found traveling gliders, complex patterns, computation and learning. Microtubule automata within brain neurons could potentially provide another level of information processing in the brain.
Approximately 108 tubulins in each neuron switching and oscillating in the range of 107 per second (e.g. Pokorny 8 MHz) gives an information capacity at the microtubule level of 1015 operations per second per neuron. This predicted capacity challenged and annoyed AI whose estimates for information processing at the level of neurons and synapses were virtually the same as this single-cell value, but for the entire brain (1011 neurons, 103 synapses per neuron, 102 transmissions per synapse per second = 1016 operations per second). Total brain capacity when taken at the microtubule level (in 1011 neurons) would potentially be 1026 operations per second, pushing the goalpost for AI brain equivalence farther into the future, and down into the quantum regime.
High capacity microtubule-based computing inside brain neurons could account for organization of synaptic regulation, learning and memory, and perhaps act as the substrate for consciousness. But increased brain information capacity per se didn't address most unanswered questions about consciousness (Section 2). Something was missing.
4. Objective Reduction (OR)
In 1989 Penrose published The Emperor's New Mind, which was followed in 1994 by Shadows of the Mind. Critical of AI, both books argued, by appealing to Gödel's theorem and other considerations, that certain aspects of human consciousness, such as understanding, must be beyond the scope of any computational system, i.e. 'non-computable'. Non-computability is a perfectly well-defined mathematical concept, but it had not previously been considered as a serious possibility for the result of physical actions. The non-computable ingredient required for human consciousness and understanding, Penrose suggested, would have to lie in an area where our current physical theories are fundamentally incomplete, though of important relevance to the scales that are pertinent to the operation of our brains. The only serious possibility was the incompleteness of quantum theory—an incompleteness that both Einstein and Schrödinger had recognized, despite quantum theory having frequently been argued to represent the pinnacle of 20th century scientific achievement. This incompleteness is the unresolved issue referred to as the 'measurement problem', which we consider in more detail below, in Section 5. One way to resolve it would be to provide an extension of the standard framework of quantum mechanics by introducing an objective form of quantum state reduction—termed 'OR' (objective reduction), an idea which we also describe more fully below, in Section 6.
In Penrose (1989), the tentatively suggested OR proposal would have its onset determined by a condition referred to there as 'the one-graviton' criterion. However, in Penrose (1995), a much better-founded criterion was used, now sometimes referred to as the Diósi–Penrose proposal (henceforth 'DP'; see Diósi 1987, 1989, Penrose 1993, 1996, 2000, 2009). This is an objective physical threshold, providing a plausible lifetime for quantum-superposed states. Other such OR proposals had also been put forward, from time to time (e.g. Kibble 1981, Pearle 1989, Pearle and Squires 1994, Ghirardi et al., 1986, 1990; see Ghirardi 2011, this volume) as solutions to the measurement problem, but had not originally been suggested as having anything to do with the consciousness issue. The Diósi-Penrose proposal is sometimes referred to as a 'quantum-gravity' scheme, but it is not part of the normal ideas used in quantum gravity, as will be explained below (Section 6). Moreover, the proposed connection between consciousness and quantum measurement is almost opposite, in the Orch OR scheme, to the kind of idea that had frequently been put forward in the early days of quantum mechanics (see, for example, Wigner 1961) which suggests that a 'quantum measurement' is something that occurs only as a result of the conscious intervention of an observer. This issue, also, will be discussed below (Section 5).
5. The Nature of Quantum Mechanics and its Fundamental Problem
The term 'quantum' refers to a discrete element of energy in a system, such as the energy E of a particle, or of some other subsystem, this energy being related to a fundamental frequency ν of its oscillation, according to Max Planck's famous formula (where h is Planck's constant):
E = h ν.
The laws governing these submicroscopic quantum entities differ from those governing our everyday classical world. For example, quantum particles can exist in two or more states or locations simultaneously, where such a multiple coexisting superposition of alternatives (each alternative being weighted by a complex number) would be described mathematically by a quantum wavefunction. We don't see superpositions in the consciously perceived world; we see objects and particles as material, classical things in specific locations and states.
Another quantum property is 'non-local entanglement,' in which separated components of a system become unified, the entire collection of components being governed by one common quantum wavefunction. The parts remain somehow connected, even when spatially separated by significant distances (e.g. over 10 kilometres, Tittel et al., 1998). Quantum superpositions of bit states (quantum bits, or qubits) can be interconnected with one another through entanglement in quantum computers. However, quantum entanglements cannot, by themselves, be used to send a message from one part of an entangled system to another; yet entanglement can be used in conjunction with classical signaling to achieve strange effects—such as the strange phenomenon referred to as quantum teleportation—that classical signalling cannot achieve by itself (e.g. Bennett and Wiesner, 1992; Bennett et al., 1993; Bouwmeester et al., 1997; Macikic et al., 2002).
The issue of why we don't directly perceive quantum superpositions is a manifestation of the measurement problem referred to in Section 4. Put more precisely, the measurement problem is the conflict between the two fundamental procedures of quantum mechanics. One of these procedures, referred to as unitary evolution, denoted here by U, is the continuous deterministic evolution of the quantum state (i.e. of the wavefunction of the entire system) according to the fundamental Schrödinger equation, The other is the procedure that is adopted whenever a measurement of the system—or observation—is deemed to have taken place, where the quantum state is discontinuously and probabilistically replaced by another quantum state (referred to, technically, as an eigenstate of a mathematical operator that is taken to describe the measurement). This discontinuous jumping of the state is referred to as the reduction of the state (or the 'collapse of the wavefunction'), and will be denoted here by the letter R. The conflict that is termed the measurement problem (or perhaps more accurately as the measurement paradox) arises when we consider the measuring apparatus itself as a quantum entity, which is part of the entire quantum system consisting of the original system under observation together with this measuring apparatus. The apparatus is, after all, constructed out of the same type of quantum ingredients (electrons, photons, protons, neutrons etc.—or quarks and gluons etc.) as is the system under observation, so it ought to be subject also to the same quantum laws, these being described in terms of the continuous and deterministic U. How, then, can the discontinuous and probabilistic R come about as a result of the interaction (measurement) between two parts of the quantum system? This is the measurement problem (or paradox).
There are many ways that quantum physicists have attempted to come to terms with this conflict (see, for example, Bell 1966, Bohm 1951, Rae 1994, Polkinghorne 2002, Penrose, 2004). In the early 20th century, the Danish physicist Niels Bohr, together with Werner Heisenberg, proposed the pragmatic 'Copenhagen interpretation', according to which the wavefunction of a quantum system, evolving according to U, is not assigned any actual physical 'reality', but is taken as basically providing the needed 'book-keeping' so that eventually probability values can be assigned to the various possible outcomes of a quantum measurement. The measuring device itself is explicitly taken to behave classically and no account is taken of the fact that the device is ultimately built from quantum-level constituents. The probabilities are calculated, once the nature of the measuring device is known, from the state that the wavefunction has U-evolved to at the time of the measurement. The discontinuous “jump” that the wavefunction makes upon measurement, according to R, is attributed to the change in 'knowledge' that the result of the measurement has on the observer. Since the wavefunction is not assigned physical reality, but is considered to refer merely to the observer's knowledge of the quantum system, the jumping is considered simply to reflect the jump in the observer's knowledge state, rather than in the quantum system under consideration.
Many physicists remain unhappy with such a point of view, however, and regard it largely as a 'stop-gap', in order that progress can be made in applying the quantum formalism, without this progress being held up by a lack of a serious quantum ontology, which might provide a more complete picture of what is actually going on. One may ask, in particular, what it is about a measuring device that allows one to ignore the fact that it is itself made from quantum constituents and is permitted to be treated entirely classically. A good many proponents of the Copenhagen standpoint would take the view that while the physical measuring apparatus ought actually to be treated as a quantum system, and therefore part of an over-riding wavefunction evolving according to U , it would be the conscious observer, examining the readings on that device, who actually reduces the state, according to R , thereby assigning a physical reality to the particular observed alternative resulting from the measurement. Accordingly, before the intervention of the observer's consciousness, the various alternatives of the result of the measurement including the different states of the measuring apparatus would, in effect, still coexist in superposition, in accordance with what would be the usual evolution according to U . In this way, the Copenhagen viewpoint puts consciousness outside science, and does not seriously address the nature and physical role of superposition itself nor the question of how large quantum superpositions like Schrödinger's superposed live and dead cat (see below) might actually become one thing or another.
A more extreme variant of this approach is the 'multiple worlds hypothesis' of Everett (1957) in which each possibility in a superposition evolves to form its own universe, resulting in an infinite multitude of coexisting 'parallel' worlds. The stream of consciousness of the observer is supposed somehow to 'split', so that there is one in each of the worlds—at least in those worlds for which the observer remains alive and conscious. Each instance of the observer's consciousness experiences a separate independent world, and is not directly aware of any of the other worlds.
A more 'down-to-earth' viewpoint is that of environmental decoherence, in which interaction of a superposition with its environment 'erodes' quantum states, so that instead of a single wavefunction being used to describe the state, a more complicated entity is used, referred to as a density matrix. However decoherence does not provide a consistent ontology for the reality of the world, in relation to the density matrix (see, for example, Penrose 2004, Sections 29.3-6), and provides merely a pragmatic procedure. Moreover, it does not address the issue of how R might arise in isolated systems, nor the nature of isolation, in which an external 'environment' would not be involved, nor does it tell us which part of a system is to be regarded as the 'environment' part, and it provides no limit to the size of that part which can remain subject to quantum superposition.
Still other approaches include various types of objective reduction (OR) in which a specific objective threshold is proposed to cause quantum state reduction (e.g. Kibble 1981; Pearle 1989; Ghirardi et al., 1986; Percival, 1994; Ghirardi, 2011). The specific OR scheme that is used in Orch OR will be described in Section 6.
The quantum pioneer Erwin Schrödinger took pains to point out the difficulties that confront the U-evolution of a quantum system with his still-famous thought experiment called 'Schrödinger's cat'. Here, the fate of a cat in a box is determined by magnifying a quantum event (say the decay of a radioactive atom, within a specific time period that would provide a 50% probability of decay) to a macroscopic action which would kill the cat, so that according to Schrödinger's own U-evolution the cat would be in a quantum superposition of being both dead and alive at the same time. If this U-evolution is maintained until the box is opened and the cat observed, then it would have to be the conscious human observing the cat that results in the cat becoming either dead or alive (unless, of course, the cat's own consciousness could be considered to have already served this purpose). Schrödinger intended to illustrate the absurdity of the direct applicability of the rules of quantum mechanics (including his own U-evolution) when applied at the level of a cat. Like Einstein, he regarded quantum mechanics as an incomplete theory, and his 'cat' provided an excellent example for emphasizing this incompleteness. There is a need for something to be done about quantum mechanics, irrespective of the issue of its relevance to consciousness.
6. The Orch OR Scheme
Orch OR depends, indeed, upon a particular OR extension of current quantum mechanics, taking the bridge between quantum- and classical-level physics as a 'quantum-gravitational' phenomenon. This is in contrast with the various conventional viewpoints (see Section 5), whereby this bridge is claimed to result, somehow, from 'environmental decoherence', or from 'observation by a conscious observer', or from a 'choice between alternative worlds', or some other interpretation of how the classical world of one actual alternative may be taken to arise out of fundamentally quantum-superposed ingredients.
It must also be made clear that the Orch OR scheme involves a different interpretation of the term 'quantum gravity' from what is usual. Current ideas of quantum gravity (see, for example Smolin, 2002) normally refer, instead, to some sort of physical scheme that is to be formulated within the bounds of standard quantum field theory—although no particular such theory, among the multitude that has so far been put forward, has gained anything approaching universal acceptance, nor has any of them found a fully consistent, satisfactory formulation. 'OR' here refers to the alternative viewpoint that standard quantum (field) theory is not the final answer, and that the reduction R of the quantum state ('collapse of the wavefunction') that is adopted in standard quantum mechanics is an actual physical phenomenon which is not part of the conventional unitary formalism U of quantum theory (or quantum field theory) and does not arise as some kind of convenience or effective consequence of environmental decoherence, etc., as the conventional U formalism would seem to demand. Instead, OR is taken to be one of the consequences of melding together the principles of Einstein's general relativity with those of the conventional unitary quantum formalism U, and this demands a departure from the strict rules of U. According to this OR viewpoint, any quantum measurement—whereby the quantum-superposed alternatives produced in accordance with the U formalism becomes reduced to a single actual occurrence—is real objective physical phenomenon, and it is taken to result from the mass displacement between the alternatives being sufficient, in gravitational terms, for the superposition to become unstable.
In the DP (Diósi–Penrose) scheme for OR, the superposition reduces to one of the alternatives in a time scale τ that can be estimated (for a superposition of two states each of which can be taken to be stationary on its own) according to the formula
τ ≈ ℏ/EG.
Here (=h/2π) is Dirac's form of Planck's constant h and EG is the gravitational self-energy of the difference between the two mass distributions of the superposition. (For a superposition for which each mass distribution is a rigid translation of the other, EG is the energy it would cost to displace one component of the superposition in the gravitational field of the other, in moving it from coincidence to the quantum-displaced location; see Disói 1989, Penrose 1993, 2000, 2009).
According to Orch OR, the (objective) reduction is not the entirely random process of standard theory, but acts according to some non-computational new physics (see Penrose 1989, 1994). The idea is that consciousness is associated with this (gravitational) OR process, but occurs significantly only when the alternatives are part of some highly organized structure, so that such occurrences of OR occur in an extremely orchestrated form. Only then does a recognizably conscious event take place. On the other hand, we may consider that any individual occurrence of OR would be an element of proto-consciousness.
The OR process is considered to occur when quantum superpositions between slightly differing space-times take place, differing from one another by an integrated space-time measure which compares with the fundamental and extremely tiny Planck (4-volume) scale of space-time geometry. Since this is a 4-volume Planck measure, involving both time and space, we find that the time measure would be particularly tiny when the space-difference measure is relatively large (as with Schrödinger's cat), but for extremely tiny space-difference measures, the time measure might be fairly long, such as some significant fraction of a second. We shall be seeing this in more detail shortly, together with its particular relevance to microtubules. In any case, we recognize that the elements of proto-consciousness would be intimately tied in with the most primitive Planck-level ingredients of space-time geometry, these presumed 'ingredients' being taken to be at the absurdly tiny level of 10−35m and 10−43s, a distance and a time some 20 orders of magnitude smaller than those of normal particle-physics scales and their most rapid processes. These scales refer only to the normally extremely tiny differences in space-time geometry between different states in superposition, and OR is deemed to take place when such space-time differences reach the Planck level. Owing to the extreme weakness of gravitational forces as compared with those of the chemical and electric
Figure 4. From Penrose, 1994 (P. 338). With four spatiotemporal dimensions condensed to a 2-dimensional spacetime sheet, mass location may be represented as a particular curvature of that sheet, according to general relativity. Top: Two different mass locations as alternative spacetime curvatures. Bottom: a bifurcating spacetime is depicted as the union ("glued together version") of the two alternative spacetime histories that are depicted at the top of the Figure. Hence a quantum superposition of simultaneous alternative locations may be seen as a separation in fundamental spacetime geometry. forces of biology, the energy EG is liable to be far smaller than any energy that arises directly from biological processes. However, EG is not to be thought of as being in direct competition with any of the usual biological energies, as it plays a completely different role, supplying a needed energy uncertainty that then allows a choice to be made between the separated space-time geometries. It is the key ingredient of the computation of the reduction time τ. Nevertheless, the extreme weakness of gravity tells us there must be a considerable amount of material involved in the coherent mass displacement between superposed structures in order that τ can be small enough to be playing its necessary role in the relevant OR processes in the brain. These superposed structures should also process information and regulate neuronal physiology. According to Orch OR, microtubules are central to these structures, and some form of biological quantum computation in microtubules (most probably primarily in the more symmetrical A-lattice microtubules) would have to have evolved to provide a subtle yet direct connection to Planck-scale geometry, leading eventually to discrete moments of actual conscious experience.
The degree of separation between the space-time sheets is mathematically described in terms of a symplectic measure on the space of 4-dimensional metrics (cf. Penrose, 1993). The separation is, as already noted above, a space-time separation, not just a spatial one. Thus the time of separation contributes as well as the spatial displacement. Roughly speaking, it is the product of the temporal separation T with the spatial separation S that measures the overall degree of separation, and OR takes place when this overall separation reaches a critical amount. This critical amount would be of the order of unity, in absolute units, for which the Planck-Dirac constant ℏ, the gravitational constant G, and the velocity of light c, all take the value unity, cf. Penrose, 1994 - pp. 337-339. For small S, the lifetime τ ≈T of the superposed state will be large; on the other hand, if S is large, then τ will be small.
To estimate S, we compute (in the Newtonian limit of weak gravitational fields) the gravitational self-energy EG of the difference between the mass distributions of the two superposed states. (That is, one mass distribution counts positively and the other, negatively; see Penrose, 1993; 1995.) The quantity S is then given by:
and T≈ τ, whence
τ ≈ /EG , i.e. EG /τ.
Thus, the DP expectation is that OR occurs with the resolving out of one particular space-time geometry from the previous superposition when, on the average, τ≈ℏ/EG. Moreover, according to Orch OR, this is accompanied by an element of proto-consciousness.
Environmental decoherence need play no role in state reduction, according to this scheme. The proposal is that state reduction simply takes place spontaneously, according to this criterion. On the other hand, in many actual physical situations, there would be much material from the environment that would be entangled with the quantum-superposed state, and it could well be that the major mass displacement—and therefore the major contribution to EG —would occur in the environment rather than in the system under consideration. Since the environment will be quantum-entangled with the system, the state-reduction in the environment will effect a simultaneous reduction in the system. This could shorten the time for the state reduction R to take place very considerably. It would also introduce an uncontrollable random element into the result of the reduction, so that any non-random (albeit non-computable, according to Orch OR) element influencing the particular choice of state that is actually resolved out from the superposition would be completely masked by this randomness. In these circumstances the OR-process would be indistinguishable from the R-process of conventional quantum mechanics. If the suggested non-computable effects of this OR proposal are to be laid bare, if EG is to be able to evolve and be orchestrated for conscious moments, we indeed need significant isolation from the environment.
As yet, no experiment has been refined enough to determine whether this (DP) OR proposal is actually respected by Nature, but the experimental testing of the scheme is fairly close to the borderline of what can be achieved with present-day technology (see, for example, Marshall et al. 2003). One ought to begin to see the effects of this OR scheme if a small object, such as a 10-micron cube of crystalline material could be held in a superposition of two locations, differing by about the diameter of an atomic nucleus, for some seconds, or perhaps minutes.
A point of importance, in such proposed experiments, is that in order to calculate EG it may not be enough to base the calculation on an average density of the material in the superposition, since the mass will be concentrated in the atomic nuclei, and for a displacement of the order of the diameter of a nucleus, this inhomogeneity in the density of the material can be crucial, and can provide a much larger value for EG than would be obtained if the material is assumed to be homogeneous. The Schrödinger equation (more correctly, in the zero-temperature approximation, the Schrödinger–Newton equation, see Penrose 2000; Moroz et al. 1998) for the static unsuperposed material would have to be solved, at least approximately, in order to derive the expectation value of the mass distribution, where there would be some quantum spread in the locations of the particles constituting the nuclei.
For Orch OR to be operative in the brain, we would need coherent superpositions of sufficient amounts of material, undisturbed by environmental entanglement, where this reduces in accordance with the above OR scheme in a rough time scale of the general order of time for a conscious experience to take place. For an ordinary type of experience, this might be say about τ =10−1s which concurs with neural correlates of consciousness, such as particular frequencies of electroencephalograhy (EEG).
Penrose (1989; 1994) suggested that processes of the general nature of quantum computations were occurring in the brain, terminated by OR. In quantum computers (Benioff 1982, Deutsch 1985, Feynman 1986), information is represented not just as bits of either 1 or 0, but also as quantum superposition of both 1 and 0 together (quantum bits or qubits) where, moreover, large-scale entanglements between qubits would also be involved. These qubits interact and compute following the Schrödinger equation, potentially enabling complex and highly efficient parallel processing. As envisioned in technological quantum computers, at some point a measurement is made causing quantum state reduction (with some randomness introduced). The qubits reduce, or collapse to classical bits and definite states as the output.
The proposal that some form of quantum computing could be acting in the brain, this proceeding by the Schrödinger equation without decoherence until some threshold for self-collapse due to a form of non-computable OR could be reached, was made in Penrose 1989. However, no plausible biological candidate for quantum computing in the brain had been available to him, as he was then unfamiliar with microtubules.
Figure 5. Three descriptions of an Orch OR conscious event by EG =/τ. A. Microtubule automata. Quantum (gray) tubulins evolve to meet threshold after Step 3, a moment of consciousness occurs and tubulin states are selected. For actual event (e.g. 25 msec), billions of tubulins are required; a small number is used here for illustration. B. Schematic showing U-like evolution until threshold. C. Space-time sheet with superposition separation reaches threshold and selects one reality/spacetime curvature. 7. Penrose-Hameroff Orchestrated Objective Reduction ('Orch OR')
Penrose and Hameroff teamed up in the early 1990s. Fortunately, by then, the DP form of OR mechanism was at hand to be applied to the microtubule-automata models for consciousness as developed by Hameroff. A number of questions were addressed.
How does τ≈ℏ/EG relate to consciousness? Orch OR considers consciousness as a sequence of discrete OR events in concert with neuronal-level activities. In τ≈ℏ/EG , τ is taken to be the time for evolution of the pre-conscious quantum wavefunction between OR events, i.e. the time interval between conscious moments, during which quantum superpositions of microtubule states evolve according to the continuous Schrödinger equation before reaching (on the average) the τ≈ℏ/EG OR threshold in time τ, when quantum state reduction and a moment of conscious awareness occurs (Figure 5).
The best known temporal correlate for consciousness is gamma synchrony EEG, 30 to 90 Hz, often referred to as coherent 40 Hz. One possible viewpoint might be to take this oscillation to represent a succession of 40 or so conscious moments per second (τ=25 milliseconds). This would be reasonably consistent with neuroscience (gamma synchrony), with certain ideas expressed in philosophy (e.g. Whitehead 'occasions of experience'), and perhaps even with ancient Buddhist texts which portray consciousness as 'momentary collections of mental phenomena' or as 'distinct, unconnected and impermanent moments which perish as soon as they arise.' (Some Buddhist writings quantify the frequency of conscious moments. For example the Sarvaastivaadins, according to von Rospatt 1995, described 6,480,000 'moments' in 24 hours—an average of one 'moment' per 13.3 msec, ~75 Hz—and some Chinese Buddhism as one "thought" per 20 msec, i.e. 50 Hz.) These accounts, even including variations in frequency, could be considered to be consistent with Orch OR events in the gamma synchrony range. Accordingly, on this view, gamma synchrony, Buddhist 'moments of experience', Whitehead 'occasions of experience', and our proposed Orch OR events might be viewed as corresponding tolerably well with one another.
Putting τ=25msec in EG ≈ℏ/τ, we may ask what is EG in terms of superpositioned microtubule tubulins? EG may be derived from details about the superposition separation of mass distribution. Three types of mass separation were considered in Hameroff–Penrose 1996a for peanut-shaped tubulin proteins of 110,000 atomic mass units: separation at the level of (1) protein spheres, e.g. by 10 percent volume, (2) atomic nuclei (e.g. carbon, ~ 2.5 Fermi length), (3) nucleons (protons and neutrons). The most plausible calculated effect might be separation at the level of atomic nuclei, giving EG as superposition of 2 x 1010 tubulins reaching OR threshold at 25 milliseconds.
Brain neurons each contain roughly 108 tubulins, so only a few hundred neurons would be required for a 25msec, gamma synchrony OR event if 100 percent of tubulins in those neurons were in superposition and avoided decoherence. It seems more likely that a fraction of tubulins per neuron are in superposition. Global macroscopic states such as superconductivity ensue from quantum coherence among only very small fractions of components. If 1 percent of tubulins within a given set of neurons were coherent for 25msec, then 20,000 such neurons would be required to elicit OR. In human brain, cognition and consciousness are, at any one time, thought to involve tens of thousands of neurons. Hebb's (1949) 'cell assemblies', Eccles's (1992) 'modules', and Crick and Koch's (1990) 'coherent sets of neurons' are each estimated to contain some 10,000 to 100,000 neurons which may be widely distributed throughout the brain (Scott, 1995).
Adopting τ≈ℏ/EG , we find that, with this point of view with regard to Orch-OR, a spectrum of possible types of conscious event might be able to occur, including those at higher frequency and intensity. It may be noted that Tibetan monk meditators have been found to have 80 Hz gamma synchrony, and perhaps more intense experience (Lutz et al. 2004). Thus, according to the viewpoint proposed above, where we interpret this frequency to be associated with a succession of Orch-OR moments, then EG ≈ℏ/τ would appear to require that there is twice as much brain involvement required for 80 Hz than for consciousness occurring at 40 Hz (or √2 times as much if the displacement is entirely coherent, since then the mass enters quadratically in EG ). Even higher (frequency), expanded awareness states of consciousness might be expected, with more neuronal brain involvement.
On the other hand, we might take an alternative viewpoint with regard to the probable frequency of Orch-OR actions, and to the resulting frequency of elements of conscious experience. There is the possibility that the discernable moments of consciousness are events that normally occur at a much slower pace than is suggested by the considerations above, and that they happen only at rough intervals of the order of, say, one half a second or so, i.e. ~500msec, rather than ~25msec. One might indeed think of conscious influences as perhaps being rather slow, in contrast with the great deal of vastly faster unconscious computing that might be some form of quantum computing, but without OR. At the present stage of uncertainty about such matters it is perhaps best not to be dogmatic about how the ideas of Orch OR are to be applied. In any case, the numerical assignments provided above must be considered to be extremely rough, and at the moment we are far from being in a position to be definitive about the precise way in which the Orch-OR is to operate. Alternative possibilities will need to be considered with an open mind.
How do microtubule quantum computation avoid decoherence? Technological quantum computers using e.g. ion traps as qubits are plagued by decoherence, disruption of delicate quantum states by thermal vibration, and require extremely cold temperatures and vacuum to operate. Decoherence must be avoided during the evolution toward time τ (≈ℏ/EG ), so that the non-random (non-computable) aspects of OR can be playing their roles. How does quantum computing avoid decoherence in the 'warm, wet and noisy' brain?
It was suggested (Hameroff and Penrose, 1996a) that microtubule quantum states avoid decoherence by being pumped, laser-like, by Fröhlich resonance, and shielded by ordered water, C-termini Debye layers, actin gel and strong mitochondrial electric fields. Moreover quantum states in Orch OR are proposed to originate in hydrophobic pockets in tubulin interiors, isolated from polar interactions, and involve superposition of only atomic nuclei separation. Moreover, geometrical resonances in microtubules, e.g. following helical pathways of Fibonacci geometry are suggested to enable topological quantum computing and error correction, avoiding decoherence perhaps effectively indefinitely (Hameroff et al 2002) as in a superconductor.
The analogy with high-temperature superconductors may indeed be appropriate, in fact. As yet, there is no fully accepted theory of how such superconductors operate, avoiding loss of quantum coherence from the usual processes of environmental decoherence. Yet there are materials which support superconductivity at temperatures roughly halfway between room temperature and absolute zero (He et al., 2010). This is still a long way from body temperature, of course, but there is now some experimental evidence (Bandyopadhyay 2011) that is indicative of something resembling superconductivity (referred to as 'ballistic conductance'), that occurs in living A-lattice microtubules at body temperature. This will be discussed below.
Physicist Max Tegmark (2000) published a critique of Orch OR based on his calculated decoherence times for microtubules of 10-13 seconds at biological temperature, far too brief for physiological effects. However Tegmark didn't include Orch OR stipulations and in essence created, and then refuted his own quantum microtubule model. He assumed superpositions of solitons separated from themselves by a distance of 24 nanometers along the length of the microtubule. As previously described, superposition separation in Orch OR is at the Fermi length level of atomic nuclei, i.e. 7 orders of magnitude smaller than Tegmark's separation value, thus underestimating decoherence time by 7 orders of magnitude, i.e. from 10-13 secs to microseconds at 10-6 seconds. Hagan et al (2001) used Tegmark's same formula and recalculated microtubule decoherence times using Orch OR stipulations, finding 10-4 to 10-3 seconds, or longer due to topological quantum effects. It seemed likely biology had evolved optimal information processing systems which can utilize quantum computing, but there was no real evidence either way.
Beginning in 2003, published research began to demonstrate quantum coherence in warm biological systems. Ouyang and Awschalom (2003) showed that quantum spin transfer through phenyl rings (the same as those in protein hydrophobic pockets) is enhanced at increasingly warm temperatures. Other studies showed that quantum coherence occurred at ambient temperatures in proteins involved in photosynthesis, that plants routinely use quantum coherence to produce chemical energy and food (Engel et al, 2007). Further research has demonstrated warm quantum effects in bird brain navigation (Gauger et al, 2011), ion channels (Bernroider and Roy, 2005), sense of smell (Turin, 1996), DNA (Rieper et al., 2011), protein folding (Luo and Lu, 2011), biological water (Reiter et al., 2011) and microtubules.
Recently Anirban Bandyopadhyay and colleagues at the National Institute of Material Sciences in Tsukuba, Japan have used nanotechnology to study electronic conductance properties of single microtubules assembled from porcine brain tubulin. Their preliminary findings (Bandyopadhyay, 2011) include: (1) Microtubules have 8 resonance peaks for AC stimulation (kilohertz to 10 megahertz) which appear to correlate with various helical conductance pathways around the geometric microtubule lattice. (2) Excitation at these resonant frequencies causes microtubules to assemble extremely rapidly, possibly due to Fröhlich condensation. (3) In assembled microtubules AC excitation at resonant frequencies causes electronic conductance to become lossless, or 'ballistic', essentially quantum conductance, presumably along these helical quantum channels. Resonance in the range of kilohertz demonstrates microtubule decoherence times of at least 0.1 millisecond. (4) Eight distinct quantum interference patterns from a single microtubule, each correlating with one of the 8 resonance frequencies and pathways. (5) Ferroelectric hysteresis demonstrates memory capacity in microtubules. (6) Temperature-independent conductance also suggests quantum effects. If confirmed, such findings would demonstrate Orch OR to be biologically feasible.
How does microtubule quantum computation and Orch OR fit with recognized neurophysiology? Neurons are composed of multiple dendrites and a cell body/soma which receive and integrate synaptic inputs to a threshold for firing outputs along a single axon. Microtubule quantum computation in Orch OR is assumed to occur in dendrites and cell bodies/soma of brain neurons, i.e. in regions of integration of inputs in integrate-and-fire neurons. As opposed to axonal firings, dendritic/somatic integration correlates best with local field potentials, gamma synchrony EEG, and action of anesthetics erasing consciousness. Tononi (2004) has identified integration of information as the neuronal function most closely associated with consciousness. Dendritic microtubules are uniquely arranged in local mixed polarity networks, well-suited for integration of synaptic inputs.
Membrane synaptic inputs interact with post-synaptic microtubules by activation of microtubule-associated protein 2 ('MAP2', associated with learning), and calcium-calmodulin kinase II (CaMKII, Hameroff et al, 2010). Such inputs were suggested by Penrose and Hameroff (1996a) to 'tune', or 'orchestrate' OR-mediated quantum computations in microtubules by MAPs, hence 'orchestrated objective reduction', 'Orch OR'.
Proposed mechanisms for microtubule avoidance of decoherence were described above, but another question remains. How would microtubule quantum computations which are isolated from the environment, still interact with that environment for input and output? One possibility that Orch OR suggests is that perhaps phases of isolated quantum computing alternate with phases of classical environmental interaction, e.g. at gamma synchrony, roughly 40 times per second. (Computing pioneer Paul Benioff suggested such a scheme of alternating quantum and classical phases in a science fiction story about quantum computing robots.)
With regard to outputs resulting from processes taking place at the level of microtubules in Orch-OR quantum computations, dendritic/somatic microtubules receive and integrate synaptic inputs during classical phase. They then become isolated quantum computers and evolve to threshold for Orch OR at which they reduce their quantum states at an average time interval τ (given by by τ≈ℏ/EG). The particular tubulin states chosen in the reduction can then trigger axonal firing, adjust firing threshold, regulate synapses and encode memory. Thus Orch OR can have causal efficacy in conscious actions and behavior, as well as providing conscious experience and memory.
Orch OR in evolution In the absence of Orch OR, non-conscious neuronal activities might proceed by classical neuronal and microtubule-based computation. In addition there could be quantum computations in microtubules that do not reach the Orch OR level, and thereby also remain unconscious.
This last possibility is strongly suggested by considerations of natural selection, since some relatively primitive microtubule infrastructure, still able to support quantum computation, would have to have preceded the more sophisticated kind that we now find in conscious animals. Natural selection proceeds in steps, after all, and one would not expect that the capability of the substantial level of coherence across the brain that would be needed for the non-computable OR of human conscious understanding to be reached, without something more primitive having preceded it. Microtubule quantum computing by U evolution which avoids decoherence would well be advantageous to biological processes without ever reaching threshold for OR.
Microtubules may have appeared in eukaryotic cells 1.3 billion years ago due to symbiosis among prokaryotes, mitochondria and spirochetes, the latter the apparent origin of microtubules which provided movement to previously immobile cells (e.g. Margulis and Sagan, 1995). Because Orch OR depends on τ≈ℏ/EG, more primitive consciousness in simple, small organisms would involve smaller EG, and longer times τ to avoid decoherence. As simple nervous systems and arrangements of microtubules grew larger and developed anti-decoherence mechanisms, inevitably a system would avoid decoherence long enough to reach threshold for Orch OR conscious moments. Central nervous systems around 300 neurons, such as those present at the early Cambrian evolutionary explosion 540 million years ago, could have τ near one minute, and thus be feasible in terms of avoiding decoherence (Hameroff, 1998d). Perhaps the onset of Orch OR and consciousness with relatively slow and simple conscious moments, precipitated the accelerated evolution.
Only at a much later evolutionary stage would the selective advantages of a capability for genuine understanding come about. This would require the non-computable capabilities of Orch OR that go beyond those of mere quantum computation, and depend upon larger scale infrastructure of efficiently functioning microtubules, capable of operating quantum-computational processes. Further evolution providing larger sets of microtubules (larger EG) able to be isolated from decoherence would enable, by τ≈ℏ/EG, more frequent and more intense moments of conscious experience. It appears human brains could have evolved to having Orch OR conscious moments perhaps as frequently as every few milliseconds.
How could microtubule quantum states in one neuron extend to those in other neurons throughout the brain? Assuming microtubule quantum state phases are isolated in a specific neuron, how could that quantum state involve microtubules in other neurons throughout the brain without traversing membranes and synapses? Orch OR proposes that quantum states can extend by tunneling, leading to entanglement between adjacent neurons through gap junctions.
Figure 6. Portions of two neurons connected by a gap junction with microtubules (linked by microtubule-associated proteins, 'MAPs') computing via states (here represented as black or white) of tubulin protein subunits. Wavy lines suggest entanglement among quantum states (not shown) in microtubules. Gap junctions are primitive electrical connections between cells, synchronizing electrical activities. Structurally, gap junctions are windows between cells which may be open or closed. When open, gap junctions synchronize adjacent cell membrane polarization states, but also allow passage of molecules between cytoplasmic compartments of the two cells. So both membranes and cytoplasmic interiors of gap-junction-connected neurons are continuous, essentially one complex 'hyper-neuron' or syncytium. (Ironically, before Ramon-y-Cajal showed that neurons were discrete cells, the prevalent model for brain structure was a continuous threaded-together syncytium as proposed by Camille Golgi.) Orch OR suggests that quantum states in microtubules in one neuron could extend by entanglement and tunneling through gap junctions to microtubules in adjacent neurons and glia (Figure 6), and from those cells to others, potentially in brain-wide syncytia.
Open gap junctions were thus predicted to play an essential role in the neural correlate of consciousness (Hameroff, 1998a). Beginning in 1998, evidence began to show that gamma synchrony, the best measureable correlate of consciousness, depended on gap junctions, particularly dendritic-dendritic gap junctions (Dermietzel, 1998; Draguhn et al, 1998; Galaretta and Hestrin, 1999). To account for the distinction between conscious activities and non-conscious 'auto-pilot' activities, and the fact that consciousness can occur in various brain regions, Hameroff (2009) developed the “Conscious pilot' model in which syncytial zones of dendritic gamma synchrony move around the brain, regulated by gap junction openings and closings, in turn regulated by microtubules. The model suggests consciousness literally moves around the brain in a mobile synchronized zone, within which isolated, entangled microtubules carry out quantum computations and Orch OR. Taken together, Orch OR and the conscious pilot distinguish conscious from non-conscious functional processes in the brain.
Libet's backward time referral In the 1970s neurophysiologist Benjamin Libet performed experiments on patients having brain surgery while awake, i.e. under local anesthesia (Libet et al., 1979). Able to stimulate and record from conscious human brain, and gather patients' subjective reports with precise timing, Libet determined that conscious perception of a stimulus required up to 500 msec of brain activity post-stimulus, but that conscious awareness occurred at 30 msec post-stimulus, i.e. that subjective experience was referred 'backward in time'.
Bearing such apparent anomalies in mind, Penrose put forward a tentative suggestion, in The Emperor's New Mind, that effects like Libet's backward time referral might be related to the fact that quantum entanglements are not mediated in a normal causal way, so that it might be possible for conscious experience not to follow the normal rules of sequential time progression, so long as this does not lead to contradictions with external causality. In Section 5, it was pointed out that the (experimentally confirmed) phenomenon of 'quantum teleportation' (Bennett et al., 1993; Bouwmeester et al., 1997; Macikic et al., 2002) cannot be explained in terms of ordinary classical information processing, but as a combination of such classical causal influences and the acausal effects of quantum entanglement. It indeed turns out that quantum entanglement effects—referred to as 'quantum information' or 'quanglement' (Penrose 2002, 2004)—appear to have to be thought of as being able to propagate in either direction in time (into the past or into the future). Such effects, however, cannot by themselves be used to communicate ordinary information into the past. Nevertheless, in conjunction with normal classical future-propagating (i.e. 'causal') signalling, these quantum-teleportation influences can achieve certain kinds of 'signalling' that cannot be achieved simply by classical future-directed means.
The issue is a subtle one, but if conscious experience is indeed rooted in the OR process, where we take OR to relate the classical to the quantum world, then apparent anomalies in the sequential aspects of consciousness are perhaps to be expected. The Orch OR scheme allows conscious experience to be temporally non-local to a degree, where this temporal non-locality would spread to the kind of time scale τ that would be involved in the relevant Orch OR process, which might indeed allow this temporal non-locality to spread to a time τ=500ms. When the 'moment' of an internal conscious experience is timed externally, it may well be found that this external timing does not precisely accord with a time progression that would seem to apply to internal conscious experience, owing to this temporal non-locality intrinsic to Orch OR.
Measurable brain activity correlated with a stimulus often occurs several hundred msec after that stimulus, as Libet showed. Yet in activities ranging from rapid conversation to competitive athletics, we respond to a stimulus (seemingly consciously) before the above activity that would be correlated with that stimulus occurring in the brain. This is interpreted in conventional neuroscience and philosophy (e.g. Dennett, 1991; Wegner, 2002) to imply that in such cases we respond non-consciously, on auto-pilot, and subsequently have only an illusion of conscious response. The mainstream view is that consciousness is epiphenomenal illusion, occurring after-the-fact as a false impression of conscious control of behavior. We are merely 'helpless spectators' (Huxley, 1986).
However, the effective quantum backward time referral inherent in the temporal non-locality resulting from the quanglement aspects of Orch OR, as suggested above, enables conscious experience actually to be temporally non-local, thus providing a means to rescue consciousness from its unfortunate characterization as epiphenomenal illusion. Accordingy, Orch OR could well enable consciousness to have a causal efficacy, despite its apparently anomalous relation to a timing assigned to it in relation to an external clock, thereby allowing conscious action to provide a semblance of free will.
8. Orch OR Criticisms and Responses
Orch OR has been criticized repeatedly since its inception. Here we review and summarize major criticisms and responses.
Grush and Churchland, 1995. Philosophers Grush and Churchland (1995) took issue with the Gödel's theorem argument, as well as several biological factors. One objection involved the microtubule-disabling drug colchicine which treats diseases such as gout by immobilizing neutrophil cells which cause painful inflammation in joints. Neutrophil mobility requires cycles of microtubule assembly/disassembly, and colchicine prevents re-assembly, impairing neutrophil mobility and reducing inflammation. Grush and Churchland pointed out that patients given colchicine do not lose consciousness, concluding that microtubules cannot be essential for consciousness. Penrose and Hameroff (1995) responded point-by-point to every objection, e.g. explaining that colchicine does not cross the blood brain barrier, and so doesn't reach the brain. Colchicine infused directly into the brains of animals does cause severe cognitive impairment and apparent loss of consciousness (Bensimon and Chemat, 1991).
Tuszynski et al, 1998. Tuszynski et al (1998) questioned how extremely weak gravitational energy in Diósi-Penrose OR could influence tubulin protein states. In Hameroff and Penrose (1996a), the gravitational self-energy EG for tubulin superposition was calculated for separation of tubulin from itself at the level of its atomic nuclei. Because the atomic (e.g. carbon) nucleus displacement is greater than its radius (the nuclei separate completely), the gravitational self-energy EG is given by: EG=Gm2/ac, where ac is the carbon nucleus sphere radius equal to 2.5 Fermi distances, m is the mass of tubulin, and G is the gravitational constant. Brown and Tuszynski calculated EG (using separation at the nanometer level of the entire tubulin protein), finding an appropriately small energy E of 10-27 electron volts (eV) per tubulin, infinitesimal compared with ambient energy kT of 10-4eV. Correcting for the smaller superposition separation distance of 2.5 Fermi lengths in Orch OR gives a significantly larger, but still tiny 10-21eV per tubulin. With 2×1010 tubulins per 25msec, the conscious Orch OR moment would be roughly 10-10eV (10-29 joules), still insignificant compared to kT at 10-4eV.
All this serves to illustrate the fact that the energy EG does not actually play a role in physical processes as an energy, in competition with other energies that are driving the physical (chemical, electronic) processes of relevance. In a clear sense EG is, instead, an energy uncertainty—and it is this uncertainty that allows quantum state reduction to take place without violation of energy conservation. The fact that EG is far smaller than the other energies involved in the relevant physical processes is a necessary feature of the consistency of the OR scheme. It does not supply the energy to drive the physical processes involved, but it provides the energy uncertainty that allows the freedom for processes having virtually the same energy as each other to be alternative actions. In practice, all that EG is needed for is to tell us how to calculate the lifetime τ of the superposition. EG would enter into issues of energy balance only if gravitational interactions between the parts of the system were important in the processes involved. (The Earth's gravitational field plays no role in this either, because it cancels out in the calculation of EG.) No other forces of nature directly contribute to EG, which is just as well, because if they did, there would be a gross discrepancy with observational physics.
Tegmark, 2000. Physicist Max Tegmark (2000) confronted Orch OR on the basis of decoherence. This was discussed at length in Section 7.
Koch and Hepp, 2006. In a challenge to Orch OR, neuroscientists/physicists Koch and Hepp published a thought experiment in Nature, describing a person observing a superposition of a cat both dead and alive with one eye, the other eye distracted by a series of images (binocular rivalry). They asked 'Where in the observer's brain would reduction occur?', apparently assuming Orch OR followed the Copenhagen interpretation in which conscious observation causes quantum state reduction. This is precisely the opposite of Orch OR in which consciousness is the orchestrated quantum state reduction given by OR.
Orch OR can account for the related issue of bistable perceptions (e.g. the famous face/vase illusion, or Necker cube). Non-conscious superpositions of both possibilities (face and vase) during pre-conscious quantum superposition then reduce by OR at time τ to conscious perception of one or the other, face or vase. The reduction would occur among microtubules within neurons interconnected by gap junctions in various areas of visual and pre-frontal cortex and other brain regions.
Figure 7. Simulating Fröhlich coherence in microtubules. A) Linear column of tubulins (protofilament) as simulated by Reimers et al (2010) which showed only weak Fröhlich condensation. B) and C) 2-dimensional tubulin sheets with toroidal boundary conditions (approximating 3-dimensional microtubule) simulated by Samsonovich et al (1992) shows long range Fröhlich resonance, with long-range symmetry, and nodes matching experimentally-observed MAP attachment patterns. Reimers et al (2009) described three types of Fröhlich condensation (weak, strong and coherent, the first classical and the latter two quantum). They validated 8 MHz coherence measured in microtubules by Pokorny (2001; 2004) as weak condensation. Based on simulation of a 1-dimensional linear chain of tubulin dimers representing a microtubule, they concluded only weak Fröhlich condensation occurs in microtubules. Claiming Orch OR requires strong or coherent Fröhlich condensation, they concluded Orch OR is invalid. However Samsonovich et al (1992) simulated a microtubule as a 2-dimensional lattice plane with toroidal boundary conditions and found Fröhlich resonance maxima at discrete locations in super-lattice patterns on the simulated microtubule surface which precisely matched experimentally observed functional attachment sites for microtubule-associated proteins (MAPs). Further, Bandyopadhyay (2011) has experimental evidence for strong Fröhlich coherence in microtubules at multiple resonant frequencies.
McKemmish et al (2010) challenged the Orch OR contention that tubulin switching is mediated by London forces, pointing out that mobile π electrons in a benzene ring (e.g. a phenyl ring without attachments) are completely delocalized, and hence cannot switch between states, nor exist in superposition of both states. Agreed. A single benzene cannot engage in switching. London forces occur between two or more electron cloud ring structures, or other non-polar groups. A single benzene ring cannot support London forces. It takes two (or more) to tango. Orch OR has always maintained two or more non-polar groups are necessary (Figure 8). McKemmish et al are clearly mistaken on this point.
Figure 8. A) Phenyl ring/benzene of 6 carbons with three extra π electrons/double bonds which oscillate between two configurations according to valence theory. B) Phenyl ring/benzene according to molecular orbital theory in which π electrons/double bonds are delocalized, thus preventing oscillation between alternate states. No oscillation/switching can occur. C) Two adjacent phenyl rings/benzenes in which π electrons/double bonds are coupled, i.e. van der Waals London (dipole dispersion) forces. Two versions are shown: In top version, lines represent double bond locations; in bottom version, dipoles are filled in to show negative charge locations. D) Complex of 4 rings with London forces. McKemmish et al further assert that tubulin switching in Orch OR requires significant conformational structural change (as indicated in Figure 2), and that the only mechanism for such conformational switching is due to GTP hydrolysis, i.e. conversion of guanosine triphophate (GTP) to guanosine diphosphate (GDP) with release of phosphate group energy, and tubulin conformational flexing. McKemmish et al correctly point out that driving synchronized microtubule oscillations by hydrolysis of GTP to GDP and conformational changes would be prohibitive in terms of energy requirements and heat produced. This is agreed. However, we clarify that tubulin switching in Orch OR need not actually involve significant conformational change (e. g. as is illustrated in Figure 2), that electron cloud dipole states (London forces) are sufficient for bit-like switching, superposition and qubit function. We acknowledge tubulin conformational switching as discussed in early Orch OR publications and illustrations do indicate significant conformational changes. They are admittedly, though unintentionally, misleading.
Figure 9. Left: Molecular simulation of tubulin with beta tubulin (dark gray) on top and alpha tubulin (light gray) on bottom. Non-polar amino acids phenylalanine and tryptophan with aromatic phenyl and indole rings are shown. (By Travis Craddock and Jack Tuszynski.) Right: Schematic tubulin with non-polar hydrophobic phenyl rings approximating actually phenyl and indole rings. Scale bar: 1 nanometer. The only tubulin conformational factor in Orch OR is superposition separation involved in EG, the gravitational self-energy of the tubulin qubit. As previously described, we calculated EG for tubulin separated from itself at three possible levels: 1) the entire protein (e.g. partial separation, as suggested in Figure 2), 2) its atomic nuclei, and 3) its nucleons (protons and neutrons). The dominant effect is 2) separation at the level of atomic nuclei, e.g. 2.5 Fermi length for carbon nuclei (2.5 femtometers; 2.5 x 10-15 meters). This shift may be accounted for by London force dipoles with Mossbauer nuclear recoil and charge effects (Hameroff, 1998). Tubulin switching in Orch OR requires neither GTP hydrolysis nor significant conformational changes.
Figure 10. Four versions of the schematic Orch OR tubulin bit (superpositioned qubit states not shown). A) Early version showing conformational change coupled to/driven by single hydrophobic pocket with two aromatic rings. B) Updated version with single hydrophobic pocket composed of 4 aromatic rings. C) McKemmish et al (2009) mis-characterization of Orch OR tubulin bit as irreversible conformational change driven by GTP hydrolysis. D) Current version of Orch OR bit with no significant conformational change (change occurs at the level of atomic nuclei) and multiple hydrophobic pockets arranged in channels. Schematic depiction of the tubulin bit, qubit and hydrophobic pockets in Orch OR has evolved over the years. An updated version is described in the next Section.
Figure 11. 2011 Orch OR tubulin qubit. Top: Alternate states of tubulin dimer (black and white) due to collective orientation of London force electron cloud dipoles in non-polar hydrophobic regions. There is no evident conformational change as suggested in previous versions; conformational change occurs at the level of atomic nuclei. Bottom: Depiction of tubulin (gray) superpositioned in both states. 9. Topological Quantum Computing in Orch OR
Quantum processes in Orch OR have consistently been ascribed to London forces in tubulin hydrophobic pockets, non-polar intra-protein regions, e.g. of π electron resonance rings of aromatic amino acids including tryptophan and phenylalanine. This assertion is based on (1) Fröhlich's suggestion that protein states are synchronized by electron cloud dipole oscillations in intra-protein non-polar regions, and (2) anesthetic gases selectively erasing consciousness by London forces in non-polar, hydrophobic regions in various neuronal proteins (e.g. tubulin, membrane proteins, etc.). London forces are weak, but numerous and able to act cooperatively to regulate protein states (Voet and Voet, 1995).
The structure of tubulin became known in 1998 (Nogales et al, 1998), allowing identification of non-polar amino acids and hydrophobic regions. Figure 9 shows locations of phenyl and indole π electron resonance rings of non-polar aromatic amino acids phenylalanine and tryptophan in tubulin. The ring locations are clustered along somewhat continuous pathways (within 2 nanometers) through tubulin. Thus, rather than hydrophobic pockets, tubulin may have within it quantum hydrophobic channels, or streams, linear arrays of electron resonance clouds suitable for cooperative, long-range quantum London forces. These quantum channels within each tubulin appear to align with those in adjacent tubulins in microtubule lattices, matching helical winding patterns (Figure 12). This in turn may support topological quantum computing in Orch OR.
Quantum bits, or qubits in quantum computers are generally envisioned as information bits in superposition of simultaneous alternative representations, e.g. both 1 and 0. Topological qubits are superpositions of alternative pathways, or channels which intersect repeatedly on a surface, forming 'braids'. Quasiparticles called anyons travel along such pathways, the intersections forming logic gates, with particular braids or pathways corresponding with particular information states, or bits. In superposition, anyons follow multiple braided pathways simultaneously, then reduce, or collapse to one particular pathway and functional output. Topological qubits are intrinsically resistant to decoherence.
An Orch OR qubit based on topological quantum computing specific to microtubule polymer geometry was suggested in Hameroff et al. (2002). Conductances along particular microtubule lattice geometry, e.g. Fibonacci helical pathways, were proposed to function as topological bits and qubits. Bandyopadhyay (2011) has preliminary evidence for ballistic conductance along different, discrete helical pathways in single microtubules
As an extension of Orch OR, we suggest topological qubits in microtubules based on quantum hydrophobic channels, e.g. continuous arrays of electron resonance rings within and among tubulins in microtubule lattices, e.g. following Fibonacci pathways. Cooperative London forces (electron cloud dipoles) in quantum hydrophobic channels may enable long-range coherence and topological quantum computing in microtubules necessary for optimal brain function and consciousness.
Figure 12. Left: Microtubule A-lattice configuration with lines connecting proposed hydrophobic channels of near-contiguous (<2 nanometer separation) electron resonance rings of phenylalanine and tryptophan. Right: Microtubule B-lattice with fewer such channels and lacking Fibonacci pathways. B-lattice microtubules have a vertical seam dislocation (not shown).
Figure 13. Extending microtubule A-lattice hydrophobic channels (Figure 12) results in helical winding patterns matching Fibonacci geometry. Bandyopadhyay (2011) has evidence for ballistic conductance and quantum inteference along such helical pathways which may be involved in topological quantum computing. Quantum electronic states of London forces in hydrophobic channels result in slight superposition separation of atomic nuclei, sufficient EG for Orch OR. This image may be taken to represent superposition of four possible topological qubits which, after time T=tau, will undergo OR, and reduce to specific pathway(s) which then implement function. 10. Conclusion: Consciousness in the Universe
Our criterion for proto-consciousness is OR . It would be unreasonable to refer to OR as the criterion for actual consciousness, because, according to the DP scheme, OR processes would be taking place all the time, and would be providing the effective randomness that is characteristic of quantum measurement. Quantum superpositions will continually be reaching the DP threshold for OR in non-biological settings as well as in biological ones, and usually take place in the purely random environment of a quantum system under measurement. Instead, our criterion for consciousness is Orch OR, conditions for which are fairly stringent: superposition must be isolated from the decoherence effects of the random environment for long enough to reach the DS threshold. Small superpositions are easier to isolate, but require longer reduction times τ. Large superpositions will reach threshold quickly, but are intrinsically more difficult to isolate. Nonetheless, we believe that there is evidence that such superpositions could occur within sufficiently large collections of microtubules in the brain for τ to be some fraction of a second.
Very large mass displacements can also occur in the universe in quantum-mechanical situations, for example in the cores of neutron stars. By OR , such superpositions would reduce extremely quickly, and classically unreasonable superpositions would be rapidly eliminated. Nevertheless, sentient creatures might have evolved in parts of the universe that would be highly alien to us. One possibility might be on neutron star surfaces, an idea that was developed ingeniously and in great detail by Robert Forward in two science-fiction stories (Dragon's Egg in 1980, Starquake in 1989). Such creatures (referred to as 'cheelas' in the books, with metabolic processes and OR-like events occurring at rates of around a million times that of a human being) could arguably have intense experiences, but whether or not this would be possible in detail is, at the moment, a very speculative matter. Nevertheless, the Orch OR proposal offers a possible route to rational argument, as to whether life of a totally alien kind such as this might be possible, or even probable, somewhere in the universe.
Such speculations also raise the issue of the 'anthropic principle', according to which it is sometimes argued that the particular dimensionless constants of Nature that we happen to find in our universe are 'fortuitously' favorable to human existence. (A dimensionless physical constant is a pure number, like the ratio of the electric to the gravitational force between the electron and the proton in a hydrogen atom, which in this case is a number of the general order of 1040.) The key point is not so much to do with human existence, but the existence of sentient beings of any kind. Is there anything coincidental about the dimensionless physical constants being of such a nature that conscious life is possible at all? For example, if the mass of the neutron had been slightly less than that of the proton, rather than slightly larger, then neutrons rather than protons would have been stable, and this would be to the detriment of the whole subject of chemistry. These issues are frequently argued about (see Barrow and Tipler 1986), but the Orch OR proposal provides a little more substance to these arguments, since a proposal for the possibility of sentient life is, in principle, provided.
The recently proposed cosmological scheme of conformal cyclic cosmology (CCC) (Penrose 2010) also has some relevance to these issues. CCC posits that what we presently regard as the entire history of our universe, from its Big-Bang origin (but without inflation) to its indefinitely expanding future, is but one aeon in an unending succession of similar such aeons, where the infinite future of each matches to the big bang of the next via an infinite change of scale. A question arises whether the dimensionless constants of the aeon prior to ours, in the CCC scheme, are the same as those in our own aeon, and this relates to the question of whether sentient life could exist in that aeon as well as in our own. These questions are in principle answerable by observation, and again they would have a bearing on the extent or validity of the Orch OR proposal. If Orch OR turns out to be correct, in it essentials, as a physical basis for consciousness, then it opens up the possibility that many questions may become answerable, such as whether life could have come about in an aeon prior to our own, that would have previously seemed to be far beyond the reaches of science.
Moreover, Orch OR places the phenomenon of consciousness at a very central place in the physical nature of our universe, whether or not this 'universe' includes aeons other than just our own. It is our belief that, quite apart from detailed aspects of the physical mechanisms that are involved in the production of consciousness in human brains, quantum mechanics is an incomplete theory. Some completion is needed, and the DP proposal for an OR scheme underlying quantum theory's R-process would be a definite possibility. If such a scheme as this is indeed respected by Nature, then there is a fundamental additional ingredient to our presently understood laws of Nature which plays an important role at the Planck-scale level of space-time structure. The Orch OR proposal takes advantage of this, suggesting that conscious experience itself plays such a role in the operation of the laws of the universe.
Acknowledgment We thank Dave Cantrell, University of Arizona Biomedical Communications for artwork.
Atema, J. (1973). Microtubule theory of sensory transduction. Journal of Theoretical Biology, 38, 181-90.
Bandyopadhyay A (2011) Direct experimental evidence for quantum states in microtubules and topological invariance. Abstracts: Toward a Science of Consciousness 2011, Sockholm, Sweden, HYPERLINK ""
Barrow, J.D. and Tipler, F.J. (1986) The Anthropic Cosmological Principle (OUP, Oxford).
Bell, J.S. (1966) Speakable and Unspeakable in Quantum Mechanics (Cambridge Univ. Press, Cambridge; reprint 1987).
Benioff, P. (1982). Quantum mechanical Hamiltonian models of Turing Machines. Journal of Statistical Physics, 29, 515‑46.
Bennett C.H., and Wiesner, S.J. (1992). Communication via 1- and 2-particle operators on Einstein-Podolsky-Rosen states. Physical Reviews Letters, 69, 2881-84.
Bensimon G, Chemat R (1991) Microtubule disruption and cognitive defects: effect of colchicine on teaming behavior in rats. Pharmacol. Biochem. Behavior 38:141-145.
Bohm, D. (1951) Quantum Theory (Prentice–Hall, Englewood-Cliffs.) Ch. 22, sect. 15-19. Reprinted as: The Paradox of Einstein, Rosen and Podolsky, in Quantum Theory and Measurement, eds., J.A. Wheeler and W.H. Zurek (Princeton University Press, Princeton, 1983).
Bernroider, G. and Roy, S. (2005) Quantum entanglement of K ions, multiple channel states and the role of noise in the brain. SPIE 5841-29:205–14.
Bouwmeester, D., Pan, J.W., Mattle, K., Eibl, M., Weinfurter, H. and Zeilinger, A. (1997) Experimental quantum teleportation. Nature 390 (6660): 575-579.
Brunden K.R., Yao Y., Potuzak J.S., Ferrer N.I., Ballatore C., James M.J., Hogan A.M., Trojanowski J.Q., Smith A.B. 3rd and Lee V.M. (2011) The characterization of microtubule-stabilizing drugs as possibletherapeutic agents for Alzheimer's disease and related taupathies. Pharmacological Research, 63(4), 341-51.
Chalmers, D. J., (1996). The conscious mind ‑ In search of a fundamental theory. Oxford University Press, New York.
Crick, F., and Koch, C., (1990). Towards a neurobiological theory of consciousness. Seminars in the Neurosciences, 2, 263‑75.
Dennett, D.C. (1991). Consciousness explained. Little Brown, Boston. MA.
Dennett, D.C. (1995) Darwin's dangerous idea: Evolution and the Meanings of Life, Simon and Schuster.
Dermietzel, R. (1998) Gap junction wiring: a 'new' principle in cell-to-cell communication in the nervous system? Brain Research Reviews. 26(2-3):176-83.
Deutsch, D. (1985) Quantum theory, the Church–Turing principle and the universal quantum computer, Proceedings of the Royal Society (London) A400, 97-117.
Diósi, L. (1987) A universal master equation for the gravitational violation of quantum mechanics, Physics Letters A 120 (8):377-381.
Diósi, L. (1989). Models for universal reduction of macroscopic quantum fluctuations Physical Review A, 40, 1165-74.
Draguhn A, Traub RD, Schmitz D, Jefferys (1998). Electrical coupling underlies high-frequency oscillations in the hippocampus in vitro. Nature, 394(6689), 189-92.
Eccles, J.C. (1992). Evolution of consciousness. Proceedings of the National Academy of Sciences, 89, 7320-24.
Engel GS, Calhoun TR, Read EL, Ahn T-K, Mancal T, Cheng Y-C, Blankenship RE, Fleming GR (2007) Evidence for wavelike energy transfer through quantum coherence in photosynthetic systems. Nature 446:782-786.
Everett, H. (1957). Relative state formulation of quantum mechanics. In Quantum Theory and Measurement, J.A. Wheeler and W.H. Zurek (eds.) Princeton University Press, 1983; originally in Reviews of Modern Physics, 29, 454-62.
Feynman, R.P. (1986). Quantum mechanical computers. Foundations of Physics, 16(6), 507‑31.
Forward, R. (1980) Dragon's Egg. Ballentine Books.
Forward, R. (1989) Starquake. Ballentine Books.
Fröhlich, H. (1968). Long‑range coherence and energy storage in biological systems. International Journal of Quantum Chemistry, 2, 641-9.
Fröhlich, H. (1970). Long range coherence and the actions of enzymes. Nature, 228, 1093.
Fröhlich, H. (1975). The extraordinary dielectric properties of biological materials and the action of enzymes. Proceedings of the National Academy of Sciences, 72, 4211‑15.
Galarreta, M. and Hestrin, S. (1999). A network of fast-spiking cells in the neocortex connected by electrical synapses. Nature, 402, 72-75.
Gauger E., Rieper E., Morton J.J.L., Benjamin S.C., Vedral V. (2011) Sustained quantum coherence and entanglement in the avian compass
Ghirardi, G.C., Rimini, A., and Weber, T. (1986). Unified dynamics for microscopic and macroscopic systems. Physical Review D, 34, 470.
Ghirardi, G.C., Grassi, R., and Rimini, A. (1990). Continuous-spontaneous reduction model involving gravity. Physical Review A, 42, 1057-64.
Grush R., Churchland P.S. (1995), 'Gaps in Penrose's toilings', J. Consciousness Studies, 2 (1):10-29.
Hagan S, Hameroff S, and Tuszynski J, (2001). Quantum Computation in Brain Microtubules? Decoherence and Biological Feasibility, Physical Review E, 65, 061901.
Hameroff, S.R., and Watt R.C. (1982). Information processing in microtubules. Journal of Theoretical Biology, 98, 549‑61.
Hameroff, S.R.(1987) Ultimate computing: Biomolecular consciousness and nanotechnology. Elsevier North-Holland, Amsterdam.
Hameroff, S.R., and Penrose, R., (1996a). Orchestrated reduction of quantum coherence in brain microtubules: A model for consciousness. In: Toward a Science of Consciousness ; The First Tucson Discussions and Debates. Hameroff, S.R., Kaszniak, and Scott, A.C., eds., 507-540, MIT Press, Cambridge MA, 507-540. Also published in Mathematics and Computers in Simulation (1996) 40:453-480.
Hameroff, S.R., and Penrose, R. (1996b). Conscious events as orchestrated spacetime selections. Journal of Consciousness Studies, 3(1), 36‑53.
Hameroff, S. (1998a). Quantum computation in brain microtubules? The Penrose-Hameroff "Orch OR" model of consciousness. Philosophical Transactions of the Royal Society (London) Series A, 356, 1869-1896.
Hameroff, S. (1998b). 'Funda-mentality': is the conscious mind subtly linked to a basic level of the universe? Trends in Cognitive Science, 2, 119-127.
Hameroff, S. (1998c). Anesthesia, consciousness and hydrophobic pockets – A unitary quantum hypothesis of anesthetic action. Toxicology Letters, 100, 101, 31-39.
Hameroff, S. (1998d). HYPERLINK ""Did consciousness cause the Cambrian evolutionary explosion? In: Toward a Science of Consciousness II: The Second Tucson Discussions and Debates. Eds. Hameroff, S.R., Kaszniak, A.W., and Scott, A.C., MIT Press, Cambridge, MA.
Hameroff, S., Nip, A., Porter, M., and Tuszynski, J. (2002). Conduction pathways in microtubules, biological quantum computation and microtubules. Biosystems, 64(13), 149-68.
Hameroff S.R., & Watt R.C. (1982) Information processing in microtubules. Journal of Theoretical Biology 98:549‑61.
Hameroff, S.R. (2006) The entwined mysteries of anesthesia and consciousness. Anesthesiology 105:400-412.
Hameroff, S.R, Craddock TJ, Tuszynski JA (2010) Memory 'bytes' – Molecular match for CaMKII phosphorylation encoding of microtubule lattices. Journal of Integrative Neuroscience 9(3):253-267.
He, R-H., Hashimoto, M., Karapetyan. H., Koralek, J.D., Hinton, J.P., Testaud, J.P., Nathan, V., Yoshida, Y., Yao, H., Tanaka, K., Meevasana, W., Moore, R.G., Lu, D.H.,Mo, S-K., Ishikado, M., Eisaki, H., Hussain, Z., Devereaux, T.P., Kivelson, S.A., Orenstein, Kapitulnik, J.A., Shen, Z-X. (2011) From a Single-Band Metal to a High Temperature Superconductor via Two Thermal Phase Transitions. Science, 2011;331 (6024): 1579-1583.
Hebb, D.O. (1949). Organization of Behavior: A Neuropsychological Theory, John Wiley and Sons, New York.
Huxley TH (1893; 1986) Method and Results: Essays.
Kant I (1781) Critique of Pure Reason (Translated and edited by Paul Guyer and Allen W. Wood, Cambridge University Press, 1998).
Kibble, T.W.B. (1981). Is a semi-classical theory of gravity viable? In Quantum Gravity 2: a Second Oxford Symposium; eds. C.J. Isham, R. Penrose, and D.W. Sciama (Oxford University Press, Oxford), 63-80.
Koch, C., (2004) The Quest for Consciousness: A Neurobiological Approach, Englewood, CO., Roberts and Co.
Koch C, Hepp K (2006) Quantm mechanics in the brain. Nature 440(7084):611.
Libet, B., Wright, E.W. Jr., Feinstein, B., & Pearl, D.K. (1979) Subjective referral of the timing for a conscious sensory experience. Brain 102:193‑224.
Luo L, Lu J (2011) Temperature dependence of protein folding deduced from quantum transition.
Lutz A, Greischar AL, Rawlings NB, Ricard M, Davidson RJ (2004) Long-term meditators self-induce high-amplitude gamma synchrony during mental practice The Proceedings of the National Academy of Sciences USA 101(46)16369-16373.
Macikic I., de Riedmatten H., Tittel W., Zbinden H. and Gisin N. (2002) Long-distance teleportation of qubits at telecommunication wavelengths Nature 421, 509-513.
Margulis, L. and Sagan, D. 1995. What is life? Simon and Schuster, N.Y.
Marshall, W, Simon, C., Penrose, R., and Bouwmeester, D (2003). Towards quantum superpositions of a mirror. Physical Review Letters 91, 13-16; 130401.
McKemmish LK, Reimers JR, McKenzie RH, Mark AE, Hush NS (2009) Penrose-Hameroff orchestrated objective-reduction proposal for human consciousness is not biologically feasible. Physical Review E. 80(2 Pt 1):021912.
Moroz, I.M., Penrose, R., and Tod, K.P. (1998) Spherically-symmetric solutions of the Schrödinger–Newton equations:. Classical and Quantum Gravity, 15, 2733-42.
Nogales E, Wolf SG, Downing KH. (1998) HYPERLINK ""Structure of the αβ-tubulin dimer by electron crystallography. Nature. 391, 199-203.
Ouyang, M., & Awschalom, D.D. (2003) Coherent spin transfer between molecularly bridged quantum dots. Science 301:1074-78.
Pearle, P. (1989). Combining stochastic dynamical state-vector reduction with spontaneous localization. Physical Review A, 39, 2277-89.
Pearle, P. and Squires, E.J. (1994). Bound-state excitation, nucleon decay experiments and models of wave-function collapse. Physical Review Letters, 73(1), 1-5.
Penrose, R. (1989). The Emperor's New Mind: Concerning Computers, Minds, and the Laws of Physics, Oxford University Press, Oxford.
Penrose, R. (1993). Gravity and quantum mechanics. In General Relativity and Gravitation 13. Part 1: Plenary Lectures 1992. Proceedings of the Thirteenth International Conference on General Relativity and Gravitation held at Cordoba, Argentina, 28 June - 4 July 1992. Eds. R.J.Gleiser, C.N.Kozameh, and O.M.Moreschi (Inst. of Phys. Publ. Bristol and Philadelphia), 179-89.
Penrose, R. (1994). Shadows of the Mind; An Approach to the Missing Science of Consciousness. Oxford University Press, Oxford.
Penrose, R. (1996). On gravity's role in quantum state reduction. General Relativity and Gravitation, 28, 581-600.
Penrose, R. (2000). Wavefunction collapse as a real gravitational effect. In Mathematical Physics 2000, Eds. A.Fokas, T.W.B.Kibble, A.Grigouriou, and B.Zegarlinski. Imperial College Press, London, 266-282.
Penrose, R. (2002). John Bell, State Reduction, and Quanglement. In Quantum Unspeakables: From Bell to Quantum Information, Eds. Reinhold A. Bertlmann and Anton Zeilinger , Springer-Verlag, Berlin, 319-331.
Penrose, R. (2004). The Road to Reality: A Complete Guide to the Laws of the Universe. Jonathan Cape, London.
Penrose, R. (2009). Black holes, quantum theory and cosmology (Fourth International Workshop DICE 2008), Journal of Physics, Conference Series 174, 012001.
Penrose, R. (2010). Cycles of Time: An Extraordinary New View of the Universe. Bodley Head, London.
Penrose R. and Hameroff S.R. (1995) What gaps? Reply to Grush and Churchland. Journal of Consciousness Studies.2:98-112.
Percival, I.C. (1994) Primary state diffusion. Proceedings of the Royal Society (London) A, 447, 189-209.
Pokorný, J., Hasek, J., Jelínek, F., Saroch, J. & Palan, B. (2001) Electromagnetic activity of yeast cells in the M phase. Electro Magnetobiol 20, 371–396.
Pokorný, J. (2004) Excitation of vibration in microtubules in living cells. Bioelectrochem. 63: 321-326.
Polkinghorne, J. (2002) Quantum Theory, A Very Short Introduction. Oxford University Press, Oxford.
Rae, A.I.M. (1994) Quantum Mechanics. Institute of Physics Publishing; 4th edition 2002.
Reimers JR, McKemmish LK, McKenzie RH, Mark AE, Hush NS (2009) Weak, strong, and coherent regimes of Frohlich condensation and their applications to terahertz medicine and quantum consciousness Proceedings of the National Academy of Sciences USA 106(11):4219-24
Reiter GF, Kolesnikov AI, Paddison SJ, Platzman PM, Moravsky AP, Adams MA, Mayers J (2011) Evidence of a new quantum state of nano-confined water
Rieper E, Anders J, Vedral V (2011) Quantum entanglement between the electron clouds of nucleic acids in DNA.
Samsonovich A, Scott A, Hameroff S (1992) Acousto-conformational transitions in cytoskeletal microtubules: Implications for intracellular information processing. Nanobiology 1:457-468.
Sherrington, C.S. (1957) Man on His Nature, Second Edition, Cambridge University Press.
Smolin, L. (2002). Three Roads to Quantum Gravity. Basic Books. New York.
Tegmark, M. (2000) The importance of quantum decoherence in brain processes. Physica Rev E 61:4194-4206.
Tittel, W, Brendel, J., Gisin, B., Herzog, T., Zbinden, H., and Gisin, N. (1998) Experimental demonstration of quantum correlations over more than 10 km, Physical Reiew A, 57:3229-32.
Tononi G (2004) An information integration theory of consciousness BMC Neuroscience 5:42.
Turin L (1996) A spectroscopic mechanism for primary olfactory reception Chem Senses 21(6) 773-91.
Tuszynski JA, Brown JA, Hawrylak P, Marcer P (1998) Dielectric polarization, electrical conduction, information processing and quantum computation in microtubules. Are they plausible? Phil Trans Royal Society A 356:1897-1926.
Tuszynski, J.A., Hameroff, S., Sataric, M.V., Trpisova, B., & Nip, M.L.A. (1995) Ferroelectric behavior in microtubule dipole lattices; implications for information processing, signaling and assembly/disassembly. Journal of Theoretical Biology 174:371–80.
Voet, D., Voet, J.G. 1995. Biochemistry, 2nd edition. Wiley, New York.
Wegner, D.M. (2002) The illusion of conscious will Cambridge MA, MIT Press.
Whitehead, A.N., (1929) Process and Reality. New York, Macmillan.
Whitehead, A.N. (1933) Adventure of Ideas, London, Macmillan.
Wigner E.P. (1961). Remarks on the mind-body question, in The Scientist Speculates, ed. I.J. Good (Heinemann, London). In Quantum Theory and Measurement, eds., J.A. Wheeler and W.H. Zurek, Princeton Univsity Press, Princeton, MA. (Reprinted in E. Wigner (1967), Symmetries and Reflections, Indiana University Press, Bloomington).
Wolfram, S. (2002) A New Kind of Science. Wolfram Media incorporated. |
986c88fa1b0e6ad0 | Citation for this page in APA citation style. Close
Mortimer Adler
Rogers Albritton
Alexander of Aphrodisias
Samuel Alexander
Louise Antony
Thomas Aquinas
David Armstrong
Harald Atmanspacher
Alexander Bain
Mark Balaguer
Jeffrey Barrett
William Belsham
Henri Bergson
Isaiah Berlin
Bernard Berofsky
Robert Bishop
Susanne Bobzien
Emil du Bois-Reymond
Hilary Bok
George Boole
Émile Boutroux
Joseph Keim Campbell
Ernst Cassirer
David Chalmers
Roderick Chisholm
Randolph Clarke
Samuel Clarke
Anthony Collins
Antonella Corradini
Diodorus Cronus
Jonathan Dancy
Donald Davidson
Mario De Caro
Daniel Dennett
Jacques Derrida
René Descartes
Richard Double
Fred Dretske
John Dupré
John Earman
Laura Waddell Ekstrom
Herbert Feigl
John Martin Fischer
Owen Flanagan
Luciano Floridi
Philippa Foot
Alfred Fouilleé
Harry Frankfurt
Richard L. Franklin
Michael Frede
Carl Ginet
Nicholas St. John Green
H.Paul Grice
Ian Hacking
Ishtiyaque Haji
Stuart Hampshire
William Hasker
Georg W.F. Hegel
Martin Heidegger
Thomas Hobbes
David Hodgson
Shadsworth Hodgson
Ted Honderich
Pamela Huby
David Hume
Ferenc Huoranszki
William James
Lord Kames
Robert Kane
Immanuel Kant
Tomis Kapitan
Jaegwon Kim
William King
Christine Korsgaard
Andrea Lavazza
Keith Lehrer
Gottfried Leibniz
Michael Levin
George Henry Lewes
David Lewis
Peter Lipton
John Locke
Michael Lockwood
E. Jonathan Lowe
John R. Lucas
James Martineau
Storrs McCall
Hugh McCann
Colin McGinn
Michael McKenna
Brian McLaughlin
Paul E. Meehl
Uwe Meixner
Alfred Mele
John Stuart Mill
Dickinson Miller
C. Lloyd Morgan
Thomas Nagel
Friedrich Nietzsche
John Norton
Robert Nozick
William of Ockham
Timothy O'Connor
David F. Pears
Charles Sanders Peirce
Derk Pereboom
Steven Pinker
Karl Popper
Huw Price
Hilary Putnam
Willard van Orman Quine
Frank Ramsey
Ayn Rand
Thomas Reid
Charles Renouvier
Nicholas Rescher
Richard Rorty
Josiah Royce
Bertrand Russell
Paul Russell
Gilbert Ryle
Kenneth Sayre
Moritz Schlick
Arthur Schopenhauer
John Searle
Wilfrid Sellars
Henry Sidgwick
Walter Sinnott-Armstrong
Saul Smilansky
Michael Smith
L. Susan Stebbing
George F. Stout
Galen Strawson
Peter Strawson
Eleonore Stump
Richard Taylor
Kevin Timpe
Mark Twain
Peter van Inwagen
Manuel Vargas
John Venn
Kadri Vihvelin
G.H. von Wright
David Foster Wallace
R. Jay Wallace
Ted Warfield
Roy Weatherford
William Whewell
Alfred North Whitehead
David Widerker
David Wiggins
Bernard Williams
Ludwig Wittgenstein
Susan Wolf
Michael Arbib
Bernard Baars
John S. Bell
Charles Bennett
Ludwig von Bertalanffy
Susan Blackmore
Margaret Boden
David Bohm
Niels Bohr
Ludwig Boltzmann
Emile Borel
Max Born
Walther Bothe
Hans Briegel
Leon Brillouin
Stephen Brush
Henry Thomas Buckle
Donald Campbell
Anthony Cashmore
Eric Chaisson
Jean-Pierre Changeux
Arthur Holly Compton
John Conway
E. H. Culverwell
Charles Darwin
Terrence Deacon
Max Delbrück
Abraham de Moivre
Paul Dirac
Hans Driesch
John Eccles
Arthur Stanley Eddington
Paul Ehrenfest
Albert Einstein
Hugh Everett, III
Franz Exner
Richard Feynman
Joseph Fourier
Michael Gazzaniga
GianCarlo Ghirardi
Nicolas Gisin
Paul Glimcher
Thomas Gold
Brian Goodwin
Joshua Greene
Jacques Hadamard
Stuart Hameroff
Patrick Haggard
Augustin Hamon
Sam Harris
Martin Heisenberg
Werner Heisenberg
William Stanley Jevons
Pascual Jordan
Simon Kochen
Stephen Kosslyn
Ladislav Kovàč
Rolf Landauer
Alfred Landé
Pierre-Simon Laplace
David Layzer
Benjamin Libet
Hendrik Lorentz
Josef Loschmidt
Ernst Mach
Henry Margenau
James Clerk Maxwell
Ernst Mayr
Ulrich Mohrhoff
Jacques Monod
Wolfgang Pauli
Massimo Pauri
Roger Penrose
Steven Pinker
Max Planck
Susan Pockett
Henri Poincaré
Daniel Pollen
Ilya Prigogine
Hans Primas
Adolphe Quételet
Jerome Rothstein
David Ruelle
Erwin Schrödinger
Aaron Schurger
Claude Shannon
Herbert Simon
Dean Keith Simonton
B. F. Skinner
Roger Sperry
Henry Stapp
Antoine Suarez
Leo Szilard
William Thomson (Kelvin)
Peter Tse
John von Neumann
Daniel Wegner
Steven Weinberg
Paul A. Weiss
Norbert Wiener
Eugene Wigner
E. O. Wilson
H. Dieter Zeh
Ernst Zermelo
Wojciech Zurek
Schrödinger's Cat
Erwin Schrödinger's intention for his infamous cat-killing box was to discredit certain non-intuitive implications of quantum mechanics, of which his wave mechanics was the second formulation. Schrödinger's wave mechanics is more continuous mathematically, and apparently more deterministic, than Werner Heisenberg's matrix mechanics.
Schrödinger did not like Niels Bohr's idea of "quantum jumps" between Bohr's "stationary states" - the different "energy levels" in an atom. Bohr's "quantum postulate" said that the jumps between discrete states emitted (or absorbed) energy in the amount hν = E2 - E1.
Bohr did not accept Albert Einstein's 1905 hypothesis that the radiation was a discrete quantum of energy hν. Bohr (and Max Planck believed radiation was continuous waves. This was the question of wave-particle duality, which Einstein saw as early as 1909.
It was Einstein who originated the suggestion that the superposition of Schrödinger's wave functions implied that two different physical states could exist at the same time. This was a serious interpretational error that plagues the foundation of quantum physics to this day.
This error is found frequently in discussions of so-called "entangled" states (see the Einstein-Podolsky-Rosen experiment).
Entanglement occurs only for atomic level phenomena and over limited distances that preserve the coherence of two-particle wave functions by isolating the systems (and their eigenfunctions) from interactions with the environment.
We never actually "see" or measure any system (whether a microscopic electron or a macroscopic cat) in two distinct states. Quantum mechanics simply predicts a significant probability of the system being found in these different states. And these probability predictions are borne out by the statistics of large numbers of identical experiments.
The Pauli Exclusion Principle says (correctly) that two identical indistinguishable (fermion) particles cannot be in the same place at the same time. Entanglement is often interpreted (incorrectly) as saying that a single particle can be in two places at the same time. Dirac's Principle of Superposition does not say that a particle
is in two states at the same time, only that there is a non-zero probability of finding it in either state should it be measured.
Einstein wrote to Schrödinger with the idea that the decay of a radioactive nucleus could be arranged to set off a large explosion. Since the moment of decay is unknown, Einstein argued that the superposition of decayed and undecayed nuclear states implies the superposition of an explosion and no explosion. It does not. In both the microscopic and macroscopic cases, quantum mechanics simply estimates the probability amplitudes for the two cases.
Many years later, Richard Feynman made Einstein's suggestion into a nuclear explosion! (What is it about some scientists?)
Einstein and Schrödinger did not like the fundamental randomness implied by quantum mechanics. They wanted to restore determinism to physics. Indeed Schrödinger's wave equation predicts a perfectly deterministic time evolution of the wave function. But what is evolving deterministically is only abstract probabilities. And these probabilities are confirmed only in the statistics of large numbers of identically prepared experiments. Randomness enters only when a measurement is made and the wave function "collapses" into one of the possible states of the system.
Schrödinger devised a variation in which the random radioactive decay would kill a cat. Observers could not know what happened until the box is opened.
The details of the tasteless experiment include:
• a Geiger counter which produces an avalanche of electrons when an alpha particle passes through it
• a bit of radioactive material with a decay half-life likely to emit an alpha particle in the direction of the Geiger counter during a time T
• an electrical circuit energized by the electrons which drops a hammer
• a flask of a deadly hydrocyanic acid gas, smashed open by the hammer.
The gas will kill the cat, but the exact time of death is unpredictable and random because of the irreducible quantum indeterminacy in the time of decay (and the direction of the decay particle, which might miss the Geiger counter!).
This thought experiment is widely misunderstood. It was meant (by both Einstein and Schrödinger) to suggest that quantum mechanics describes the simultaneous (and obviously contradictory) existence of a live and dead cat. Here is the famous diagram with a cat both dead and alive.
What's wrong with this picture?
Quantum mechanics claims only that the time evolution of the Schrödinger wave functions for the probability amplitudes of nuclear decay accurately predict the proportion of nuclear decays that will occur in a given time interval.
The (no classical) probabilities (no interference between terms) simply predict the number of live and dead cats that will be observed in a large number of identical experiments
More specifically, quantum mechanics provides us with the accurate prediction that if this experiment is repeated many times (the SPCA would disapprove), half of the experiments will result in dead cats.
Note that this is a problem in epistemology. What knowledge is it that quantum physics provides?
If we open the box at the time T when there is a 50% probability of an alpha particle emission, the most a physicist can know is that there is a 50% chance that the radioactive decay will have occurred and the cat will be observed as dead or dying.
If the box were opened earlier, say at T/2, there is only a 25% chance that the cat has died. Schrödinger's superposition of live and dead cats would look like this.
If the box were opened later, say at 2T, there is only a 25% chance that the cat is still alive. Quantum mechanics is giving us only statistical information - knowledge about probabilities.
Schrödinger is simply wrong that the mixture of nuclear wave functions that accurately describes decay can be magnified to the macroscopic world to describe a similar mixture of live cat and dead cat wave functions and the simultaneous existence of live and dead cats.
The kind of coherent superposition of states needed to describe an atomic system as in a linear combination of states (see Paul Dirac's explanation of superposition using three polarizers) does not describe macroscopic systems.
Instead of a linear combination of pure quantum states, with quantum interference between the states, i.e.,
| Cat > = ( 1/√2) | Live > + ( 1/√2) | Dead >,
quantum mechanics tells us only that there is 50% chance of finding the cat in either the live or dead state, i.e.,
Cats = (1/2) Live + (1/2) Dead.
Just as in the quantum case, this probability prediction is confirmed by the statistics of repeated identical experiments, but no interference between these states is seen.
What do exist simultaneously in the macroscopic world are genuine alternative possibilities for future events. There is the real possibility of a live or dead cat in any particular experiment. Which one is found is irreducibly random, unpredictable, and a matter of pure chance.
Genuine alternative possibilities is what bothered physicists like Einstein, Schrödinger, and Max Planck who wanted a return to deterministic physics. It also bothers determinist and compatibilist philosophers who have what William James calls an "antipathy to chance." Ironically, it was Einstein himself, in 1916, who discovered the existence of irreducible chance, in the elementary interactions of matter and radiation.
Until the information comes into existence, the future is indeterministic. Once information is macroscopically encoded, the past is determined.
How does information physics resolve the paradox?
As soon as the alpha particle sets off the avalanche of electrons in the Geiger counter (an irreversible event with a significant entropy increase), new information is created in the world.
For example, a simple pen-chart recorder attached to the Geiger counter could record the time of decay, which a human observer could read at any later time. Notice that, as usual in information creation, the energy expended by a recorder increases the entropy more than the increased information decreases it, thus satisfying the second law of thermodynamics.
Even without a mechanical recorder, the cat's death sets in motion biological processes that constitute an equivalent, if gruesome, recording. When a dead cat is the result, a sophisticated autopsy can provide an approximate time of death, because the cat's body is acting as an event recorder. There never is a superposition (in the sense of the simultaneous existence) of live and dead cats.
The paradox points clearly to the Information Philosophy solution to the problem of measurement. Human observers are not required to make measurements. In this case, the cat is the observer.
In most physics measurements, the new information is captured by apparatus well before any physicist has a chance to read any dials or pointers that indicate what happened. Indeed, in today's high-energy particle interaction experiments, the data may be captured but not fully analyzed until many days or even months of computer processing establishes what was observed. In this case, the experimental apparatus is the observer.
And, in general, the universe is its own observer, able to record (and sometimes preserve) the information created.
The basic assumption made in Schrödinger's cat thought experiments is that the deterministic Schrödinger equation describing a microscopic superposition of decayed and non-decayed radioactive nuclei evolves deterministically into a macroscopic superposition of live and dead cats.
But since the essence of a "measurement" is an interaction with another system (quantum or classical) that creates information to be seen (later) by an observer, the interaction between the nucleus and the cat is more than enough to collapse the wave function. Calculating the probabilities for that collapse allows us to estimate the probabilities of live and dead cats. These are probabilities, not probability amplitudes. They do not interfere with one another.
After the interaction, they are not in a superposition of states. We always have either a live cat or a dead cat, just as we always observe a complete photon after a polarization measurement and not a superposition of photon states, as P.A.M.Dirac explains so simply and clearly.
According to quantum mechanics the result of this experiment will be that sometimes one will find a whole photon, of energy equal to the energy of the incident photon, on the back side and other times one will find nothing. When one finds a whole photon, it will be polarized perpendicular to the optic axis. One will never find only a part of a photon on the back side. If one repeats the experiment a large number of times, one will find the photon on the back side in a fraction sin2α of the total number of times.
Quantum mechanics similarly gives us only the probability of finding live cats (or dead cats) in a large number of identically prepared experiments (pace the SPCA)
Thus we may say that the photon has a probability sin2α of passing through the tourmaline and appearing on the back side polarized perpendicular to the axis and a probability cos2α of being absorbed. These values for the probabilities lead to the correct classical results for an incident beam containing a large number of photons.
In this way we preserve the individuality of the photon in all cases. We are able to do this, however, only because we abandon the determinacy of the classical theory. The result of an experiment is not determined, as it would be according to classical ideas, by the conditions under the control of the experimenter. The most that can be predicted is a set of possible results, with a probability of occurrence for each...
When we make the photon meet a tourmaline crystal, we are subjecting it to an observation. We are observing whether it is polarized parallel or perpendicular to the optic axis. The effect of making this observation is to force the photon entirely into the state of parallel or entirely into the state of perpendicular polarization. It has to make a sudden jump from being partly in each of these two states to being entirely in one or other of them. Which of the two states it will jump into cannot be predicted, but is governed only by probability laws. If it jumps into the parallel state it gets absorbed and if it jumps into the perpendicular state it passes through the crystal and appears on the other side preserving this state of polarization.
Superposition and Indeterminacy
The non-classical nature of the superposition process is brought out clearly if we consider the superposition of two states, A and B, such that there exists an observation which, when made on the system in state A, is certain to lead to one particular result, a say, and when made on the system in state B is certain to lead to some different result, b say. What will be the result of the observation when made on the system in the superposed state? The answer is that the result will be sometimes a and sometimes b, according to a probability law depending on the relative weights of A and B in the superposition process. It will never be different from both a and b.
There is no justification for assuming an intermediate (and absurd) condition of simultaneous live and dead cats. The thing that is "intermediate" is the probability, not the outcome.
The intermediate character of the state formed by superposition thus expresses itself through the probability of a particular result for an observation being intermediate between the corresponding probabilities for the original states,† not through the result itself being intermediate between the corresponding results for the original states.
In this way we see that such a drastic departure from ordinary ideas as the assumption of superposition relationships between the states is possible only on account of the recognition of the importance of the disturbance accompanying an observation and of the consequent indeterminacy in the result of the observation. When an observation is made on any atomic system that is in a given state, in general the result will not be determinate, i.e., if the experiment is repeated several times under identical conditions several different results may be obtained. It is a law of nature, though, that if the experiment is repeated a large number of times, each particular result will be obtained in a definite fraction of the total number of times, so that there is a definite probability of its being obtained. This probability is what the theory sets out to calculate.
Decoherence and the Lack of Macroscopic Superpositions
Despite the claims of decoherence theorists, microscopic superpositions of quantum states do not allow us to "see" a system in two different states. Quantum mechanics simply predicts a significant probability of the system being found in these different states. Thus it is no surprise that we do not see macroscopic "superpositions of live and dead cats" at the same time. What does exist at any given time is the probabilities of the two states (in the macroscopic world) and the probability amplitude of the two states (which can coherently interfere with one another) in the microscopic world.
Decoherence theorists claim that they explain the "mysterious" non-appearance of macroscopic superpositions of states. But quantum mechanics does not predict such states, despite the popular idea of macroscopic superposition of live and dead cats.
Normal | Teacher | Scholar |
c3bc3d2ccde30db1 | Tuesday, November 06, 2012
A Few Comments On Firewalls
I was stupid enough to agree to talk about Firewalls in our strings lunch seminar this Wednesday without having read the paper (or what other people say about them) except for talking to Raphael Busso at the Strings 2012 conference and reading Joe Polichinski's guest post over at the Cosmic Variance blog.
In this post, I would like to share some of my thoughts in my endeavor to decode these papers but probably they are to you even more confusing than the original papers to me. But maybe you can spot my mistakes and correct me in the comment section.
I had a long discussion with Cristiano Germani on these matters for which I am extremely grateful. If this post contains any insight it is his while all errors are for course mine.
What is the problem?
I have a very hard time not to believe in "no drama", i.e. that anything special can happen at an event horizon. First of all, the event horizon is a global concept and its location now does in general depend on what happens in the future (e.g. how much further stuff is thrown in the black hole). So who can it be that the location of a anything like a firewall can depend on future events?
Furthermore, I have never seen such a firewall so far. But I might have already passed an event horizon (who knows what happens at cosmological scales?). Even more, I cannot see a local difference between a true event horizon like that of a black hole and the horizon of an accelerated observer in the case of the Unruh-effect. That the later I am pretty sure I have crossed already many times and I have never seen a firewall.
So I was trying to understand why there should be one. And whenever I tried to flesh out the argument for one they way I understood it it fell apart. So, here are some of my thoughts;
The classical situation
No question, Hawking radiation is a quantum effect (even though it happens at tree level in QFT on curved space-time and is usually derived in a free theory or, equivalently, by studying the propagator). But apart from that not much of the discussion (besides possibly the monogamy of entanglement, see below) seems to be particular quantum. Thus we might gain some mileage by studying classical field theory on the space time of a forming and decaying black hole as given by the causal diagram:
A decaying black hole, image stolen from Sabine Hossenfelder.
Issues of causality a determined by the characteristics of the PDE in question (take for example the wave equation) and those are invariant under conformal transformations even if the field equation is not. So, it is enough to consider the free wave equation on the causal diagram (rather than the space-time related to it by a conformal transformation).
For example we can give initial data on I- (and have good boundary conditions at the r=0 vertical lines). At the dashed horizontal line, the location of the singularity, we just stop evolving (free boundary conditions) and then we can read off outgoing radiation at I+. The only problematic point is the right end of the singularity: This is the end of the black hole evaporation and to me it is not clear how we can here start to impose again some boundary condition at the new r=0 line without affecting what we did earlier. But anyway, this is in a region of strong curvature, where quantum gravity becomes essential and thus what we conclude should better not depend too much on what's going on there as we don't have a good understanding of that regime.
The firewall paper, when it explains the assumptions of complementarity mentions an S-matrix where it tries to formalize the notion of unitary time evolution. But it seems to me, this might be the wrong formalization as the S-matrix is only about asymptotic states and even fails in much simpler situations when there are bound states and the asymptotic Hilbert spaces are not complete. Furthermore, strictly speaking, this (in the sense of LSZ reduction) is not what we can observe: Our detectors are never at spatial infinity, even if CMS is huge, so we should better come up with a more local concept.
Two regions M and N on a Cauchy surface C with their causal shadows
In the case of the wave equation, this can be encoded in terms of domains of dependence: By giving initial data on a region of a Cauchy surface I determine the solution on its causal shadow (in the full quantum theory maybe plus/minus an epsilon for quantum uncertainties). In more detail: If I have two sets of initial data on one Cauchy surface that agree on a local region. Than the two solutions have to agree on the causal shadow of this region no matter what the initial data looks like elsewhere. This encodes that "my time-evolution is good and I do not lose information on the way" in a local fashion.
Some of my confusion comes from talking about states in a way that at least when taken at face value is in conflict with how we understand states both in classical and in better understood quantum (both quantum mechanics and quantum field theory) circumstances.
First of all (and quite trivially), a state is always at one instant of time, that is it lives on a Cauchy surface (or at least a space-like hyper surface, as our space-time might not be globally hyperbolic), not in a region of space-time. Hilbert space, as the space of (pure) states thus also lives on a Cauchy surface (and not for example in the region behind the horizon). If one event is after another (i.e. in its forward light-cone) it does not make sense to say they belong to different tensor factors of the Hilbert (or different Hilbert spaces for that matter).
Furthermore, a state is always a global concept, it is everywhere (in space, but not in time!). There is nothing like "the space of this observer". What you can do of course is restrict a state to a subset of observables (possibly those that are accessible to one observer) by tracing out a tensor factor of the Hilbert space. But in general, the total state cannot be obtained by merging all these restricted states as those lack information about correlations and possible entanglement.
This brings me to the next confusion: There is nothing wrong with states containing correlations of space-like separated observables. This is not even a distinguishing property of quantum physics, as this happens all the time even in classical situations: In the morning, I pick a pair of socks from my drawer without turning on the light and put it on my feet. Thus I do not know which socks I am wearing, in particular, I don't know their color. But as I combined matching socks when they came from the washing machine (as far as this is possible given the tendency of socks going missing) I know by looking at the sock on my right foot what the color of the sock on my left foot is, even when my two feet are spatially separated. Before looking, the state of the color of the socks was a statistical mixture but with non-local correlations. And of course there is nothing quantum about my socks (even if in German "Quanten" is not only "quantum" but also a pejorative word for feet). This would even be true (and still completely trivial) if I had put one of my feet through an event horizon while the other one is still outside. This example shows that locality is not a property that I should demand of states in order to be sure my theory is free of time travel. The important locality property is not in the states, it is in the observables: The measurement of an observable here must not depend of whether or not I apply an operator at a space-like distance. Otherwise that would imply I could send signals faster than the speed of light. But it is the operators, not the states that have to be local (i.e. commute for spatial separation).
If two operators, however, are time-like separated (i.e. one is after the other in its forward light cone), I can of course influence one's measurement by applying the other. But this is not about correlations, this is about influence. In particular, if I write something in my notebook and then throw it across the horizon of a black hole, there is no point in saying that there is a correlation (or even entanglement) between the notebook's state now and after crossing the horizon. It's just the former influencing the later.
Which brings us to entanglement. This must not be confused with correlation, the former being a strict quantum property whereas the other can be both quantum or classical. Unfortunately, you can often see this in popular talks about quantum information where many speakers claim to explain entanglement but in fact only explain correlations. As a hint: For entanglement, one must discuss non-commuting observables (like different components of a the same spin) as otherwise (by the GNS reconstruction theorem) one deals with a commutative operator algebra which always has a classical interpretation (functions on a classical space). And of course, it is entanglement which violates Bell's inequality or shows up in the GHZ experiment. But you need something of this complexity (i.e. involving non-commuting observables) to make use of the quantumness of the situation. And it is only this entanglement (and not correlation) that is "monogamous": You cannot have three systems that are fully entangled for all pairs. You can have three spins that are entangled, but once you only look at two they are no longer entangles (which makes quantum cryptography work as the eavesdropper cannot clone the entanglement that is used for coding).
And once more, entanglement is a property of a state when it is split according to a tensor product decomposition of the Hilbert space. And thus lives on a Cauchy surface. You can say that a state contains entanglement of two regions on a Cauchy surface but it makes no sense to say to regions that are time-like to each other to be entangled (like the notebook before and after crossing the horizon). And therefore monogamy cannot be invoked with respect to also taking the outgoing radiation in as the third player.
Monday, September 24, 2012
The future of blogging (for me) and in particular twitter
As you might have noticed, breaks between two posts here get bigger and bigger. This is mainly due to lack of ideas on my side but also as I am busy with other things (now that with Ella H. kid number two has joined the family but there is also a lot of TMP admin stuff to do).
This is not only true for me writing blog posts but also about reading: Until about a year ago, I was using google reader not to miss a single blog post of a list of about 50 blogs. I have completely stopped this and systematically read blogs only very occasionally (that is other than being directed to a specific post by a link from somewhere else).
What I still do (and more than ever) is use facebook (mainly to stay in contact with not so computer affine friends) and of course twitter (you will know that I am @atdotde there). Twitter seems to be the ideal way to stay current on a lot of matters you are interested in (internet politics for example) while not wasting too much time given the 140 character limit.
Twitter's only problem is that they don't make (a lot of) money. This is no problem for the original inventors of the site (they have sold their shares to investors) but the current owners now seem desperate to change this. From what they say they want to move twitter more to a many to one (marketing) communication platform and force users to see ads they mix among the genuine tweets.
One of the key aspects of the success of twitter was its open API (application programmers interface): Everybody could write programs (and for example I did) that interacted with twitter so for example everybody can choose their favourite client program on any OS to read and write tweets. Since the recent twitter API policy changes this is no longer the case: A client can now have only 100,000 users (or if they already have more can double the number of users), a small number given the allegedly about 4,000,000 million twitter accounts. And there are severe restrictions how you may display tweets to your users (e.g. you are not allowed to use them in any kind of cloud service or mix them with other social media sites, i.e. blend them with Facebook updates). The message that this sends is clearly: "developers go away" (the idea seems to be to force users to use the twitter website and their own clients) and anybody who still invests in twitter developing is betting on a dead horse. But it is not hard to guess that in the long run this will also make the while twitter unattractive to a lot of (if not eventually all) their users.
People (often addicted to twitter feeds) are currently evaluating alternatives (like app.net) but this morning I realized that maybe the twitter managers are not so stupid as they seem to be (or maybe they just want to cash in what they have and don't care if this ruins the service), there is still an alternative that would make twitter profitable and would secure the service in the long run: They could offer to developers to allow them to use the old API guidelines but for a fee (say a few $/Euros per user per month): This would bring in the cash they are apparently looking for while still keeping the healthy ecosystem of many clients and other programs. twitter.com would only be dealing with developers while those would forward the costs to their users and recollect the money by selling their apps (so twitter would not have to collect money from millions of users).
But maybe that's too optimistic and they just want to earn advertising money NOW.
Tuesday, February 07, 2012
Last week, Subir Sachdev came to Munich to give three Arnold Sommerfeld Lectures. I want to take this opportunity to write about a subject that has attracted a lot of attention in recent years, namely applying AdS/CFT techniques to condensed matter systems like trying to write gravity duals for D-wave superconducturs or strange metals (it's surprisingly hard to find a good link for this keyword).
My attitude towards this attempt has somewhat changed from "this will never work" to "it's probably as good as anything else" and in this post I will explain why I think this. I should mention as well that Sean Hartnoll has been essential in this phase transition of my mind.
Let me start by sketching (actually: caricaturing) what I am talking about. You want to understand some material, typically the electrons in a horribly complicated lattice like bismuth strontium calcium copper oxide, or BSCCO. To this end, you come up with a five dimensional theory of gravity coupled to your favorite list of other fields (gauge fields, scalars with potentials, you name it) and place that in an anti-de-Sitter background (or better, for finite temperature, in an asymptotically anti-de-Sitter black hole). Now, you compute solutions with prescribed behavior at infinity and interpret these via Witten's prescription as correlators in your condensed matter theory. For example you can read off Green functions and (frequency dependent) conductivities, densities of state.
How can this ever work, how are you supposed to guess the correct field content (there is no D-brane/string description anywhere near that could help you out) and how can you ever be sure you got it right?
The answer is you cannot but it does not matter. It does not matter as it does not matter elsewhere in condensed matter physics. To clarify this, we have to be clear about what it means for a condensed matter theorist to "understand" a system. Expressed in our high energy lingo, most of the time, the "microscopic theory" is obvious: It is given by the Schrödinger equation for $10^23$ electrons plus as similar number of noclei feeling the Coulomb potential of the nuclei and interacting themselves with Coulomb repulsion. There is nothing more to be known about this. Except that this is obviously not what we want. These are far too many particles to worry about and, what is more important, we are interested in the behavior at much much lower energy scales and longer wave lengths, at which all the details of the lattice structure are smoothed out and we see only the effect of a few electrons close to the Fermi surface. As an estimate, one should compare the typical energy scale of the Coulomb interactions, the binding energies of the electrons to the nucleus (Z times 13.6 eV) or in terms of temperature (where putting in the constants equates 1eV to about 10,000K) to the milli-eV binding energy of Cooper pairs or the typical temperature where superconductivity plays a role.
In the language of the renormalization group, the Coulomb interactions are the UV theory but we want to understand the effective theory that this flows to in the IR. The convenient thing about such effective theories is that they do not have to be unique: All we want is a simple to understand theory (in which we can compute many quantities that we would like to know) that is in the same universality class as the system we started from. Differences in relevant operators do not matter (at least to leading order).
Surprisingly often, one can find free theories or weakly (and thus almost free) theories that can act as the effective theory we are looking for. BCS is a famous example, but Landau's Fermi Liquid Theory is another: There the idea is that you can almost pretend that your fermions are free (and thus you can just add up energies taking into account the Pauli exclusion principle giving you Fermi-surfaces etc) even though your electrons are interacting (remember, there is always the Coulomb interaction around). The only effect the interactions have, is to renormalize the mass, to deform the Fermi surface away from a ball and to change the hight of the jump in the T=0 occupation number. Experience shows that this is an excellent description in more than one dimension (that has the exception of the Luttinger liquid) and can probably traced back to the fact that a four-Fermi-interaction is non-renormalizable and thus invisible in the IR.
Only, it is important to remember that the fields/particles in that effective theories are not really the electrons you started with but just quasi-particles that are build in complicated ways out of the microscopic particles carrying around clouds of other particles and deforming the lattice they move in. But these details don't matter and that is the point.
It is only important to guess the effective theory in the same universality class. You never derive this (or: hardly ever). Following an exact renormalization group flow is just way beyond what is possible. You make a hopefully educated guess (based on symmetries etc) and then check that you get good descriptions. But only the fact, that there are not too many universality classes makes this process of guessing worthwhile.
Free or weakly coupled theories are not the only possible guesses for effective field theories in which one can calculate. 2d conformal field theories are others. And now, AdS-technology gives us another way of writing down correlation functions just as Feynman-rules give us correlation functions for weakly coupled theories. And that is all one needs: Correlation functions of effective field theory candidates. Once you have those you can check if you are lucky and get evidence that you are in the correct universality class. You don't have to derive the IR theory from the UV. You never do this. You always just guess. And often enough this is good enough to work. And strictly speaking, you never know if your next measurement shows deviations from what you thought would be an effective theory for your system.
In a sense, it is like the mystery that chemistry works: The periodic table somehow pretends that the electrons in atoms are arranged in states that group together like for the hydrogen atom, you get the same n,l,m,s quantum numbers and the shells are roughly the same (although with some overlap encoded in the Aufbau principle) as for hydrogen. This pretends that the only effect of the electron-electron Coulomb potential is to shield the charge of the nucleus and every electron sees effectively a hydrogen like atom (although not necessarily with integer charge Z) and Pauli's exclusion principle regulates that no state is filled more than once. One could have thought that the effect of n-1 electrons on the last is much bigger, after all, they have a total charge that is almost the same of the nucleous, but it seems, the last electron only sees the nucleus with a 1/r potential although with reduced charge.
If you like, the only thing one should might worry about is that the Witten prescription to obtain boundary correlators from bulk configurations really gives you valid n-point functions of a quantum theory (if you feel sufficient mathematical masochism for example in the sense of Wightman) but you don't want to show that it is the quantum field theory corresponding to the material you started with.
Friday, February 03, 2012
Not much to say, but I would like to mention that, finally, we have been able two finalize two write-ups that I have announced here in the past:
First, there are the notes of a block course that I have in the summer on how to fix some mathematicla lose ends in QFT (notes written by our students Mario Flory and Constantin Sluka):
How I Learned to Stop Worrying and Love QFT
Lecture notes of a block course explaining why quantum field theory might be in a better mathematical state than one gets the impression from the typical introduction to the topic. It is explained how to make sense of a perturbative expansion that fails to converge and how to express Feynman loop integrals and their renormalization using the language of distribtions rather than divergent, ill-defined integrals.
Then there are the contributions to a seminar on "Foundations of Quantum Mechanics" (including an introduction by your's truly) that I taught a year ago. From the contents:
1. C*-algebras, GNS-construction, states, (Sebastian)
2. Stone-von-Neumann Theorem (Dennis)
3. Pure Operations, POVMs (Mario)
4. Measurement Problem (Anupam, David)
5. EPR and Entanglement, Bell's Theorem, Kochen–Specker theorem (Isabel, Matthias)
6. Decoherence (Kostas, Cosmas)
7. Pointer Basis (Greeks again)
8. Consistent Histories (Hao)
9. Many Worlds (Max)
10. Bohmian Interpretation (Henry, Franz)
See also the seminar's wiki page.
Have fun!
Wednesday, November 30, 2011
More than one nature for natural units
Hey blog, long time no see! Bee has put together a nice video on natural units. There are one or two aspects that I would put slightly differently and rather than writing a comment I thought it might better be to write a post myself.
The first thing is that strictly speaking, there is not the natural unit system, it depends on the problem you are interested in. For example, if you are interested in atoms, the typical mass is that of the electron, so you will likely be interested in masses as multiples of $m_e$. Then, interactions are Coulomb and you will want to express charges as multiples of the electron charge $e$. Finally, quantum mechanics is your relevant framework, so it is natural to express actions in multiples of $\hbar$. Then a quick calculation shows that this unit system of setting $m_e=e=\hbar=1$ implies that distances are dimensionless and the distance $r=1$ happens to be the Bohr radius that sets the natural scale for the size of atoms. Naturalness here lets you guess the size of an atom from just identifying the electron mass, the electric charge and quantum mechanics to be the relevant ingredients.
When you are doing high energy particle physics quantum physics and special relativity are relevant and thus it is convenient to use units in which $\hbar=c=1$ which is Bee's example. In this unit system, masses and energy have inverse units of length.
If you are a classical relativist contemplating solutions of Einstein's equations, then quantum mechanics (and thus $\hbar$) does not concern you but Newton's constant $G$ does. These people thus use units with $c=G=1$. Confusingly, in this unit system, masses have units of length (and not inverse length as above). In particular, the length scale of a black hole with mass M, the Schwarzschild radius is $R=2M$ (the 2 being there to spice up life a bit). So you have to be a bit careful when you convert energies to lengths, you have to identify if you are in a quantum field theory or in a classical gravity situation.
My other remark is that it is conventional how many independent units you have. Many people think, that in mechanics you need three (e.g. length, mass and time, meters, kilograms and seconds in the SI system) and a fourth if you include thermodynamics (like temperature measured in Kelvins) and a fifth if there is electromagnetism (like charge or alternatively current, Amperes in SI). But these numbers are just what we are used to. This number can change when we change our understanding of a relation from "physical law" to "conversion factor". The price is a dimensionful constant: In the SI system, it is a law that in equipartition of energy $E=\frac 12k_bT$ and Coulombs law equates a mechanical force to an electrostatic expression via $F=\frac{qQ} 1{4\pi\epsilon_0r}$ and it is a law that light moves at a speed $c=s/t$.
But alternatively, we could use these laws to define what we actually mean by Temperature (then measured in units of energy), charge (effectively setting $4\pi\epsilon_0$ to unity and thereby expressing charge in mechanical units) and length (expressing a distance by the time light need to traverse it). This eliminates a law and a unit. What remains of the law is only the fact that one can do that without reference to circumstances, that a distance from here to Paris does not depend for example on the time of the year (and thus on the direction of the velocity of the earth on its orbit around the sun and thus potentially relative to the ether). If the speed of light would not be constant and we would try to measure distances by the time it takes light to traverse them then distances would suddenly vary when we would say that the speed of light varies.
There is even an example that you can increase the number of units to more than what we are used to (although a bit artificial): It is not god given what kinds of things we consider 'of the same type' and thus possible to be measured in the same units. We are used to measuring all distances in the same unit (like for example meters) or derived units like kilometers or feet (with a fixed numerical conversion factor). But in nautical situations it is common to treat horizontal distance to be entirely different from vertical distances. Horizontal distances like the way to the next island you would measure in nautical miles while vertical distances (like the depth of water) you measure in fathoms. It is then a natural law that the ratio between a given depth and a given horizontal distance is constant over time and there is dimensionful constant (fathoms per mile) of nature that allows to compute a horizontal distance from a depth.
Friday, June 03, 2011
Bitcoin explained
As me, you might have recently heared about "Bitcoin", the internet currency that tries to be safe without a central authority like a bank or a credit card company that say which transactions are legitimate. So far, all mentions in blogs, podcasts or the press that I have seen had in common that they did not say how it works, what are the mechanisms that make sure Bitcoins operate like money. So I looked it up and this is what I found:
Bitcoin uses to cryptographic primitives: hashes and public key encryption. I case you don't know what these are: A hash is a function that reads in a string (or file or number, those are technically all the same) and produces some sort of checksum. The important properties are that everybody can do this computation (with some small amount of effort) and produce the same checksum. On the other hand, it is "random" in the sense that you cannot work backwards, i.e. if you only know the checksum you effectively have no idea about the original string. It is computationally hard to find a string for a given checksum (more or less the best you can do is guess random strings, compute their checksums until you succeed). A related hard problem is to find such a string with prescribed first $N$ characters.
This can be used as a proof of effort: You can pose the problem to find a string (possibly with prescribed first characters) such that the first $M$ digits of the checksum have a prescribed value. In binary notation you could for example you could ask for $M$ zeros. Then on the average you have to make $2^M$ guesses for the string until you succeed. Presenting such a string then proves you have invested an effort of $O(2^M)$. The nice thing is that this effort is additive: You can start your string with the characters "The message '....' has checksum 000000xxxxxxxxxxx" and continue it such that the checksum of the total string starts with many zeros. That proves that in addition to the zeros your new string has, somebody has already spent some work on the string I wrote as dots. Common hash functions are SHA-1 (and older and not as reliable: MD5).
The second cryptographic primitive is public key encryption. Here you have two keys $A$, the public key which you tell everybody about and $B$ your secret key (you tell nobody about). These have the properties that you can use one of the keys to "encrypt" a string and then the other key can be used to recover the original string. In particular, you need to know the private key to produce a message that can be decrypted with the public key. This is called a "signature": You have a message $M$ and encrypt it using $B$. Let us call the result $B(M)$. Then you can show $A$ and $M$ and $B(M)$ to somebody to prove that you are in possession of $B$ without revealing $B$ since that person can verify that $B(M)$ can be decrypted using $A$. Here, an example is the RSA algorithm.
Now to Bitcoin. Let's go through the list of features that you want your money to have. The first is that you want to be able to prove that your coins belong to you. This is done by making coins files that contain the public key $A$ of their owner. Then, as explained in the previous paragraph you can prove that you are the legitimate owner of the private key belonging to that coin and thus you are its owner. Note that you can have as many public-private key pairs as you like possibly one for every coin. It is just there to equate knowing of a secret (key) to owning the coin.
Second you want to be able to transfer ownership of the coin. Let us assume that the recipient has the public key $A'$. Then you transfer the coin (which already contains your public key $A$) by appending the string "This coin is transfered to the owner of the secrete key to the public key $A'$". Then you sign the whole thing with your private key $B$. The recipient can now prove that the coin was transferred to him as the coin contains both your public key (from before) and your statement of the transfer (which only you, knowing $B$ can have authorized. This can be checked by everybody by checking the signature). So the recipient can prove you owned the coin and agreed to transfer it to him.
The last property is that once you transfered the coin to somebody else you cannot give it to a third person as you do not own it anymore. Or put differently: If you try to transfer a coin a second time that should not work and the recipient should not accept it or at least it should be illegitimate.
But what happens if two people claim they own the same coin, how can we resolve this conflict? This is done via a public time-line that is kept collaboratively between all participants. Once you receive a coin you want to be able to prove later that you already owned it at a specific time (in particular at the time when somebody else claims he received it).
This is done as follows: You compute the hash function of the transfer (or the coin after transfer, see a,bove including the signature of the previous owner of the coin that he has given it to you) and add it to the time line. This means you take the hash value of the time line so far, at the hash of the transfer and compute new hash. This whole package you then send to your network peers and ask them to also include your transfer in their version of the time line.
So the time line is a record of all the transfers that have happened in the past and each participant in the network keeps his own copy of it.
There could still be a conflict when two incompatible time lines are around. Which is the correct one that should be trusted? One could have a majority vote amongst the participants but (as everybody knows from internet discussions) nothing is easier than to come up with a large number of sock puppets that swing any poll. Here comes the proof of work that I mentioned above in relation to hash functions: There is a field in the time line that can be filled with anything in the attempt to construct something that has a hash with as many zeros as possible. Remember, producing $N$ leading zeros amounts to $O(2^N)$ work. Having a time line with many zeros demonstrates that were willing to put a lot of effort into this time line. But as explained above, this proof of effort is additive and all the participants in the network continuously try to add zeros to their time line hashes. But if they share and combine their time lines often enough such that they stay coherent they are (due to additivity) all working on fining zeros on the same time line. So rather than everybody working for themselves everybody works together as long as their time lines stay coherent. And going back through a time line it is easy to see how much zero finding work has been but in. Thus in the case of conflicting time lines one simply takes that that contains more zero finding work. If you wanted to establish an alternative time line (possibly one where at some point in time you did not transfer a coin but rather kept it to yourself so you could give it to somebody else later) to establish it you would have to outperform all other computers in the network that are all busy working on computing zeros for the other, correct, time line.
Of course, if you want to receive a bitcoin you should make sure that in the generally accepted time line that same coin has not already been given to somebody else. This is why the transfers take some time: You want to wait for a bit that the information that the coin has been transferred to you has been significantly spread on the network and included in the collective time line that it cannot be reversed anymore.
There are some finer points like how subdividing coins (currently worth about 13 dollars) is done and how new coins can be created (again with a lot CPU work) but I think they are not as essential in case you want to understand the technical basis of bitcoin before you but real money in.
BTW, if you liked this exposition (or some other here) feel free to transfer me some bitcoins (or fractions of it). My receiving address is
Thursday, March 24, 2011
Mixed superrationality does not beat pure in prisoner's dilemma
|
2b52c17c5cd12f83 | The Official String Theory Web Site:--> Basics --> Why strings? (basic / advanced)
Why did strings enter the story?
Once special relativity was on firm observational and theoretical footing, it was appreciated that the Schrödinger equation of quantum mechanics was not Lorentz invariant, therefore quantum mechanics as it was so successfully developed in the 1920s was not a reliable description of nature when the system contained particles that would move at or near the speed of light.
The problem is that the Schrödinger equation is first order in time derivatives but second order in spatial derivatives. The Klein-Gordon equation is second order in both time and space and has solutions representing particles with spin 0:
Klein-Gordon equation
Dirac came up with "square root" of Klein-Gordon equation using matrices called "gamma matrices", and the solutions turned out to be particles of spin 1/2:
Dirac equation
where the matrix hmn is the metric of flat spacetime. But the problem with relativistic quantum mechanics is that the solutions of the Dirac and Klein-Gordon equation have instabilities that turn out to represent the creation and annihilation of virtual particles from essentially empty space.
Further understanding led to the development of relativistic quantum field theory, beginning with quantum electrodynamics, or QED for short, pioneered by Feynman, Schwinger and Tomonaga in the 1940s. In quantum field theory, the behaviors and properties of elementary particles can calculated using a series of diagrams, called Feynman diagrams, that properly account for the creation and annihilation of virtual particles.
The set of the Feynman diagrams for the scattering of two electrons looks like
Moller scattering 1 + Moller scattering 2+Moller scattering 3+ ...
The straight black lines represent electrons. The green wavy line represents a photon, or in classical terms, the electromagnetic field between the two electrons that makes them repel one another. Each small black loop represents a photon creating an electron and a positron, which then annihilate one another and produce a photon, in what is called a virtual process. The full scattering amplitude is the sum of all contributions from all possible loops of photons, electrons, positrons, and other available particles.
The quantum loop calculation comes with a very big problem. In order to properly account for all virtual processes in the loops, one must integrate over all possible values of momentum, from zero momentum to infinite momentum. But these loop integrals for an particle of spin J in D dimensions take the approximate form
Loop integral
If the quantity 4J + D - 8 is negative, then the integral behaves fine for infinite momentum (or zero wavelength, by the de Broglie relation.) If this quantity is zero or positive, then the integral takes an infinite value, and the whole theory threatens to make no sense because the calculations just give infinite answers.
The world that we see has D=4, and the photon has spin J=1. So for the case of electron-electron scattering, these loop integrals can still take infinite values. But the integrals go to infinity very slowly, like the logarithm of momentum, and it turns out that in this case, the theory can be renormalized so that the infinities can be absorbed into a redefinition of a small number of parameters in the theory, such as the mass and charge of the electron.
Quantum electrodynamics was a renormalizable theory, and by the 19402, this was regarded as a solved relativistic quantum theory. But the other known particle forces -- the weak nuclear force that makes radioactivity, the strong nuclear force that hold neurons and protons together, and the gravitational force that holds us on the earth -- weren't so quickly conquered by theoretical physics.
In the 1960s, particle physicists reached towards something called a dual resonance model in an attempt to describe the strong nuclear force. The dual model was never that successful at describing particles, but it was understood by 1970 that the dual models were actually quantum theories of relativistic vibrating strings and displayed very intriguing mathematical behavior. Dual models came to be called string theory as a result.
But in 1971, a new type of quantum field theory came on the scene that explained the weak nuclear force by uniting it with electromagnetism into electroweak theory, and it was shown to be renormalizable. Then similar wisdom was applied to the strong nuclear force to yield quantum chromodynamics, or QCD, and this theory was also renormalizable.
Which left one force -- gravity -- that couldn't be turned into a renormalizable field theory no matter how hard anyone tried. One big problem was that classical gravitational waves carry spin J=2, so one should assume that a graviton, the quantum particle that carries the gravitational force, has spin J=2. But for J=2, 4 J - 8 + D = D, and so for D=4, the loop integral for the gravitational force would become infinite like the fourth power of momentum, as the momentum in the loop became infinite.
And that was just hard cheese for particle physicists, and for many years the best people worked on quantum gravity to no avail.
But the string theory that was once proposed for the strong interactions contained a massless particle with spin J=2.
In 1974 the question finally was asked: could string theory be a theory of quantum gravity?
The possible advantage of string theory is that the analog of a Feynman diagram in string theory is a two-dimensional smooth surface, and the loop integrals over such a smooth surface lack the zero-distance, infinite momentum problems of the integrals over particle loops.
In string theory infinite momentum does not even mean zero distance, because for strings, the relationship between distance and momentum is roughly like
String uncertainty principle
The parameter a' (pronounced alpha prime) is related to the string tension, the fundamental parameter of string theory, by the relation
String tension
The above relation implies a minimum observable length for a quantum string theory of
The zero-distance behavior which is so problematic in quantum field theory becomes irrelevant in string theories, and this makes string theory very attractive as a theory of quantum gravity.
If string theory is a theory of quantum gravity, then this minimum length scale should be at least the size of the Planck length, which is the length scale made by the combination of Newton's constant, the speed of light and Planck's constant
although as we shall see later, the question of length scales in string theory is complicated by string duality, which can relate two theories with seemingly different length scales.
<< Previous
Next >>
Pointlike interactions present problems
Particle physics interactions can occur at zero distance -- but Einstein's theory of gravity makes no sense at zero distance.
Black Holes
A string vertex is not a point
String interactions don't occur at one point but are spread out in a way that leads to more sensible quantum behavior.
|
319d1c6e8baba906 | Dismiss Notice
Join Physics Forums Today!
A new nonlinear Schrodinger equation
1. Nov 19, 2005 #1
At this link:
is a recent paper by Carlos Castro on a new nonlinear Schrodinger equation--for those that work in this area.
2. jcsd
3. Nov 20, 2005 #2
I have red your paper and, if you give me the permission, I shall give you now my sensations about it.
1) Firstly, I got a surprise with the datum: January 2006, because we are in November 2005;
2) It seems to be a new review of physics because it is volume 1 and there is no page. These two first points are acting on me like a warning signal: what is this? Where does it come from? Certainly a quantum experiment (smile), I mean a work coming from the future … (smile)
3) The idea of fractal trajectories is (at least in my mind) a great one because it could be an elegant way of reconciliation between a classical and a quantum point of view; and (personal remark) is in someway what I am personally working about when I speak from photons springing from a piece of geodesic to another one (etgb28.pdf); the only point was: I didn’t knew that I was dealing with fractal trajectories as Mister Jourdain in Molières work (French writer in the 1700…) didn’t knew that he was doing “prose” when he was writing and speaking.
4) I remark that the Schrödinger equation is obtained with a classical formulation of the Newton’s Law (which is avoiding a relativistic approach before any calculation has been done) if one considers the left part of equation (7) page 2, but with a special (due to Nottale) formulation of the acceleration if one observes the middle part of the same equation; so, it is not really the classical Newton’s Law but an extension of it including a “complex-time” derivative operator (page 2 relation3). Why not?
5) One point is not clear for me; does the introduction of a complex number D (instead of the use of a real one at the beginning of the article) allows an extension of the work of Nottale and all to non-flat universe?
6) One other point is not clear: is not the middle term page 6 in (40 on the left hand) and in (42; the integrand) equal to zero if the particle-wave following a geodesic (as required by GR in a 4D space)?
Thank you for explanations.
4. Nov 20, 2005 #3
Yes, I should have also included this link to the journal table of contents. http://www.geocities.com/ptep_online/2006.html
This is an "online" journal, thus papers are submitted and approved ahead of time of final publish, not uncommon in this age of internet access to journals.
As to your detailed questions on the physics, I have no answers, you would need to contact the prime author, Dr. Castro--there is an email address on the paper.
Similar Discussions: A new nonlinear Schrodinger equation
1. Schrodinger equation? (Replies: 8)
2. Schrodinger equation (Replies: 1)
3. Schrodinger equation (Replies: 4) |
eda7d60f5fa01d4e | Hidden variable theory
From Wikipedia, the free encyclopedia
(Redirected from God does not play dice)
Jump to: navigation, search
This article is about a class of mechanics theories. For hidden variables in economics, see Latent variable. For other uses, see Hidden variables (disambiguation).
Historically, in physics, hidden variable theories were espoused by some physicists who argued that the state of a physical system, as formulated by quantum mechanics, does not give a complete description for the system; i.e., that quantum mechanics is ultimately incomplete, and that a complete theory would provide descriptive categories to account for all observable behavior and thus avoid any indeterminism. The existence of indeterminacy for some measurements is a characteristic of prevalent interpretations of quantum mechanics; moreover, bounds for indeterminacy can be expressed in a quantitative form by the Heisenberg uncertainty principle.
Albert Einstein, the most famous proponent of hidden variables, objected to the fundamentally probabilistic nature of quantum mechanics,[1] and famously declared "I am convinced God does not play dice".[2] Einstein, Podolsky, and Rosen argued that "elements of reality" (hidden variables) must be added to quantum mechanics to explain entanglement without action at a distance.[3][4] Later, Bell's theorem suggested that local hidden variables of certain types are impossible, or that they evolve non-locally. A famous non-local theory is De Broglie–Bohm theory.
Under the Copenhagen interpretation, quantum mechanics is non-deterministic, meaning that it generally does not predict the outcome of any measurement with certainty. Instead, it indicates what the probabilities of the outcomes are, with the indeterminism of observable quantities constrained by the uncertainty principle. The question arises whether there might be some deeper reality hidden beneath quantum mechanics, to be described by a more fundamental theory that can always predict the outcome of each measurement with certainty: if the exact properties of every subatomic particle were known the entire system could be modeled exactly using deterministic physics similar to classical physics.
In other words, it is conceivable that the standard interpretation of quantum mechanics is an incomplete description of nature. The designation of variables as underlying "hidden" variables depends on the level of physical description (so, for example, "if a gas is described in terms of temperature, pressure, and volume, then the velocities of the individual atoms in the gas would be hidden variables"[5]). Physicists supporting De Broglie–Bohm theory maintain that underlying the observed probabilistic nature of the universe is a deterministic objective foundation/property—the hidden variable. Others, however, believe that there is no deeper deterministic reality in quantum mechanics.[citation needed]
A lack of a kind of realism (understood here as asserting independent existence and evolution of physical quantities, such as position or momentum, without the process of measurement) is crucial in the Copenhagen interpretation. Realistic interpretations (which were already incorporated, to an extent, into the physics of Feynman[6]), on the other hand, assume that particles have certain trajectories. Under such view, these trajectories will almost always be continuous, which follows both from the finitude of the perceived speed of light ("leaps" should rather be precluded) and, more importantly, from the principle of least action, as deduced in quantum physics by Dirac. But continuous movement, in accordance with the mathematical definition, implies deterministic movement for a range of time arguments;[7] and thus realism is, under modern physics, one more reason for seeking (at least certain limited) determinism and thus a hidden variable theory (especially that such theory exists: see De Broglie–Bohm interpretation).
Although determinism was initially a major motivation for physicists looking for hidden variable theories, non-deterministic theories trying to explain what the supposed reality underlying the quantum mechanics formalism looks like are also considered hidden variable theories; for example Edward Nelson's stochastic mechanics.
"God does not play dice"[edit]
In June 1926, Max Born published a paper, "Zur Quantenmechanik der Stoßvorgänge" ("Quantum Mechanics of Collision Phenomena") in the scientific journal Zeitschrift für Physik, in which he was the first to clearly enunciate the probabilistic interpretation of the quantum wavefunction, which had been introduced by Erwin Schrödinger earlier in the year. Born concluded the paper as follows:
Here the whole problem of determinism comes up. From the standpoint of our quantum mechanics there is no quantity which in any individual case causally fixes the consequence of the collision; but also experimentally we have so far no reason to believe that there are some inner properties of the atom which conditions a definite outcome for the collision. Ought we to hope later to discover such properties ... and determine them in individual cases? Or ought we to believe that the agreement of theory and experiment—as to the impossibility of prescribing conditions for a causal evolution—is a pre-established harmony founded on the nonexistence of such conditions? I myself am inclined to give up determinism in the world of atoms. But that is a philosophical question for which physical arguments alone are not decisive.
Born's interpretation of the wavefunction was criticized by Schrödinger, who had previously attempted to interpret it in real physical terms, but Albert Einstein's response became one of the earliest and most famous assertions that quantum mechanics is incomplete:
Niels Bohr reportedly replied to Einstein's later expression of this sentiment by advising him to "stop telling God what to do."[10]
Early attempts at hidden variable theories[edit]
Shortly after making his famous "God does not play dice" comment, Einstein attempted to formulate a deterministic counterproposal to quantum mechanics, presenting a paper at a meeting of the Academy of Sciences in Berlin, on 5 May 1927, titled "Bestimmt Schrödinger's Wellenmechanik die Bewegung eines Systems vollständig oder nur im Sinne der Statistik?" ("Does Schrödinger's wave mechanics determine the motion of a system completely or only in the statistical sense?").[11] However, as the paper was being prepared for publication in the academy's journal, Einstein decided to withdraw it, possibly because he discovered that, contrary to his intention, it implied non-separability of entangled systems, which he regarded as absurd.[12]
At the Fifth Solvay Congress, held in Belgium in October 1927 and attended by all the major theoretical physicists of the era, Louis de Broglie presented his own version of a deterministic hidden-variable theory, apparently unaware of Einstein's aborted attempt earlier in the year. In his theory, every particle had an associated, hidden "pilot wave" which served to guide its trajectory through space. The theory was subject to criticism at the Congress, particularly by Wolfgang Pauli, which de Broglie did not adequately answer. De Broglie abandoned the theory shortly thereafter.
Declaration of completeness of quantum mechanics[edit]
Also at the Fifth Solvay Congress, Max Born and Werner Heisenberg made a presentation summarizing the recent tremendous theoretical development of the subject. At the conclusion of the presentation, they declared:
[W]hile we consider ... a quantum mechanical treatment of the electromagnetic field ... as not yet finished, we consider quantum mechanics to be a closed theory, whose fundamental physical and mathematical assumptions are no longer susceptible of any modification.... On the question of the 'validity of the law of causality' we have this opinion: as long as one takes into account only experiments that lie in the domain of our currently acquired physical and quantum mechanical experience, the assumption of indeterminism in principle, here taken as fundamental, agrees with experience.[13]
Bohr–Einstein debates[edit]
Although there is no record of Einstein responding to Born and Heisenberg during the technical sessions of the Fifth Solvay Congress, he did challenge the completeness of quantum mechanics during informal discussions over meals, presenting a thought experiment intended to demonstrate that quantum mechanics could not be entirely correct. He did likewise during the Sixth Solvay Congress held in 1930. Both times, Niels Bohr is generally considered to have successfully defended quantum mechanics by discovering errors in Einstein's arguments.
EPR paradox[edit]
Main article: EPR paradox
The debates between Bohr and Einstein essentially concluded in 1935, when Einstein finally expressed what is widely considered his best argument against the completeness of quantum mechanics. Einstein, Podolsky, and Rosen had proposed their definition of a "complete" description as one that uniquely determines the values of all its measurable properties. Einstein later summarized their argument as follows:
Consider a mechanical system consisting of two partial systems A and B which interact with each other only during a limited time. Let the ψ function [i.e., wavefunction ] before their interaction be given. Then the Schrödinger equation will furnish the ψ function after the interaction has taken place. Let us now determine the physical state of the partial system A as completely as possible by measurements. Then quantum mechanics allows us to determine the ψ function of the partial system B from the measurements made, and from the ψ function of the total system. This determination, however, gives a result which depends upon which of the physical quantities (observables) of A have been measured (for instance, coordinates or momenta). Since there can be only one physical state of B after the interaction which cannot reasonably be considered to depend on the particular measurement we perform on the system A separated from B it may be concluded that the ψ function is not unambiguously coordinated to the physical state. This coordination of several ψ functions to the same physical state of system B shows again that the ψ function cannot be interpreted as a (complete) description of a physical state of a single system.[14]
Bohr answered Einstein's challenge as follows:
[The argument of] Einstein, Podolsky and Rosen contains an ambiguity as regards the meaning of the expression "without in any way disturbing a system." ... [E]ven at this stage [i.e., the measurement of, for example, a particle that is part of an entangled pair], there is essentially the question of an influence on the very conditions which define the possible types of predictions regarding the future behavior of the system. Since these conditions constitute an inherent element of the description of any phenomenon to which the term "physical reality" can be properly attached, we see that the argumentation of the mentioned authors does not justify their conclusion that quantum-mechanical description is essentially incomplete."[15]
Bohr is here choosing to define a "physical reality" as limited to a phenomenon that is immediately observable by an arbitrarily chosen and explicitly specified technique, using his own special definition of the term 'phenomenon'. He wrote in 1948:
As a more appropriate way of expression, one may strongly advocate limitation of the use of the word phenomenon to refer exclusively to observations obtained under specified circumstances, including an account of the whole experiment."[16][17]
This was, of course, in conflict with the definition used by the EPR paper, as follows:
Bell's theorem[edit]
Main article: Bell's theorem
In 1964, John Bell showed through his famous theorem that if local hidden variables exist, certain experiments could be performed involving quantum entanglement where the result would satisfy a Bell inequality. If, on the other hand, statistical correlations resulting from quantum entanglement could not be explained by local hidden variables, the Bell inequality would be violated. Another no-go theorem concerning hidden variable theories is the Kochen–Specker theorem.
Physicists such as Alain Aspect and Paul Kwiat have performed experiments that have found violations of these inequalities up to 242 standard deviations[18] (excellent scientific certainty). This rules out local hidden variable theories, but does not rule out non-local ones. Theoretically, there could be experimental problems that affect the validity of the experimental findings.
Gerard 't Hooft has disputed the validity of Bell's theorem on the basis of the superdeterminism loophole and proposed some ideas to construct local deterministic models.[19]
Bohm's hidden variable theory[edit]
Assuming the validity of Bell's theorem, any deterministic hidden-variable theory that is consistent with quantum mechanics would have to be non-local, maintaining the existence of instantaneous or faster-than-light relations (correlations) between physically separated entities. The currently best-known hidden-variable theory, the "causal" interpretation of the physicist and philosopher David Bohm, originally published in 1952, is a non-local hidden variable theory. Bohm unknowingly rediscovered (and extended) the idea that Louis de Broglie had proposed in 1927 (and abandoned) – hence this theory is commonly called "de Broglie-Bohm theory". Bohm posited both the quantum particle, e.g. an electron, and a hidden 'guiding wave' that governs its motion. Thus, in this theory electrons are quite clearly particles—when a double-slit experiment is performed, its trajectory goes through one slit rather than the other. Also, the slit passed through is not random but is governed by the (hidden) guiding wave, resulting in the wave pattern that is observed.
Such a view does not contradict the idea of local events that is used in both classical atomism and relativity theory as Bohm's theory (and quantum mechanics) are still locally causal (that is, information travel is still restricted to the speed of light) but allow nonlocal correlations. It points to a view of a more holistic, mutually interpenetrating and interacting world. Indeed, Bohm himself stressed the holistic aspect of quantum theory in his later years, when he became interested in the ideas of Jiddu Krishnamurti.
In Bohm's interpretation, the (nonlocal) quantum potential constitutes an implicate (hidden) order which organizes a particle, and which may itself be the result of yet a further implicate order: a superimplicate order which organizes a field.[20] Nowadays Bohm's theory is considered to be one of many interpretations of quantum mechanics which give a realist interpretation, and not merely a positivistic one, to quantum-mechanical calculations. Some consider it the simplest theory to explain quantum phenomena.[21] Nevertheless, it is a hidden variable theory, and necessarily so.[22] The major reference for Bohm's theory today is his book with Basil Hiley, published posthumously.[23]
A possible weakness of Bohm's theory is that some (including Einstein, Pauli, and Heisenberg) feel that it looks contrived.[24] (Indeed, Bohm thought this of his original formulation of the theory.[25]) It was deliberately designed to give predictions that are in all details identical to conventional quantum mechanics.[25] Bohm's original aim was not to make a serious counterproposal but simply to demonstrate that hidden-variable theories are indeed possible.[25] (It thus provided a supposed counterexample to the famous proof by John von Neumann that was generally believed to demonstrate that no deterministic theory reproducing the statistical predictions of quantum mechanics is possible.) Bohm said he considered his theory to be unacceptable as a physical theory due to the guiding wave's existence in an abstract multi-dimensional configuration space, rather than three-dimensional space.[25] His hope was that the theory would lead to new insights and experiments that would lead ultimately to an acceptable one;[25] his aim was not to set out a deterministic, mechanical viewpoint, but rather to show that it was possible to attribute properties to an underlying reality, in contrast to the conventional approach to quantum mechanics.[26]
Recent developments[edit]
In August 2011, Roger Colbeck and Renato Renner published a proof that any extension of quantum mechanical theory, whether using hidden variables or otherwise, cannot provide a more accurate prediction of outcomes, assuming that observers can freely choose the measurement settings.[27] Colbeck and Renner write: "In the present work, we have ... excluded the possibility that any extension of quantum theory (not necessarily in the form of local hidden variables) can help predict the outcomes of any measurement on any quantum state. In this sense, we show the following: under the assumption that measurement settings can be chosen freely, quantum theory really is complete".
In January 2013, GianCarlo Ghirardi and Raffaele Romano described a model which, "under a different free choice assumption [...] violates [the statement by Colbeck and Renner] for almost all states of a bipartite two-level system, in a possibly experimentally testable way".[28]
See also[edit]
1. ^ The Born-Einstein letters: correspondence between Albert Einstein and Max and Hedwig Born from 1916–1955, with commentaries by Max Born. Macmillan. 1971. p. 158. , (Private letter from Einstein to Max Born, 3 March 1947: "I admit, of course, that there is a considerable amount of validity in the statistical approach which you were the first to recognize clearly as necessary given the framework of the existing formalism. I cannot seriously believe in it because the theory cannot be reconciled with the idea that physics should represent a reality in time and space, free from spooky actions at a distance.... I am quite convinced that someone will eventually come up with a theory whose objects, connected by laws, are not probabilities but considered facts, as used to be taken for granted until quite recently".)
2. ^ private letter to Max Born, 4 December 1926, Albert Einstein Archives reel 8, item 180
3. ^ a b Einstein, A.; Podolsky, B.; Rosen, N. (1935). "Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?". Physical Review. 47 (10): 777–780. Bibcode:1935PhRv...47..777E. doi:10.1103/PhysRev.47.777.
4. ^ "The debate whether Quantum Mechanics is a complete theory and probabilities have a non-epistemic character (i.e. nature is intrinsically probabilistic) or whether it is a statistical approximation of a deterministic theory and probabilities are due to our ignorance of some parameters (i.e. they are epistemic) dates to the beginning of the theory itself". See: arXiv:quant-ph/0701071v1 12 Jan 2007
5. ^ Senechal M, Cronin J (2001). "Social influences on quantum mechanics?-I". The Mathematical Intelligencer. 23 (4): 15–17. doi:10.1007/BF03024596.
6. ^ Individual diagrams are often split into several parts, which may occur beyond observation; only the diagram as a whole describes an observed event.
7. ^ For every subset of points within a range, a value for every argument from the subset will be determined by the points in the neighbourhood. Thus, as a whole, the evolution in time can be described (for a specific time interval) as a function, e.g. a linear one or an arc. See Continuous function#Definition in terms of limits of functions
8. ^ The Born–Einstein letters: correspondence between Albert Einstein and Max and Hedwig Born from 1916–1955, with commentaries by Max Born. Macmillan. 1971. p. 91.
9. ^ Cache of the Einstein section of the American Museum of Natural History
10. ^ This is a common paraphrasing. Bohr recollected his reply to Einstein at the 1927 Solvay Congress in his essay "Discussion with Einstein on Epistemological Problems in Atomic Physics", in Albert Einstein, Philosopher–Scientist, ed. Paul Arthur Shilpp, Harper, 1949, p. 211: "...in spite of all divergencies of approach and opinion, a most humorous spirit animated the discussions. On his side, Einstein mockingly asked us whether we could really believe that the providential authorities took recourse to dice-playing ("ob der liebe Gott würfelt"), to which I replied by pointing at the great caution, already called for by ancient thinkers, in ascribing attributes to Providence in everyday language." Werner Heisenberg, who also attended the congress, recalled the exchange in Encounters with Einstein, Princeton University Press, 1983, p. 117,: "But he [Einstein] still stood by his watchword, which he clothed in the words: 'God does not play at dice.' To which Bohr could only answer: 'But still, it cannot be for us to tell God, how he is to run the world.'"
11. ^ Albert Einstein Archives reel 2, item 100
13. ^ Max Born and Werner Heisenberg, "Quantum mechanics", proceedings of the Fifth Solvay Congress.
14. ^ Einstein A (1936). "Physics and Reality". Journal of the Franklin Institute. 221.
15. ^ Bohr N (1935). "Can Quantum-Mechanical Description of Physical Reality be Considered Complete?". Physical Review. 48: 700. Bibcode:1935PhRv...48..696B. doi:10.1103/physrev.48.696.
16. ^ Bohr N. (1948). "On the notions of causality and complementarity". Dialectica. 2: 312–319 [317]. doi:10.1111/j.1746-8361.1948.tb00703.x.
17. ^ Rosenfeld, L. (). 'Niels Bohr's contribution to epistemology', pp. 522–535 in Selected Papers of Léon Rosenfeld, Cohen, R.S., Stachel, J.J. (editors), D. Riedel, Dordrecht, ISBN 978-90-277-0652-2, p. 531: "Moreover, the complete definition of the phenomenon must essentially contain the indication of some permanent mark left upon a recording device which is part of the apparatus; only by thus envisaging the phenomenon as a closed event, terminated by a permanent record, can we do justice to the typical wholeness of the quantal processes."
18. ^ Kwiat P. G.; et al. (1999). "Ultrabright source of polarization-entangled photons". Physical Review A. 60: R773–R776. Bibcode:1999PhRvA..60..773K. doi:10.1103/physreva.60.r773.
19. ^ G 't Hooft, The Free-Will Postulate in Quantum Mechanics [1]; Entangled quantum states in a local deterministic theory [2]
20. ^ David Pratt: "David Bohm and the Implicate Order". Appeared in Sunrise magazine, February/March 1993, Theosophical University Press
21. ^ Michael K.-H. Kiessling: "Misleading Signposts Along the de Broglie–Bohm Road to Quantum Mechanics", Foundations of Physics, volume 40, number 4, 2010, pp. 418–429 (abstract)
23. ^ D. Bohm and B. J. Hiley, The Undivided Universe, Routledge, 1993, ISBN 0-415-06588-7.
24. ^ Wayne C. Myrvold (2003). "On some early objections to Bohm's theory" (PDF). International Studies in the Philosophy of Science. 17 (1): 8–24. doi:10.1080/02698590305233.
25. ^ a b c d e David Bohm (1957). Causality and Chance in Modern Physics. Routledge & Kegan Paul and D. Van Nostrand. p. 110. ISBN 0-8122-1002-6.
26. ^ B. J. Hiley: Some remarks on the evolution of Bohm's proposals for an alternative to quantum mechanics, 30 January 2010
27. ^ Roger Colbeck; Renato Renner (2011). "No extension of quantum theory can have improved predictive power". Nature Communications. 2 (8): 411. arXiv:1005.5173Freely accessible. Bibcode:2011NatCo...2E.411C. doi:10.1038/ncomms1416.
28. ^ Giancarlo Ghirardi; Raffaele Romano (2013). "Onthological models predictively inequivalent to quantum theory". Physical Review Letters. 110: 170404. arXiv:1301.2695Freely accessible. Bibcode:2013PhRvL.110q0404G. doi:10.1103/PhysRevLett.110.170404. PMID 23679689.
• Albert Einstein, Boris Podolsky, and Nathan Rosen, "Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?" Physical Review 47, 777–780 (1935).
• John Stewart Bell, "On the Einstein–Podolsky–Rosen paradox", Physics 1, (1964) 195–200. Reprinted in Speakable and Unspeakable in Quantum Mechanics, Cambridge University Press, 2004.
• Wolfgang Pauli, letter to M. Fierz dated 10 August 1954, reprinted and translated in K. V. Laurikainen, Beyond the Atom: The Philosophical Thought of Wolfgang Pauli, Springer-Verlag, Berlin, 1988, p. 226.
• Werner Heisenberg, Physics and Beyond: Encounters and Conversations, translated by A. J. Pomerans, Harper & Row, New York, 1971, pp. 63–64.
• Claude Cohen-Tannoudji, Bernard Diu and Franck Laloë, Mecanique quantique (see also Quantum Mechanics translated from the French by Susan Hemley, Nicole Ostrowsky, and Dan Ostrowsky; John Wiley & Sons 1982) Hermann, Paris, France. 1977.
• P. S. Hanle, Indeterminacy before Heisenberg: The Case of Franz Exner and Erwin Schrödinger, Historical Studies in the Physical Sciences 10, 225 (1979).
• Asher Peres and Wojciech Zurek, "Is quantum theory universally valid?" American Journal of Physics 50, 807 (1982).
• Wojciech Zurek "Environment-induced superselection rules" Physical Review D 26 1862. 1982.
• Max Jammer, "The EPR Problem in Its Historical Development", in Symposium on the Foundations of Modern Physics: 50 years of the Einstein–Podolsky–Rosen Gedankenexperiment, edited by P. Lahti and P. Mittelstaedt (World Scientific, Singapore, 1985), pp. 129–149.
• Arthur Fine, The Shaky Game: Einstein Realism and the Quantum Theory, University of Chicago Press, Chicago, 1986.
• Thomas Kuhn. Black-Body Theory and the Quantum Discontinuity, 1894–1912 Chicago University Press. 1987.
• Asher Peres, Quantum Theory: Concepts and Methods, Kluwer, Dordrecht, 1993.
• Carlton M. Caves and Christopher A. Fuchs, "Quantum Information: How Much Information in a State Vector?", in The Dilemma of Einstein, Podolsky and Rosen – 60 Years Later, edited by A. Mann and M. Revzen, Ann. Israel Physical Society 12, 226–257 (1996).
• Carlo Rovelli. "Relational quantum mechanics" International Journal of Theoretical Physics 35 1637–1678. 1996.
• Roland Omnès, Understanding Quantum Mechanics, Princeton University Press, 1999.
• Roman Jackiw and Daniel Kleppner, "One Hundred Years of Quantum Physics", Science, Vol. 289 Issue 5481, p. 893, August 2000.
• Orly Alter and Yoshihisa Yamamoto (2001). Quantum Measurement of a Single System (PDF). Wiley-Interscience. 136 pp. doi:10.1002/9783527617128. ISBN 9780471283089. Slides.
• Erich Joos, et al., Decoherence and the Appearance of a Classical World in Quantum Theory, 2nd ed., Berlin, Springer, 2003.
• Wojciech Zurek (2003). "Decoherence and the transition from quantum to classical — Revisited", arXiv:quant-ph/0306072 (An updated version of Physics Today, 44:36–44 (1991) article)
• Wojciech Zurek, "Decoherence, einselection, and the quantum origins of the classical" in Reviews of Modern Physics, vol.75, (715).
• Asher Peres and Daniel Terno, "Quantum Information and Relativity Theory", Reviews of Modern Physics 76 (2004) 93.
• Roger Penrose, The Road to Reality: A Complete Guide to the Laws of the Universe, Alfred Knopf 2004.
• Maximilian Schlosshauer, "Decoherence, the Measurement Problem, and Interpretations of Quantum Mechanics", in Reviews of Modern Physics, vol.76, pages 1267–1305, 2005.
• Federico Laudisa and Carlo Rovelli. "Relational Quantum Mechanics" The Stanford Encyclopedia of Philosophy (Fall 2005 Edition).
• Marco Genovese, "Research on hidden variable theories: a review of recent progresses", in Physics Reports, vol.413, 2005.
External links[edit] |
8dae37f1e804ba1a | Take the 2-minute tour ×
This question already has an answer here:
While reviewing some quantum mechanics, I cam across a very interesting situation. For a potential barrier, if a particle has an energy $E$ less than the potential barrier $V_0$, it is possible to measure it inside the potential barrier or the classically forbidden region quantum mechanically.
But, if we calculate reflection and transmission coefficients for the wave, we find out $R=1$ and $T=0$ which means the wave will be fully reflected and no transmission will take place. But still there is probability of finding it inside the potential barrier. How?
share|improve this question
marked as duplicate by Brandon Enright, Qmechanic Dec 16 '13 at 10:34
there should be some transmission for a potential barrier. If you are thinking of just a single step, then there is of course no transmission, but you can still find particles in the high potential region. – NowIGetToLearnWhatAHeadIs Sep 17 '13 at 2:02
Possible duplicates: physics.stackexchange.com/q/11188/2451 and links therein. – Qmechanic Sep 17 '13 at 2:19
2 Answers 2
This is wholly analogous to the evanescent optical field that arises in the classically (i.e. computed by raytracing) forbidden region beyond a totally internally reflecting interface between two optical mediums. I analyse this situation in my answer here and there is also a great plot of the situation in Ruslan's answer here.
Let's think of a 1D barrier and nonrelativistically (the relativistic Dirac equation leads to the Klein paradox, so we'll let that sleeping hound lie). What's happenning is that on the low energy side (say $U=0$) of the barrier, the 1D Schrödinger equation has the form (setting all the constants to unity):
$$({\rm d}_x^2 + E)\psi(x) = 0$$
whence solutions of the form $\psi = A e^{\pm i \sqrt{E} x}$. These have real well-defined momentum (they are momentum eigenstates of the observable -i {\rm d_x}$ so they represent travelling waves.
But on the high energy side we've got:
$$({\rm d}_x^2 + E-U)\psi(x) = 0$$
with $U>E$ so that we get evanescent waves i.e. plane waves $\psi = A e^{-\sqrt{U-E} x}$ with imaginary wavenumber and imaginary momentum.
What this means is that the particle is not propagating in these classically forbidden regions: it is "standing still" and its field of influence decays swiftly with increasing depth into the forbidden region. In the optical, total internal reflexion case, an evanescent wave is one comprising pulsating electric and magnetic field energy shuttling periodically to and fro between neighbouring regions. Here the situation is mathematically precisely analogous, but we haven't got shuttling energy, we've got shuttling probability density. The regions of highest provbability quivver, there are instantanous probability current fluxes back and forth between neighbouring regions such that averaged over a period the nett probability flux is nought. So if there is an incident wave, all its probabilty flux will ultimately get reflected, owing to the lack of nett average flux in the classically forbidden region, just as a totally internally reflected wave is theoretically 100% energy efficient (there is no loss) notwitstanding the penetration of the field of influence. Therefore, one expects the reflexion coefficient to be unity for an infinitely thick classical region. But as in my and Ruslan's answers, if there is only a thin forbidden region, the evanescent waves reach the other side and become propagating waves with real momentum again, again precisely analogously to the optical situation.
share|improve this answer
Reflection and transmission coefficients always refer to plane waves, i.e. the components of the wave function that go straight to infinity. Which means, if you measure sufficiently deep inside of the barrier, the probability will indeed approach zero.
However, such waves are not the only solutions to the Schrödinger equation inside a constant potential. With energy below $V_0$, you rather get exponentially decaying solutions. Which is obviously unphysical in an unbounded space range: the exponential is unbounded, the wavefunction would thus not be renormalisable.
But that need not be an issue close to a boundary: if you "cut off" the unbounded part, the remaining "tail", which approaches zero quickly, is normalisable all right. So at a boundary between $0$ and $V_0>E$, you get a transition between a plane-waves solution and an exponential one, called evanescent wave. It is this evanescent wave that shows the amplitude of tunneling into the barrier. It also shows how unlikely tunnelling gets if you measure a little farther from the boundary, due to the exponential decay. In that sense, the transmission is indeed zero: the particle doesn't properly enter the barrier, it's rather "squeezed in but bounces back out immediately", for a classical analogy.
share|improve this answer
Have a look at hyperphysics.phy-astr.gsu.edu/hbase/quantum/barr.html . The exponential decay is within the barrier. on the other side it is again a propagating wave with imposed continuity conditions on the boundary.Note it is a probability wave. – anna v Dec 16 '13 at 8:19
|
020c471beda36912 | Thursday, 28 February 2013
Spaced out
** Grumpy old man alert **
A genuine web form
There's a campaign in the US saying that every student should have the opportunity to learn to write computer code. And I agree - it is a great thing to be able to do. However I do warn people who take that step into coding that it may turn them into a grumpy person, when they know how easy it is to do something... yet discover that so many idiot coders have failed to do so.
My particular gripe today is web forms that ask for telephone numbers. The standard format for a phone number in the UK is something like '01793 765432' with a dialling code, a space, and the local number. Of course you don't type the space into your phone, but it is the correct format. Yet increasingly web forms are rejecting phone numbers with spaces in. Use one and you will get an error message pointing out the folly of your ways.
But here's the thing, and the reason why I bring up coding skills in the same breath. Once upon a time I used to write code in C, the programming language (in various variants) most used to write software for personal computers. In C there is a standard, built-in function that does a pretty simple, but occasionally useful thing. It takes a string of characters and returns that same string with the spaces removed. That's all it does. Frankly it's easy enough to write yourself, but it usually comes in the library of functions. Plain and simple. So guess what? If you have a form and put the text in it through this routine, you will get a phone number without spaces. No need to wrap the user on the knuckles for getting it right - simply change the format to the one you want.
This is a fundamental of good user interface design. If you know what's wrong, don't complain, just fix it. If your programming can't cope with the input it's your fault, not the users.
Get your act together guys. This is pathetic.
Wednesday, 27 February 2013
Why green heretics are essential
You may recall a little while ago I rather revelled in being labelled a 'green heretic'. I've just come across a report that emphasises why it is so important to indulge in a little green heresy (hopefully dodging the green inquisition) and think beyond the knee-jerk reaction as I suggest we should in Ecologic.
According to this piece in the The Register (usually more a source of great IT information), climate change isn't high on ordinary people's priorities. Well, that's not surprising at the moment with worldwide recession and financial difficulties. When you are trying to keep your business afloat, or to keep your house from being repossessed and your children fed, it is difficult to pay too much attention to the finer points of improving the environment - important though they remain. But the interesting thing about the data discussed in that article is that people gave climate change a similarly low importance when times were good. It's not just politicians that have a short term view - so do the rest of us, apart from a vocal few.
This isn't, by the way, a matter of climate change denial - it is rather accepting that things are the way they are, but not being prepared to do anything much about it. There's a strong parallel with the overweight/obesity situation in the Western world. Most of us know perfectly well that eating too much fat and sugar is going to make us overweight. We don't deny it. But we can't resist the siren call of fish and chips or pizza, or hamburgers and a coke, or whatever our high fat, high sugar diet of choice is. Because each incremental meal doesn't really make much difference. The impact is from long term use, but the experience is short term, one meal at a time.
Those who call people like me green heretics argue that we put too much stock on engineering ourselves out of trouble. They say that we have too much faith in science and technology to counter our mistreatment of the environment. But this survey says to me that such people have got things back to front. Because short of draconian restrictions from government, something that isn't going to happen in a democracy if said government wants to be re-elected, we are not going to change our ways. Why would we, if we don't consider it a priority? We consume our energy in small chunks, just like those hamburgers. So the faith in science and technology solutions aren't based an overweening belief in the power of science, they are instead our only hope.
I'm not saying give up your recycling, or everyone should go out and buy a Hummer. Of course we can and should still do as much as we can as individuals to counter climate change. But it clearly isn't going to be anywhere enough. We aren't going to radically change the way we live our lives because it will help with climate change in the future. It is just not going to happen. And so we have to find science and engineering solutions to counter the way we live. And the sooner we put more effort into that, the better.
Tuesday, 26 February 2013
Communicate, communicate, communicate
Yesterday I had a phone call from the company that makes the accounting software I use. Apparently they want to expand their offering and wondered if I'd be interested in, for instance, a CRM system. I told them politely NTY.
You can do it if you communicate
(If you aren't au fait with TLAs (three letter acronyms), CRM is 'Customer Relationship Management' - in essence a database of your customers that enables you to give the impression of knowing them to some extent. I made up NTY - 'no thank you.')
I have two types of customer - big direct people like publishers and smaller (in terms of income, though obviously hugely important) indirect people like book readers. Neither of these really fit the CRM profile. I only interact with a handful of publishers - a 'to do' list (I use Apple's Reminders) is fine for that. As far as readers go it's a very ad-hoc relationship that doesn't need that kind of management.
However, the whole business made me think about customer service, something I used to major on in my airline days. I even wrote a (rather good) book about it. But in all honesty, and at risk of sounding like Tony Blair, there are just three things you first need to concentrate on to improve customer service/relations - communicate, communicate and communicate. It's ridiculously simple, and yet so many companies are really bad at it.
Absolutely the most important time for this is when things go wrong. This is why airlines/airports often have a terrible customer service reputation. Because when things go wrong they don't communicate quickly or frequently enough. I've had a really good example recently of a company not quite getting it right.
The company that hosts my websites/mail servers, Webfusion generally is very good. But for over 24 hours now, the server that hosts many of my websites and all my frequently used email addresses has been totally out of action. This is a long time to be offline - and the only way to come out of it smelling of roses is for Webfusion to communicate a lot. So how have they done?
First fail: they didn't let me know that the server was down, I had to find out the hard way. Of course they couldn't email me on my regular address - but they should have both a backup address and, crucially, a number they can text an alert to. Simple, easy to do, informative.
Second partial success: they have a support site with a status page. This has been giving updates of progress (or lack of it). That's good. But it has not been done well enough. The updates have been too infrequent and give no timescale for the next information. I'd suggest every two hours is a sensible period, and each update should tell you when the next one will be up. (And, of course, that next one mustn't be late.)
Third fail: they raised hopes then dashed them. Bad move. Last night at around 10pm they posted a message saying 'We are performing final checks on the system with a view to have the server back online as soon as possible.' This made it sound as if it was about to come back any minute, but it still wasn't working at 8am next morning. A supplementary message wasn't posted until 8.43am.
As of 8.46am when I write this, the server is still not back, so as well as continuing communication with more information content (the latest message is still very much a holding one with no idea of timescale/what's happening), I don't yet know what they will do when it's fixed. There is the biggest challenge of all. I suspect they will just issue a 'Sorry, but it's fixed now,' message. That really won't do. After a failure like this, the final communication should offer some kind of compensation too - and generous at that. This is the point you can turn a disaster into a triumph, but not if you are careless about communicating or stingy with your recompense.
So there we have it. Communicate, communicate, communicate. Simples.
Monday, 25 February 2013
Cloudy working
Have you managed to ignore the concept of 'the cloud' on your computer so far? If so, could I politely suggest that you are bonkers?
Let's think of a humble file on my computer - say an article I've spent hours writing. Let's think of the pre-cloud me working with it. What happens if my computer hard disc dies horribly? Well, I will have backed it up. Probably. Certainly within the last week. Shame I only wrote it yesterday. Or let's imagine I'm 50 miles from home and suddenly need to access it. Well, tough. I can't.
Now let's think of post-cloud me. My hard disc dies? No problem, the latest version of the article is in the cloud and I can access it from any other computer. Need to get it remotely? No problem again. I can get to it from my phone, my iPad or a computer.
But isn't it complicated/expensive? No! It isn't. It's simple and for the kind of space you need for documents (if not photos and music) it's free.
The main cloud storage facilities work by setting up a new folder on your computer. Put anything in that folder and it is automatically duplicated in the cloud. Any changes are synchronized. That's all there is to it. Of course you have to slightly change your way of working, in that your documents will sit in that folder rather than your computer's Documents or My Documents folder - but that's hardly a chore.
Personally I use three free cloud services - Dropbox, Google Drive and SkyDrive (Microsoft's version). They come with 2 Gb, 5 Gb and 7 Gb of free storage respectively - plenty for any document work. There's not a huge amount to choose between them in practice, though each offers subtly different features (you can see a useful comparison here). I would tend to recommend SkyDrive for Microsoft Office documents as the web version has built in Office editing tools, so you can tweak a document even if you don't have access to Office. There's no reason to use all three particularly, though I find it quite useful having different spaces for different types of documents.
If you want to go the whole hog and have all your photos and music up there, you can do that too, though typically you would go over these limits and need to pay an annual fee for extra space.
I come back to my original statement. If you aren't using one of these services, why not, short of inertia and folly? Rush out and do it today. Bear in mind that you are not tying yourself into only having access to your files when you have internet access. The folder is actually on your computer, it only synchronizes with a copy in the cloud. But why would you want to miss out on automated instant backup and the ability to access your files away from home?
Friday, 22 February 2013
Where's quantum Wally?
It's appropriate that the episode of The Big Bang Theory I watched last night featured as part of a kind of nerd Olympics a competitive game of Where's Wally (or to be precise, the US variant of the book Where's Waldo? - why did they change the name?) where contestants were handicapped by playing without their glasses. There's something very Where's Wally? like about my topic today, which is the puzzle of where a quantum particle like an electron or a photon is when you aren't looking at it.
Here's the thing. Unless you observe it and pin it down, a quantum particle's location is fuzzy. The position is described by Schrödinger equation, which tells you the probability of it being in any location, but this isn't like saying I can give a probability for where I am in the house, because in practice I actually will be in one, specific place at any one time. In the case of the quantum particle the probability is all there is. The best we can say is the particle literally doesn't have a fixed location, we can only say that there are various probabilities of where we would find it if we looked.
Young's slits
So far so good. This fuzziness of location is important, because it explains why it is easy to think that quantum particles are, in fact, waves. The classic example is Young's slits. A couple of hundred years ago, Thomas Young set light through a pair of slits so that the beams merged on a screen behind them. The result was a series of light and dark bands. This was used to show that light is a wave, because waves 'interfere' with each other. If two waves meet and they are both rippling up at the same point, the result will be a bigger wave. If one is rippling up and the other down, they will cancel each other out. And for light this would produce those dark and light bands.
The only thing is, we now know that light can be described as quantum particles - photons. And even if you send those one at a time through Young's slits, the interference pattern still builds up. The same goes for other quantum particles like electrons, which produce exactly the same effect.
The reason I bring this up now is I am currently reading for review a book called The Quantum Divide, which is about half way between a popular science book and a student text, and which takes some offence at the way this quantum strangeness is often portrayed. What popular science books frequently say, and I think I have been guilty of this, is that a particle goes through both slits (i.e. is in more than one place at a time) and interferes with itself. Gerry and Bruno (I'm not being overly familiar, these are surnames), the authors of the book, take serious umbrage at this wording:
We hasten to emphasize that quantum mechanics does not actually say that an electron can be in two places at once, hence the use of the proviso that quantum mechanics only superficially appears to allow the electron to be in two places at once.
And just to be clear why they are having to make this distinction:
Quantum theory does not predict that an object can be in two or more places at once. The false notion to the contrary often appears in the popular press, but is due to a naïve interpretation of quantum mechanics.
(Their emphasis in both cases.) In one way I am very grateful to G&B as I will be more careful with my wording as a result of this. But on the other hand, I think this typifies how scientists trying to present science to the general public can get a bad press. They no doubt think what they are doing is emphasizing their precision, but this comes across as the worst kind of academic bitching. More seriously, I think G&B are in danger of throwing the baby out with the bathwater. All descriptive models of something as counter-intuitive as quantum theory are inevitably approximations - what they are really doing here is not liking someone else's language, even though it gets the basic point across better than their version in some ways.
The idea that, say, a photon goes through both slits and interferes with itself is technically inaccurate. The photon is not in any place with certainty, but is only described by the wave equation, which gives it a probability of being located in each slit. And the interference is of that probability wave, not the photon itself. However it is arguable that the probability wave is the photon - that it is the only meaningful description of the photon and as such if we say that the probability wave has values at both slits and interferes with itself, we are surely not stretching things too far to replace the clumsy 'the probability wave' with 'the photon'.
Okay, it's not perfectly accurate, and we certainly should explain what we are doing and probably often fail to do so. But I don't think this is any more a problem than when physicists speak of the big bang or dark matter as if it they are facts, rather than our current best accepted theories. To make a big thing of it as G&B do is, frankly, to miss the point.
Thursday, 21 February 2013
When the remake is better
We see a steady stream of TV programmes from the UK crossing the Atlantic and being remade for a US audience. Often the result is to water down the original, or to lose the point of the show. I would be hard pressed to think of a remake done this way that was better than the original... until now.
I was a great fan of the Michael Dobbs 1990 TV drama and books House of Cards with its scheming chief whip (and, eventually, Prime Minister) Francis Urquhart. Everything about it was superb. Ian Richardson made a brilliant Machiavellian villain, and the show was groundbreaking in its use of direct access to the camera, with Richardson making asides to the audience and giving us wonderful knowing looks. And, of course there was that catchphrase 'You might very well think that; I couldn't possibly comment.'
Now Netflix has remade the programme from the original shortish series to a 13 part epic starring Kevin Spacey. And it is excellent. Although the original was great, this is genuinely better. It's more sophisticated, more complex and brilliantly done. Spacey, as Francis Underwood (presumably Urquhart was too difficult a name) has that same ruthless charm and uses the camera aside to great effect.
I've had Netflix for a while now, and am very impressed with it, but never expected they would produce their own drama of this quality - well done guys.
There is only one slight problem with the storyline, which will involve a spoiler, so I will briefly discuss that further down the page - otherwise this piece is finished.
Image from Wikipedia
One of the most interesting aspects of the new series was how it would end. The original book had Urquhart commit suicide at the end, when the house of cards collapses. But in the TV show he throws the reporter off the tower of Westminster Palace, gets away with it and goes on to become Prime Minister. (Hence the two subsequent novels are follow ups to the TV show, rather than the first novel.) The Netflix series does neither, but ends unresolved just before the house of cards collapses. Arggh! Nasty people. Hopefully that means there will be a sequel.
Wednesday, 20 February 2013
The dreaded CFCs
My latest podcast for the Royal Society of Chemistry features the compounds we loved to boo and hiss before carbon dioxide became our favourite baddy - chlorofluorcarbons or CFCs. Remember the hole in the ozone layer? That's the one.
Amazingly, the same man who came up with CFCs also was responsible for adding lead to petrol - if ever the environmental movement wanted a bad guy, Thomas Midgley was their man. Yet he got a medal for it - because at the time his work seemed brilliant. So slap on the factor 50 and hurry along to the RSC compounds site - or if you've five minutes to spare now, click to to have a listen to my podcast on CFCs.
Tuesday, 19 February 2013
Puff, puff
Would you take note of an endorsement
by this man?
An almost inevitable feature of a new book is some gushing comment on the cover - known in the trade as a 'puff'. Publishers love these - but do they make any difference?
We're all familiar with the kind of thing that is put on comedy books, where someone goes entirely over the top:
Before I read this book I was in a deep depression and thought my life was pointless. Now, thanks to this book, I realize life is worth living. It is quite literally the best thing since sliced bread, and I would pay £1,000 for a copy. Or give up a lesser organ.
This reflects an underlying concern - does the person giving the 'puff' really mean it? Have they even read the book? Were they paid to say nice things? And do you care what they think?
There certainly needs to be careful selection of anyone endorsing a book. Some publishers seem to think 'if they're famous, that's good enough' - but it certainly isn't the right approach for me as a reader. An enthusiastic comment from an Only Way is Essex 'star' is not going to get me heading excitedly for the tills. In fact the matching can be quite subtle. I have seen popular maths books endorsed by Carol Vorderman, and I can imagine the publishers rubbing their hands in glee. Who could be more mathsy than our Carol? Sorry guys, that just doesn't work for me, or I suspect, most of the audience for popular maths books. I think I do take a little notice if the person making the comment is someone relevant who I respect - but that's about all.
The good news is that people don't get paid for making these comments (well, I never have) - and personally I would certainly never endorse a book without reading it first. Nor would I say something I didn't mean. However, it's also fair to say that a one-liner comment can't really capture an overall view. If you take a look at my review of the book I'm quoted on in the photo here, I liked it, definitely - but there are a few balancing remarks too. A puff inevitably provides only one side of the balance.
Overall, then, I don't think such cover endorsements are a bad thing, nor would I totally ignore them. But I only give them a pretty small weighting in my buying decision - and I suspect you do too.
Monday, 18 February 2013
Hitting QI in the asteroids
The 2009 Orionid Meteor Shower (Courtesy of NASA)
I dearly love QI, the BBC's quirky factoid quiz show hosted by Stephen Fry. However, as I've pointed out before, the programme's 'aren't you thick, nah nah' attitude makes it fair game when its researchers get it wrong, as they regularly do.
One of their rather nice revelations on the show was that if you see a meteor crash to earth (a timely subject given the recent Russian meteor strike) and rush to pick up the resultant remains - a meteorite - it won't, as you might expect be incredibly hot, but instead it is likely to be painfully cold. This is because as it comes in through the atmosphere lots of fragments will be ablated from the surface, carrying away the heat, preventing the remnant from heating up.
Unfortunately, according to NASA scientist Donald Yeomans in his book Near-Earth Objects, they haven't got it quite right.
With a rocky object - which is most of them - this ablation will indeed occur, but Yeomans reckons the temperature on impact will be 'little more than ambient temperature' rather than freezing cold. It's worse though. Metallic meteorites are significantly rarer - but they wouldn't ablate in the same way and would retain most of their energy. They would be hot. And here's the clincher. Most meteorites that get found are metallic. Because they stand out. How could you find a few small bits of stone at ambient temperature in a field, say? The majority of meteorites that are ever found are metallic - and chances are these would have been hot on arrival.
Sorry QI - failed again.
Thanks, by the way, for the messages of encouragement to come back to the blog. I had to have a break for medical reasons, which still may cause further interruptions, but at the moment, we're back on track.
Monday, 4 February 2013
I'm going out and I may be some time
For reasons beyond my control (as they say) this will be my last blog post for a little while - apologies to my regular readers (both of you) - I will resume as soon as I am able.
In the good old days, when TV broadcasts used to break down, as they did with considerable regularity (it only seems to be Channel 4 these days), they used to be kind enough to play you some music while you wait. In that same spirit, I thought I might leave you for the moment with a piece of music.
I was going to give you one of my favourite Tudorbethan masterpieces, but I thought instead I'd make it this excellent example of rather more modern but still exciting choral music:
Friday, 1 February 2013
Quantum vampires
The title of this piece may sound like the latest Young Adult bestseller (and I reserve all rights, thank you very much) but I was thinking of something a little more down to earth... yet at the same time rather more exciting. Even though it has been out for a while, I get more emails about my book on quantum entanglement, The God Effect than almost any other. I think it is because the subject is mind-boggling even to physicists (the whole business really started when Einstein wrote a paper to the effect of 'this entanglement stuff is so weird, quantum theory must be wrong'... but it was Einstein who was proved to be in error), and because some of the applications are amazing, notably quantum teleportation, which produces an effect like a Star Trek transporter on the scale of quantum particles.
I just thought I'd give a taster for the subject by using a little extract from The God Effect where the scientists head for the sewers:
By 2004, [Anton] Zeilinger and his team had achieved teleportation over significantly greater distances – in fact across the river Danube. A year after their ground-breaking long range transmission of entangled photons across the Danube, the Austrian team was back in the sewers, this time achieving teleportation from one side of the river to the other. (Quantum entanglement experimenters seem to have a functional relationship with the sewage system rivaled only by utility workers and Buffy the Vampire Slayer.)
As always with teleportation there are two “channels”, one carrying the entangled particles, the other transmitting the conventional information that will be used to complete the teleportation process. Entangled photons were pumped along a fiber optic cable running through the sewer system under the Danube, while the conventional information was beamed by microwave for 600 meters across the river. This may not seem ground breaking, but as their paper in Nature commented they had “demonstrated quantum teleportation over a long distance and with high fidelity under real world conditions outside a laboratory”.
This is a significant blow to those critics who have said that teleportation could only occur under highly controlled laboratory conditions. The team points out that it’s also possible that this technique could be used as an alternative approach to make quantum repeaters that would enable entanglement to be shared anywhere around the world, as teleporting an entangled particle transfers the particle’s state, including its entanglement.
As this demonstrates, even if there never can be “real” teleportation of physical objects, it doesn’t mean that this isn’t a development of great importance. Teleportation even its limited form will prove vitally useful in making quantum computers real. Quantum computers rely on qubits, where information is stored in the quantum state of a particle. This may be very powerful, but it is also difficult to transfer that quantum state safely from place to place within the computer – or even between two quantum computers.
Teleportation means that, provided a supply of entangled particles is available, something that is now relatively easy to achieve, a qubit can be teleported from one place to another using only a conventional link. So a satellite pumping out entangled photons to two locations could enable quantum computers in two locations to swap qubits over the Internet. |
34b00836785c4e71 | Tuesday, September 20, 2005
Faster than light or not
I don't know about the rest of the world but here in Germany Prof. Günter Nimtz is (in)famous about his display experiments that he claims show that quantum mechanical tunneling happens instantaneously rather than according to Einstein causality. In the past, he got a lot of publicity for that and according to Heise online he has at least a new press release.
All these experiments are similar: First of all, he is not doing any quantum mechanical experiments but uses the fact that the Schrödinger equation and the wave equation share similarities. And as we know, in vacuum, Maxwell's equations imply the wave eqaution, so he uses (classical) microwaves as they are much easier to produce than matter waves of quantum mechanics.
So what he does is to send a pulse these microwaves through a region where "classically" the waves are forbidden meaning that they do not oscillate but decay exponentially. Typically this is a waveguide with diameter smaller than the wavelength.
Then he measures what comes out at the other side of the wave guide. This is another pulse of microwave which is of course much weaker so needs to be amplified. Then he measures the time difference between the maximum of the weaker pulse and the maximum of the full pulse when the obstruction is removed. What he finds is that the weak pulse has its maximum earlier than the unobstructed pulse and he interprets that as that the pulse has travelled through the obstruction at a speed greater than the speed of light.
Anybody with a decent education will of course immediately object that the microwaves propagate (even in the waveguide) according to Maxwell's equations which have special relativity build in. Thus, unless you show that Maxwell's equations do not hold anymore (which Nimtz of course does not claim) you will never be able to violate Einstein causality.
For people who are less susceptible to such formal arguments, I have written a little programm that demonstrates what is going on. The result of this programm is this little movie.
The programm simulates the free 2+1 dimensional scalar field (of course again obeying the wave equation) with Dirichlet boundary conditions in a certain box that is similar to the waveguide: At first, the field is zero everywhere in the strip-like domain. Then the field on the upper boundary starts to oscillate with a sine wave and indeed the field propagates into the strip. The frequency is chosen such that that wave can in fact propagate in the strip.
(These are frames 10, 100, and 130 of the movie, further down are 170, 210, and 290.) About in the middle the strip narrows like in the waveguide. You can see the blob of field in fact enters the narrower region but dies down pretty quickly. In order to see anything, in the display (like for Nimtz) in the lower half of the picture I amplify the field by a factor of 1000. After the obstruction ends, the field again propagates as in the upper bit.
What this movie definitely shows is that the front of the wave (and this is what you would use to transmit any information) everywhere travels at the same speed (that if light). All what happens is that the narrow bit acts like a high pass filter: What comes out undisturbed is in fact just the first bit of the pulse that more or less by accident has the same shape as a scaled down version of the original pulse. So if you are comparing the timing of the maxima you are comparing different things.
Rather, the proper thing to compare would be the timing when the field first gets above a certain level, one that is actually reached by the weakend pulse. Then you would find that the speed of propagation is the same independant of the obstruction being there or not.
Update: Links updated DAMTP-->IUB
Ralf Buelow said...
Hello: I am working at the museum in Mannheim where the most recent experiment of Professor Nimtz' is displayed (well, I'm actually the guy who asked him to install it in our Einstein exhibition), and I can assure you that it is not identical with the old Mozart experiment. See www.heise.de/newsticker/meldung/64004 for details.
Sincerely Ralf Buelow
Wolfgang said...
Nice exercise!
What program did you use to get from the output of your C program to the mpg file ?
Robert said...
That's 'convert' from the ImageMagick suite.
Wolfgang said...
thank you for the hint. I think I puzzled it together:
i) your C program generates PGM files (portable grayscale image)
ii) the convert utilty from of ImageMagick reads your PGM files and generates the mpg movie.
Very nice and thanks.
B Nettles said...
So, are you saying that the "tunnelling" causes a phase shift in the propagated wave? Is this simply a phase velocity trick?
Not that I'm supporting Nimtz (far from it), but why haven't we heard guys like Neill Tyson screaming about this?
Robert said...
It's not clear to me what you mean by 'phase shift' for a wave that is not monochromatic.
What I am saying is that of a wave packet it's only the first bit (length order of width of obstruction) that get through, the rest is reflected.
Again, phase vs. group velocity is well defined only at a single frequency/wave length, it varies if you have dispersion (like here). The speed of information (given in terms of the support of the Green's function) is determined by the limit of d omega/dk of infinite omega or k->0. And this is c here and in all of Nimtz' experiments. |
348eaa543a21aba7 | Saturday, October 31, 2020
What is Energy? Is Energy Conserved?
Wednesday, October 28, 2020
A new model for the COVID pandemic
I spoke with the astrophysicist Niayesh Afshordi about his new pandemic model, what he has learned from it, and what the reaction has been to it.
You find more information about Niayesh's model on his website, and the paper is here.
You can join the chat with him tomorrow (Oct 29) at 5pm CET (noon Eastern Time) here.
Herd Immunity, Facts and Numbers
Today, I have a few words to say about herd immunity because there’s very little science in the discussion about it. I also want to briefly comment on the Great Barrington Declaration and on the conversation about it that we are not having.
First things first, herd immunity refers to that stage in the spread of a disease when a sufficient fraction of the population has become immune to the pathogen so that transmission will be suppressed. It does not mean that transmission stops, it means that on the average one infected person gives the disease to less than one new person, so outbreaks die out, instead of increasing.
It’s called “herd immunity” because it was first observed about a century ago in herds of sheep and, in some ways we’re not all that different from sheep.
Now, herd immunity is the only way a disease that is not contained will stop spreading. It can be achieved either by exposure to the live pathogen or by vaccination. However, in the current debate about the pursuit of herd immunity in response to the ongoing COVID outbreak, the term “herd immunity” has specifically been used to refer to herd immunity achieved by exposure to the virus, instead of waiting for a vaccine.
Second things second, when does a population reach herd immunity? The brief answer is, it’s complicated. This should not surprise you because whenever someone claims the answer to a scientific question is simple they either don’t know what they’re talking about, or they’re lying. There is a simple answer to the question when a population reaches herd immunity. But it does not tell the whole story.
This simple answer is that one can calculate the fraction of people who must be immune for herd immunity from the basic reproduction number R_0 as 1- 1/R_0.
Why is that? It’s because, R_0 tells you how many new people one infected person infects on the average. But the ones who will get ill are only those which are not immune. So if 1-1/R_0 is the fraction of people who are immune, then the fraction of people who are not immune is 1/R_0.
This then means that average number of susceptible people that one infected person reaches is R_0 * 1/R_0 which is 1. So, if the fraction of immune people has reached 1 – 1/R_0, then one infected person will on the average only pass on the disease to one other person, meaning at any level of immunity above 1 – 1/R_0, outbreaks will die out.
R_0 for COVID has been estimated with 2 to 3, meaning that the fraction of people who must have had the disease for herd immunity would be around 50 to 70 percent. For comparison, R_0 of the 1918 Spanish influenza has been estimated with 1.4 to 2.8, so that’s comparable to COVID, and R_0 of measles is roughly 12 to 18, with a herd immunity threshold of about 92-95%. Measles is pretty much the most contagious disease known to mankind.
That was the easy answer.
Here’s the more complicated but also more accurate answer. R_0 is not simply a property of the disease. It’s a number that quantifies successful transmission, and therefore depends on what measures people take to protect themselves from infection, such as social distancing, wearing masks, and washing hands. This is why epidemiologists use in their models instead an “effective R” coefficient that can change with time and with people’s habits. Roughly speaking this means that if we would all be very careful and very reasonable, then herd immunity would be easier to achieve.
But that R can change is not the biggest problem with estimating herd immunity. The biggest problem is that the simple estimate I just talked about assumes that everybody is equally likely to meet other people, which is just not the case in reality.
In realistic populations under normal circumstances, some people will have an above average number of contacts, and others below average. Now, people who have many contacts are likely to contribute a lot to the spread of the disease, but they are also likely to be among the first ones to contract the disease, and therefore become immune early on.
This means, if you use information about the mobility patterns, social networks, and population heterogeneity, the herd immunity threshold is lower because the biggest spreaders are the first to stop spreading. Taking this into account, some researchers have estimated the COVID herd immunity threshold to be more like 40% or in some optimistic cases even below 20%.
How reliable are these estimates? To me it looks like these estimates are based on more or less plausible models with little empirical data to back them up. And plausible models are the ones one should be especially careful with.
So what do the data say? Unfortunately, so far not much. The best data on herd immunity so far come from an antibody study in the Brazilian city of Manaus. That’s one of the largest cities in Brazil, with an estimated population of two point one million.
According to data from the state government, there have been about fifty five thousand COVID cases and two thousand seven hundred COVID fatalities in Manaus. These numbers likely underestimate the true number of infected and deceased people because the Brazilians have not been testing a lot. Then again, most countries did not have sufficient testing during the first wave.
If you go by the reported numbers, then about two point seven percent of the population in Manaus tested positive for COVID at some point during the outbreak. But the study which used blood donations collected during this time found that about forty-four percent of the population developed antibodies in the first three months of the outbreak.
After that, the infections tapered off without interventions. The researchers estimate the total number of people who eventually developed antibodies with sixty-six percent. The researchers claim that’s a sign for herd immunity. Please check the information below the video for references.
The number from this Brazilian study, about 44 to 66 percent seems consistent with the more pessimistic estimates for the COVID herd immunity threshold. But what it took to get there is not pretty.
2700 dead of about two million that’s more than one in a thousand. Hospitals run out of intensive care units, people were dying in the corridors, the city was scrambling to find ways to bury the dead quickly enough. And that’s even though the population of Manaus is pretty young; just six percent are older than sixty years. For comparison, in the United States, about 20% are above sixty years of age, and older people are more likely to die from the disease.
There are other reasons one cannot really compare Manaus with North America or Europe. Their health care system was working at almost full capacity even before the outbreak, and according to data from the world bank, in the Brazilian state which Manaus belongs to, the state of Amazonas, about 17% of people live below the poverty line. Also, most of the population in Manaus did not follow social distancing rules and few of them wore masks. These factors likely contributed to the rapid spread of the disease.
And I should add that the paper with the antibody study in Manaus has not yet been peer reviewed. There are various reasons why the people who donated blood may not be representative for the population. The authors write they corrected for this, but it remains to be seen what the reviewers think.
You probably want to know now how close we are to reaching herd immunity. The answer is, for all can tell, no one knows. That’s because, even leaving aside that we have no reliable estimates on the herd immunity threshold, we do not how many people have developed immunity to COVID.
In Manaus, the number of people who developed antibodies was more than twenty times higher than the number of those who tested positive. As of date in the United States about eight point five million people tested positive for COVID. The total population is about 330 Million.
This means about 2.5% of Americans have demonstrably contracted the disease, a rate that just by number is similar to the rate in Manaus, though Manaus got there faster with devastating consequences. However, the Americans are almost certainly better at testing and one cannot compare a sparsely populated country, like the United States, with one densely populated city in another country. So, again, it’s complicated.
For the Germans here, in Germany so far about 400,000 people have tested positive. That’s about 0.5 percent of the population.
And then, I should not forget to mention that antibodies are not the only way one can develop immunity. There is also T-cell immunity, that is basically a different defense mechanism of the body. The most relevant difference for the question of herd immunity is that it’s much more difficult to test for T-cell immunity. Which is why there are basically no data on it. But there are pretty reliable data by now showing that immunity to COVID is only temporary, antibody levels fall after a few months, and reinfections are possible, though it remains unclear how common they will be.
So, in summary: Estimates for the COVID herd immunity threshold range from roughly twenty percent to seventy percent, there are pretty much no data to make these estimates more accurate, we have no good data on how many people are presently immune, but we know reinfection is possible after a couple of months.
Let us then talk about the Great Barrington Declaration. The Great Barrington Declaration is not actually Great, it was merely written in place called Great Barrington. The declaration was formulated by three epidemiologists, and according to claims on the website, it has since been signed by more than eleven thousand medical and public health scientists.
The supporters of the declaration disapprove of lockdown measures and instead argue for an approach they call Focused Protection. In their own words:
The reaction by other scientists and the media has been swift and negative. The Guardian called the Barrington Declaration “half baked” “bad science” and “a folly”. A group of scientists writing for The Lancet called it a “dangerous fallacy unsupported by scientific evidence”, the US American infectious disease expert Fauci called it “total nonsense,” and John Barry, writing for the New York Times, went so far to suggest it be called “mass murder” instead of herd immunity. Though they later changed the headline.
Some of the criticism focused on the people who wrote the declaration, or who they might have been supported by. These are ad hominem attacks that just distract from the science, so I don’t want to get into this.
The central element of the criticism is that the Barrington Declaration is vague on how the “Focused Protection” is supposed to work. This is a valid criticism. The declaration left it unclear just to how identify those at risk and how to keep them efficiently apart from the rest of the population, which is certainly difficult to achieve. But of course if no one is thinking about how to do it, there will be no plan for how to do it.
Why am I telling you this? Because I think all these commentators missed the point of the Barrington Declaration. Let us take this quote from an opinion piece in the Guardian in which three public health scientists commented on the idea of focused protection:
“It’s time to stop asking the question “is this sound science?” We know it is not.”
It’s right that arguing for focused protection is not sound science, but that is not because it’s not sound, it’s because it’s not science. It’s a value decision.
The authors of the Great Barrington Declaration point out, entirely correctly, that we are in a situation where we have only bad options. Lockdown measures are bad, pursuing natural herd immunity is also bad.
The question is, which is worse, and just what do you mean by “worse”. This is the decision that politicians are facing now and it is not obvious what is the best strategy. This decision must be supported by data for the consequences of each possible path of action. So we need to discuss not only how many people die from COVID and what the long-term health problems may be, but also how lockdowns, social distancing, and economic distress affect health and health care. We need proper risk estimates with uncertainties. We do not need scientists who proclaim that science tells us what’s the right thing to do.
I hope that this brief survey of the literature on herd immunity was helpful for you.
I have a video upcoming later today with astrophysicist (!) Niayesh Afshordi from Perimeter Institute about his new pandemic model (!!), so stay tuned. He will also join the Thursday chat at 5pm CET. Note that this is the awkward week of the year when the NYC-Berlin time shift is only 5 hours, so that's noon Eastern Time.
Saturday, October 24, 2020
How can climate be predictable if weather is chaotic?
Today I want to take on a question that I have not been asked, but that I have seen people asking – and not getting a good answer. It’s how can scientists predict the climate in one hundred years if they cannot make weather forecasts beyond two weeks – because of chaos. The answer they usually get is “climate is not weather”, which is correct, but doesn’t really explain it. And I think it’s actually a good question. How is it possible that one can make reliable long-term predictions when short-term predictions are impossible. That’s what we will talk about today.
Now, weather forecast is hideously difficult, and I am not a meteorologist, so I will instead just use the best-known example of a chaotic system, that’s the one studied by Edward Lorenz in 1963.
Edward Lorenz was a meteorologist who discovered by accident that weather is chaotic. In the 1960s, he repeated a calculation to predict a weather trend, but rounded an initial value from six digits after the point to only three digits. Despite the tiny difference in the initial value, he got wildly different results. That’s chaos, and it gave rise to the idea of the “butterfly effect”, that the flap of a butterfly in China might cause a tornado in Texas two weeks later.
To understand better what was happening, Lorenz took his rather complicated set of equations and simplified it to a set of only three equations that nevertheless captures the strange behavior he had noticed. These three equations are now commonly known as the “Lorenz Model”. In the Lorenz model, we have three variables, X, Y, and Z and they are functions of time, that’s t. This model can be interpreted as a simplified description of convection in gases or fluids, but just what it describes does not really matter for our purposes.
The nice thing about the Lorenz model is that you can integrate the equations on a laptop. Let me show you one of the solutions. Each of the axes in this graph is one of the directions X, Y, Z, so the solution to the Lorenz model will be a curve in these three dimensions. As you can see, it circles around two different locations, back and forth.
It's not only this one solution which does that, actually all the solutions will end up doing circles close by these two places in the middle, which is called the “attractor”. The attractor has an interesting shape, and coincidentally happens to look somewhat like a butterfly with two parts you could call “wings”. But more relevant for us is that the model is chaotic. If we take two initial values that are very similar, but not exactly identical, as I have done here, then the curves at first look very similar, but then they run apart, and after some while they are entirely uncorrelated.
These three dimensional plots are pretty, but it’s somewhat hard to see just what is going on, so in the following I will merely look at one of these coordinates, that is the X-direction. From the three dimensional plot, you expect that the value in X-direction will go back and forth between two numbers, and indeed that’s what happens.
Here you see again the curves I previously showed for two initial values that differ by a tiny amount. At first the two curves look pretty much identical, but then they diverge and after some time they become entirely uncorrelated. As you see, the curves flip back and forth between positive and negative values, which correspond to the two wings of the attractor. In this early range, maybe up to t equals five, you would be able to make a decent weather forecast. But after that, the outcome depends very sensitively on exactly what initial value you used, and then measurement error makes a good prediction impossible. That’s chaos.
Now, I want to pretend that these curves say something about the weather, maybe they describes the weather on a strange planet where it either doesn’t rain at all or it pours and the weather just flips back and forth between these two extremes. Besides making the short-term weather forecast you could then also ask what’s the average rainfall in a certain period, say, a year.
To calculate this average, you would integrate the curve over some period of time, and then divide by the duration of that period. So let us plot these curves again, but for a longer period. Just by eyeballing these curves you’d expect the average to be approximately zero. Indeed, I calculated the average from t equals zero to t equals one hundred, and it comes out to be approximately zero. What this means is that the system spends about equal amounts of time on each wing of the attractor.
To stick with our story of rainfall on the weird planet, you can imagine that the curve shows deviations from a reference value that you set to zero. The average value depends on the initial value and will fluctuates around zero because I am only integrating over a finite period of time, so I arbitrarily cut off the curve somewhere. If you’d average over longer periods of time, the average would inch closer and closer to zero.
What I will do now is add a constant to the equations of the Lorenz model. I will call this constant “f” and mimics what climate scientists call “radiative forcing”. The radiative forcing is the excess power per area that Earth captures due to increasing carbon dioxide levels. Again that’s relative to a reference value.
I want to emphasize again that I am using this model only as an analogy. It does not actually describe the real climate. But it does make a good example for how to make predictions in chaotic systems.
Having said that, let us look again at how the curves look like with the added forcing. These are the curves for f equals one. Looks pretty much the same as previously if you ask me. f=2. I dunno. You wouldn’t believe how much time I have spent staring at these curves for this video. f=3. Looks like the system is spending a little more time in this upper range, doesn’t it? f=4. Yes, it clearly does. And just for fun, If you turn f up beyond seven or so, the system will get stuck on one side of the attractor immediately.
The relevant point is now that this happens for all initial values. Even though the system is chaotic, one clearly sees that the response of the system does have a predictable dependence on the input parameter.
To see this better, I have calculated the average of these curves as a function of the “radiative forcing”, for a sample of initial values. And this is what you get. You clearly see that the average value is strongly correlated with the radiative forcing. Again, the scatter you see here is because I am averaging over a rather arbitrarily chosen finite period.
What this means is that in a chaotic system, the trends of average values can be predictable, even though you cannot predict the exact state of the system beyond a short period of time. And this is exactly what is happening in climate models. Scientists cannot predict whether it will rain on June 15th, 2079, but they can very well predict the average rainfall in 2079 as a function of increasing carbon dioxide levels.
This video was sponsored by Brilliant, which is a website that offers interactive courses on a large variety of topics in science and mathematics. In this video I showed you the results of some simple calculations, but if you really want to understand what is going on, then Brilliant is a great starting point. Their courses on Differential Equations I and II, probabilities and statistics cover much of the basics that I used here.
To support this channel and learn more about Brilliant, go to and sign up for free. The first 200 subscribers using this link will get 20 percent off the annual premium subscription.
You can join the chat about this week’s video, tomorrow (Sunday, Oct 25) at 5pm CET, here.
Thursday, October 22, 2020
Particle Physicists Continue To Make Empty Promises
[This is a transcript of the video embedded below]
Hello and welcome back to my YouTube channel. Today I want to tell you how particle physicists are wasting your money. I know that’s not nice, but at the end of this video I think you will understand why I say what I say.
What ticked me off this time was a comment published in Nature Physics, by CERN Director-General Fabiola Gianotti and Gian Giudice, who is Head of CERN's Theory Department. It’s called a comment, but what it really is is an advertisement. It’s a sales pitch for their next larger collider for which they need, well, a few dozen billion Euro. We don’t know exactly because they are not telling us how expensive it would be to actually run the thing. When it comes to the question what the new mega collider could do for science, they explain:
“A good example of a guaranteed result is dark matter. A proton collider operating at energies around 100 TeV [that’s the energy of the planned larger collider] will conclusively probe the existence of weakly interacting dark-matter particles of thermal origin. This will lead either to a sensational discovery or to an experimental exclusion that will profoundly influence both particle physics and astrophysics.”
Let me unwrap this for you. The claim that dark matter is a guaranteed result, followed by weasel words about weakly interacting and thermal origin, is the physics equivalent of claiming “We will develop a new drug with the guaranteed result of curing cancer” followed by weasel words to explain, well, actually it will cure a type of cancer that exists only theoretically and has never been observed in reality. That’s how “guaranteed” this supposed dark matter result is. They guarantee to rule out some very specific hypotheses for dark matter that we have no reason to think are correct in the first place. What is going on here?
What’s going on is that particle physicists have a hard time understanding that when Popper went on about how important it is that a scientific hypothesis is falsifiable, he did not mean that a hypothesis is scientific just because it is falsifiable. There are lots of falsifiable hypotheses that are clearly unscientific.
For example, YouTube will have a global blackout tomorrow at noon central time. That’s totally falsifiable. If you give me 20 billion dollars, I can guarantee that I can test this hypothesis. Of course it’s not worth the money. Why? Because my hypothesis may be falsifiable, but it’s unscientific because it’s just guesswork. I have no reason whatsoever to think that my blackout prediction is correct.
The same is the case with particle physicists’ hypotheses for dark matter that you are “guaranteed” to rule out with that expensive big collider. Particle physicists literally have thousands of theories for dark matter, some thousandths of which have already been ruled out. Can they guarantee that a next larger collider can rule out some more? Yes. What is the guaranteed knowledge we will gain from this? Well, the same as the gain that we have gotten so far from ruling out their dark matter hypotheses, which is that we still have no idea what dark matter is. We don’t even know it is a particle to begin with.
Let us look again at that quote, they write:
No. The most likely outcome will be that particle physicists and astrophysicsts will swap their current “theories” for new “theories” according to which the supposed particles are heavier than expected. Then they will claim that we need yet another bigger collider to find them. What makes me think this will happen? Am I just bitter or cynical, as particle physicists accuse me? No, I am just looking at what they have done in the past.
For example, here’s an oldie but goldie, a quote from a piece written by string theorists David Gross and Edward Witten for the Wall street journal
They wrote this in 1996. Well, clearly that didn’t pan out.
And because it’s so much fun, I want to read you a few more quotes. But they are a little bit more technical, so I have to give you some background first.
When particle physicists say “electroweak scale” or “TeV scale” they mean energies that can be tested at the Large Hadron Collider. When they say “naturalness” they refer to a certain type of mathematical beauty that they think a theory should fulfil.
You see, particle physicists think it is a great problem that theories which have been experimentally confirmed are not as beautiful as particle physicists think nature should be. They have therefore invented a lot of particles that you can add to the supposedly ugly theories to remedy the lack of beauty. If this sounds like a completely non-scientific method, that’s because it is. There is no reason this method should work, and it does as a matter of fact not work. But they have done this for decades and still have not learned that it does not work.
Having said that, here is a quote from Giudice and Rattazzi in 1998. That’s the same Guidice who is one of the authors of the new nature commentary that I mentioned in the beginning. In 1998 he wrote:
Higher energies, at that time, were the energies that have now been tested at the Large Hadron Collider. The supposed naturalness problem was the reason they thought the LHC should see new fundamental particles besides the Higgs. This has not happened. We now know that those arguments were wrong.
In 2004, Fabiola Gianotti, that’s the other author of the new Nature Physics comment, wrote:
So, she claimed in 2004 that the LHC would see new particles besides the Higgs. Whatever happened to this prediction? Did they ever tell us what they learned from being wrong? Not to my knowledge.
These people were certainly not the only ones who repeated this story. Here is for example a quote from the particle physicist Michael Dine, who wrote in 2007:
Well, you know what, it hasn’t done either.
I could go on for quite some while quoting particle physicists who made wrong predictions and now pretend they didn’t, but it’s rather repetitive. I have collected the references here. Let us instead talk about what this means.
All these predictions from particle physicists were wrong. There is no shame in being wrong. Being wrong is essential for science. But what is shameful is that none of these people ever told us what they learned from being wrong. They did not revise their methods for making predictions for new particles. They still use the same methods that have not worked for decades. Neither did they do anything about the evident group think in their community. But they still want more money.
The tragedy is I actually like most of these particle physicists. They are smart and enthusiastic about science and for the most part they’re really nice people.
But look, they refuse to learn from evidence. And someone has to point it out: The evidence clearly says their methods are not working. Their methods have led to thousands of wrong predictions. Scientists should learn from failure. Particle physicists refuse to learn.
Particle physicists, of course, are entirely ignoring my criticism and instead call me “anti-science”. Let that sink in for a moment. They call me “anti-science” because I say we should think about where to best invest science funding, and if you do a risk-benefit assessment it is clear that building a bigger collider is not currently a good investment. It is both high risk and low benefit. We would be better off if we'd instead invest in the foundations of quantum mechanics and astroparticle physics. They call me “anti-science” because I ask scientists to think. You can’t make up this shit.
Frankly, the way that particle physicists behave makes me feel embarrassed I ever had anything to do with their field.
Saturday, October 17, 2020
I Can’t Forget [Remix]
In the midst of the COVID lockdown I decided to remix some of my older songs. Just as I was sweating over the meters, I got an email out of the blue. Steven Nikolic from Canada wrote he’ be interested in remixing some of my old songs. A few months later, we have started a few projects together. Below you see the first result, a remake of my 2014 song “I Can’t Forget”.
If you want to see what difference 6 years can make, in hardware, software, and wrinkles, the original is here.
David Bohm’s Pilot Wave Interpretation of Quantum Mechanics
Today I want to take on a topic many of you requested, repeatedly. That is David Bohm’s approach to Quantum Mechanics, also known as the Pilot Wave Interpretation, or sometimes just Bohmian Mechanics. In this video, I want to tell you what Bohmian mechanics is, how it works, and what’s good and bad about it.
Ahead, I want to tell you a little about David Bohm himself, because I think the historical context is relevant to understand today’s situation with Bohmian Mechanics. David Bohm was born in 1917 in Pennsylvania, in the Eastern United States. His early work in physics was in the areas we would now call plasma physics and nuclear physics. In 1951, he published a textbook about quantum mechanics. In the course of writing it, he became dissatisfied with the then prevailing standard interpretation of quantum mechanics.
The standard interpretation at the time was that pioneered by the Copenhagen group – notably Bohr, Heisenberg, and Schrödinger – and is today usually referred to as the Copenhagen Interpretation. It works as follows. In quantum mechanics, everything is described by a wave-function, usually denoted Psi. Psi is a function of time. One can calculate how it changes in time with a differential equation known as the Schrödinger equation. When one makes a measurement, one calculates probabilities for the measurement outcomes from the wave-function. The equation by help of which one calculates these probabilities is known as Born’s Rule. I explained in an earlier video how this works.
The peculiar thing about the Copenhagen Interpretation is now that it does not tell you what happens before you make a measurement. If you have a particle described by a wave-function that says the particle is in two places at once, then the Copenhagen Interpretation merely says, at the moment you measure the particle it’s either here or there, with a certain probability that follows from the wave-function. But how the particle transitioned from being in two places at once to suddenly being in only one place, the Copenhagen Interpretation does not tell you. Those who advocate this interpretation would say that’s a question you are not supposed to ask because, by definition, what happens before the measurement is not measureable.
Bohm was not the only one dismayed that the Copenhagen people would answer a question by saying you’re not supposed to ask it. Albert Einstein didn’t like it either. If you remember, Einstein famously said “God does not throw dice”, by which he meant he does not believe that the probabilistic nature of quantum mechanics is fundamental. In contrast to what is often claimed, Einstein did not think quantum mechanics was wrong. He just thought it is probabilistic the same way classical physics is probabilistic, namely, that our inability to predict the outcome of a measurement in quantum mechanics comes from our lack of information. Einstein thought, in a nutshell, there must be some more information, some information that is missing in quantum mechanics, which is why it appears random.
This missing information in quantum mechanics is usually called “hidden variables”. If you knew the hidden variables, you could predict the outcome of a measurement. But the variables are “hidden”, so you can only calculate the probability of getting a particular outcome.
Back to Bohm. In 1952, he published two papers in which he laid out his idea for how to make sense of quantum mechanics. According to Bohm, the wave-function in quantum mechanics is not what we actually observe. Instead, what we observe are particles, which are guided by the wave-function. One can arrive at this interpretation in a few lines of calculation. I will not go through this in detail because it’s probably not so interesting for most of you. Let me just say you take the wave-function apart into an absolute value and a phase, insert it into the Schrödinger equation, and then separate the resulting equation into its real and imaginary part. That’s pretty much it.
The result is that in Bohmian mechanics the Schrödinger equation falls apart into two equations. One describes the conservation of probability and determines what the guiding field does. The other determines the position of the particle, and it depends on the guiding field. This second equation is usually called the “guiding equation.” So this is how Bohmian mechanics works. You have particles, and they are guided by a field which in return depends on the particle.
To use Bohm’s theory, you then need one further assumption, one that tells what the probability is for the particle to be at a certain place in the guiding field. This adds another equation, usually called the “quantum equilibrium hypothesis”. It is basically equivalent to Born’s rule and says that the probability for finding the particle in a particular place in the guiding field is given by the absolute square of the wave-function at that place. Taken together, these equations – the conservation of probability, the guiding equation, and the quantum equilibrium hypothesis – give the exact same predictions as quantum mechanics. The important difference is that in Bohmian mechanics, the particle is really always in only one place, which is not the case in quantum mechanics.
As they say, a picture speaks a thousand words, so let me just show you how this looks like for the double slit experiment. These thin black curves you see here are the possible ways that the particle could go from the double slit to the screen where it is measured by following the guiding field. Just which way the particle goes is determined by the place it started from. The randomness in the observed outcome is simply due to not knowing exactly where the particle came from.
What is it good for? The great thing about Bohmian mechanics is that it explains what happens in a quantum measurement. Bohmian mechanics says that the reason we can only make probabilistic predictions in quantum mechanics is just that we did not exactly know where the particle initially was. If we measure it, we find out where it is. Nothing mysterious about this. Bohm’s theory, therefore, says that probabilities in quantum mechanics are of the same type as in classical mechanics. The reason we can only predict probabilities for outcomes is because we are missing information. Bohmian mechanics is a hidden variables theory, and the hidden variables are the positions of those particles.
So, that’s the big benefit of Bohmian mechanics. I should add that while Bohm was working on his papers, it was brought to his attention that a very similar idea had previously been put forward in 1927 by De Broglie. This is why, in the literature, the theory is often more accurately referred to as “De Broglie Bohm”. But de Broglie’s proposal did, at the time, not attract much attention. So how did physicists react to Bohm’s proposal in fifty-two. Not very kindly. Niels Bohr called it “very foolish”. Leon Rosenfeld called it “very ingenious, but basically wrong”. Oppenheimer put it down as “juvenile deviationism”. And Einstein, too, was not convinced. He called it “a physical fairy-tale for children” and “not very hopeful.”
Why the criticism? One of the big disadvantages of Bohmian mechanics, that Einstein in particular disliked, is that it is even more non-local than quantum mechanics already is. That’s because the guiding field depends on all the particles you want to measure. This means, if you have a system of entangled particles, then the guiding equation says the velocity of one particle depends on the velocity of the other particles, regardless of how far away they are from each other.
That’s a problem because we know that quantum mechanics is strictly speaking only an approximation. The correct theory is really a more complicated version of quantum mechanics, known as quantum field theory. Quantum field theory is the type of theory that we use for the standard model of particle physics. It’s what people at CERN use to make predictions for their experiments. And in quantum field theory, locality and the speed of light limit, are super-important. They are built very deeply into the math.
The problem is now that since Bohmian mechanics is not local, it has turned out to be very difficult to make a quantum field theory out of it. Some have made attempts, but currently there is simply no Pilot Wave alternative for the Standard Model of Particle Physics. And for many physicists, me included, this is a game stopper. It means the Bohmian approach cannot reproduce the achievements of the Copenhagen Interpretation.
Bohmian mechanics has another odd feature that seems to have perplexed Albert Einstein and John Bell in particular. It’s that, depending on the exact initial position of the particle, the guiding field tells the particle to go either one way or another. But the guiding field has a lot of valleys where particles could be going. So what happens with the empty valleys if you make a measurement? In principle, these empty valleys continue to exist. David Deutsch has claimed this means “pilot-wave theories are parallel-universes theories in a state of chronic denial.”
Bohm himself, interestingly enough, seems to have changed his attitude towards his own theory. He originally thought it would in some cases give predictions different from quantum mechanics. I only learned this recently from a Biography of Bohm written by David Peat. Peat writes
“Bohm told Einstein… his only hope was that conventional quantum theory would not apply to very rapid processes. Experiments done in a rapid succession would, he hoped, show divergences from the conventional theory and give clues as to what lies at a deeper level.”
However, Bohm had pretty much the whole community against him. After a particularly hefty criticism by Heisenberg, Bohm changed course and claimed that his theory made the same predictions as quantum mechanics. But it did not help. After this, they just complained that the theory did not make new predictions. And in the end, they just ignored him.
So is Bohmian mechanics in the end just a way of making you feel better about the predictions of quantum mechanics? Depends on whether or not you think the “quantum equilibrium hypothesis” is always fulfilled. If it is always fulfilled, the two theories give the same predictions. But if the equilibrium is actually a state the system must first settle in, as the name certainly suggests, then there might be cases when this assumption is not fulfilled. And then, Bohmian mechanics is really a different theory. Physicists still debate today whether such deviations from quantum equilibrium can happen, and whether we can therefore find out that Bohm was right."" This video was sponsored by Brilliant which is a website that offers interactive courses on a large variety of topics in science and mathematics. I always try to show you some of the key equations, but if you really want to understand how to use them, then Brilliant is a great starting point. For this video, for example, I would recommend their courses on differential equations, linear algebra, and quantum objects. To support this channel and learn more about Brilliant, go to and sign up for free. The first 200 subscribers using this link will get 20 percent off the annual premium subscription.
You can join the chats on this week’s topic using the Converseful app in the bottom right corner:
Saturday, October 10, 2020
You don’t have free will, but don’t worry.
Today I want to talk about an issue that must have occurred to everyone who spent some time thinking about physics. Which is that the idea of free will is both incompatible with the laws of nature and entirely meaningless. I know that a lot of people just do not want to believe this. But I think you are here to hear what the science says. So, I will tell you what the science says. In this video I first explain why free will does not exist, indeed makes no sense, and then tell you why there are better things to worry about.
I want to say ahead that there is much discussion about free will in neurology, where the question is whether we subconsciously make decisions before we become consciously aware of having made one. I am not a neurologist, so this is not what I am concerned with here. I will be talking about free will as the idea that in this present moment, several futures are possible, and your “free will” plays a role for selecting which one of those possible futures becomes reality. This, I think, is how most of us intuitively think of free will because it agrees with our experience of how the world seems to works. It is not how some philosophers have defined free will, and I will get to this later. But first, let me tell you what’s wrong with this intuitive idea that we can somehow select among possible futures.
Last week, I explained what differential equations are, and that all laws of nature which we currently know work with those differential equations. These laws have the common property that if you have an initial condition at one moment in time, for example the exact details of the particles in your brain and all your brain’s inputs, then you can calculate what happens at any other moment in time from those initial conditions. This means in a nutshell that the whole story of the universe in every single detail was determined already at the big bang. We are just watching it play out.
These deterministic laws of nature apply to you and your brain because you are made of particles, and what happens with you is a consequence of what happens with those particles. A lot of people seem to think this is a philosophical position. They call it “materialism” or “reductionism” and think that giving it a name that ends on –ism is an excuse to not believe it. Well, of course you can insist to just not believe reductionism is correct. But this is denying scientific evidence. We do not guess, we know that brains are made of particles. And we do not guess, we know, that we can derive from the laws for the constituents what the whole object does. If you make a claim to the contrary, you are contradicting well-established science. I can’t prevent you from denying scientific evidence, but I can tell you that this way you will never understand how the universe really works.
So, the trouble with free will is that according to the laws of nature that we know describe humans on the fundamental level, the future is determined by the present. That the system – in this case, your brain – might be partly chaotic does not make a difference for this conclusion, because chaos is still deterministic. Chaos makes predictions difficult, but the future still follows from the initial condition.
What about quantum mechanics? In quantum mechanics some events are truly random and cannot be predicted. Does this mean that quantum mechanics is where you can find free will? Sorry, but no, this makes no sense. These random events in quantum mechanics are not influenced by you, regardless of exactly what you mean by “you”, because they are not influenced by anything. That’s the whole point of saying they are fundamentally random. Nothing determines their outcome. There is no “will” in this. Not yours and not anybody else’s.
Taken together we therefore have determinism with the occasional, random quantum jump, and no combination of these two types of laws allows for anything resembling this intuitive idea that we can somehow choose which possible future becomes real. The reason this idea of free will turns out to be incompatible with the laws of nature is that it never made sense in the first place. You see, that thing you call “free will” should in some sense allow you to choose what you want. But then it’s either determined by what you want, in which case it’s not free, or it’s not determined, in which case it’s not a will.
Now, some have tried to define free will by the “ability to have done otherwise”. But that’s just empty words. If you did one thing, there is no evidence you could have done something else because, well, you didn’t. Really there is always only your fantasy of having done otherwise.
In summary, the idea that we have a free will which gives us the possibility to select among different futures is both incompatible with the laws of nature and logically incoherent. I should add here that it’s not like I am saying something new. Look at the writing of any philosopher who understand physics, and they will acknowledge this.
But some philosophers insist they want to have something they can call free will, and have therefore tried to redefine it. For example, you may speak of free will if no one was in practice able to predict what you would do. This is certainly presently the case, that most human behavior is unpredictable, though I can predict that some people who didn’t actually watch this video will leave a comment saying they had no other choice than leaving their comment and think they are terribly original.
So, yeah, if you want you can redefine “free will” to mean “no one was able to predict your decision.” But of course your decision was still determined or random regardless of whether someone predicted it. Others have tried to argue that free will means some of your decisions are dominated by processes internal to your brain and not by external influences. But of course your decision was still determined or random, regardless of whether it was dominated by internal or external influences. I find it silly to speak of “free will” in these cases.
I also find it unenlightening to have an argument about the use of words. If you want to define free will in such a way that it is still consistent with the laws of nature, that is fine by me, though I will continue to complain that’s just verbal acrobatics. In any case, regardless of how you want to define the word, we still cannot select among several possible futures. This idea makes absolutely no sense if you know anything about physics.
What is really going on if you are making a decision is that your brain is running a calculation, and while it is doing that, you do not know what the outcome of the calculation will be. Because if you did, you wouldn’t have to do the calculation. So, the impression of free will comes from our self-awareness, that we think about what to do, combined with our inability to predict the result of that thinking before we’re done.
I feel like I must add here a word about the claim that human behavior is unpredictable because if someone told you that they predicted you’d do one thing, you could decide to do something else. This is a rubbish argument because it has nothing to do with human behavior, it comes from interfering with the system you are making predictions for. It is easy to see that this argument is nonsense because you can make the same claim about very simple computer codes.
Suppose you have a computer that evaluates whether an equation has a real-valued root. The answer is yes or no. You can predict the answer. But now you can change the algorithm so that if you input the correct answer, the code will output the exact opposite answer, ie “yes” if you predicted “no” and “no” if you predicted “yes”. As a consequence, your prediction will never be correct. Clearly, this has nothing to do with free will but with the fact that the system you make a prediction for gets input which the prediction didn’t account for. There’s nothing interesting going on in this argument.
Another objection that I’ve heard is that I should not say free will does not exist because that would erode people’s moral behavior. The concern is, you see, that if people knew free will does not exist, then they would think it doesn’t matter what they do. This is of course nonsense. If you act in ways that harm other people, then these other people will take steps to prevent that from happening again. This has nothing to do with free will. We are all just running software that is trying to optimize our well-being. If you caused harm, you are responsible, not because you had “free will” but because you embody the problem and locking you up will solve it.
There have been a few research studies that supposedly showed a relation between priming participants to not believe in free will and them behaving immorally. The problem with these studies, if you look at how they were set up, is that people were not primed to not believe in free will. They were primed to think fatalistically. In some cases, for example, they were being suggested that their genes determine their future, which, needless to say, is only partly correct, regardless of whether you believe in free will. And some more nuanced recent studies have actually shown the opposite. A 2017 study on free will and moral behavior concluded “we observed that disbelief in free will had a positive impact on the morality of decisions toward others”. Please check the information below the video for a reference.
So I hope I have convinced you that free will is nonsense, and that the idea deserves going into the rubbish bin. The reason this has not happened yet, I think, is that people find it difficult to think of themselves in any other way than making decisions drawing on this non-existent “free will.” So what can you do? You don’t need to do anything. Just because free will is an illusion does not mean you are not allowed to use it as a thinking aid. If you lived a happy life so far using your imagined free will, by all means, please keep on doing so.
If it causes you cognitive dissonance to acknowledge you believe in something that doesn’t exist, I suggest that you think of your life as a story which has not yet been told. You are equipped with a thinking apparatus that you use to collect information and act on what you have learned from this. The result of that thinking is determined, but you still have to do the thinking. That’s your task. That’s why you are here. I am curious to see what will come out of your thinking, and you should be curious about it too.
Why am I telling you this? Because I think that people who do not understand that free will is an illusion underestimate how much their decisions are influenced by the information they are exposed to. After watching this video, I hope, some of you will realize that to make the best of your thinking apparatus, you need to understand how it works, and pay more attention to cognitive biases and logical fallacies.
You can join the chat about this week's post using these links:
Chat #1 - Sunday, October 11 @ 9 AM PST / 12PM EST / 6PM CEST
Chat #2 - Tuesday, October 13 @ 9 AM PST / 12PM EST / 6PM CEST
Thursday, October 08, 2020
[Guest Post] New on BackRe(action): Real-Time Chat Rooms
[This post is written by Ben Alderoty.]
For those who’ve been keeping tabs, my team and I have been working with Sabine since earlier this year to give commenters on her site more ways to talk. Based on your feedback, we’re launching a new way to make that happen: real-time chat rooms. Here’s how they’ll work.
Chat rooms (chats) live in the bottom right corner of the blog. For the time being, they are only available on Desktop with support for mobile devices to come soon. Unlike traditional, always-available chat rooms, chats on BackRe(action) happen at scheduled times. This ensures people will be there at the same time as you and the conversation can happen in real-time. Chats start at their scheduled times and end when everyone has left.
You’ll see the first couple of chats have already been scheduled when you open the app. The topic for these chats is Sabine’s upcoming post on free will she is releasing on Saturday. If you’re interested in attending, you can set up a reminder by clicking ‘Remind me’ and selecting either Email or Calendar. You can also share links to the chat by clicking the icon next to the chat name. We’ll be trying out different topics and times for chats based on feedback we receive.
The chats themselves happen right here on BackRe(action). You won’t need an account to participate, just a name (real, fake, pseudonym… anything works). Depending on how many people join, the group may be split into separate rooms to allow for better discussion. Chats will remain open for late joiners as long as there’s an active discussion taking place. Spectators are welcome too! All of the messages will disappear when the chat ends, so you’ll have to be there to see what’s said.
As a reminder, the first two chats are happening on:
Come to one or come to both! New chats will be up mid-next week for the week after.
So, what do you think? Are you ready for chat rooms on BackRe(action)? What topics do you want to talk about? Let us know what you think in the comments section or in the app via the ‘Give Feedback’ button below the chats.
Saturday, October 03, 2020
What are Differential Equations and how do they work?
|
4c6051dae48b7972 | thumb|A scanning tunneling microscopy image of [[pentacene molecules, which consist of linear chains of five carbon rings.]] A molecule is an electrically neutral group of two or more atoms held together by chemical bonds. Molecules are distinguished from ions by their lack of electrical charge. In quantum physics, organic chemistry, and biochemistry, the distinction from ions is dropped and ''molecule'' is often used when referring to polyatomic ions. In the kinetic theory of gases, the term ''molecule'' is often used for any gaseous particle regardless of its composition. This violates the definition that a molecule contain ''two or more'' atoms, since the noble gases are individual atoms. A molecule may be homonuclear, that is, it consists of atoms of one chemical element, as with two atoms in the oxygen molecule (O2); or it may be heteronuclear, a chemical compound composed of more than one element, as with water (two hydrogen atoms and one oxygen atom; H2O). Atoms and complexes connected by non-covalent interactions, such as hydrogen bonds or ionic bonds, are typically not considered single molecules. Molecules as components of matter are common. They also make up most of the oceans and atmosphere. Most organic substances are molecules. The substances of life are molecules, e.g. proteins, the amino acids they are made of, the nucleic acids (DNA & RNA), sugars, carbohydrates, fats, and vitamins. The nutrient minerals ordinarily are not molecules, e.g. iron sulfate. However, the majority of familiar solid substances on Earth are not made of molecules. These include all of the minerals that make up the substance of the Earth, soil, dirt, sand, clay, pebbles, rocks, boulders, bedrock, the molten interior, and the core of the Earth. All of these contain many chemical bonds, but are ''not'' made of identifiable molecules. No typical molecule can be defined for salts nor for covalent crystals, although these are often composed of repeating unit cells that extend either in a plane, e.g. graphene; or three-dimensionally e.g. diamond, quartz, sodium chloride. The theme of repeated unit-cellular-structure also holds for most metals which are condensed phases with metallic bonding. Thus solid metals are not made of molecules. In glasses, which are solids that exist in a vitreous disordered state, the atoms are held together by chemical bonds with no presence of any definable molecule, nor any of the regularity of repeating unit-cellular-structure that characterizes salts, covalent crystals, and metals.
Molecular science
The science of molecules is called ''molecular chemistry'' or ''molecular physics'', depending on whether the focus is on chemistry or physics. Molecular chemistry deals with the laws governing the interaction between molecules that results in the formation and breakage of chemical bonds, while molecular physics deals with the laws governing their structure and properties. In practice, however, this distinction is vague. In molecular sciences, a molecule consists of a stable system (bound state) composed of two or more atoms. Polyatomic ions may sometimes be usefully thought of as electrically charged molecules. The term ''unstable molecule'' is used for very reactive species, i.e., short-lived assemblies (resonances) of electrons and nuclei, such as radicals, molecular ions, Rydberg molecules, transition states, van der Waals complexes, or systems of colliding atoms as in Bose–Einstein condensate.
History and etymology
According to Merriam-Webster and the Online Etymology Dictionary, the word "molecule" derives from the Latin "moles" or small unit of mass. * Molecule (1794) – "extremely minute particle", from French ' (1678), from New Latin ', diminutive of Latin ' "mass, barrier". A vague meaning at first; the vogue for the word (used until the late 18th century only in Latin form) can be traced to the philosophy of Descartes. The definition of the molecule has evolved as knowledge of the structure of molecules has increased. Earlier definitions were less precise, defining molecules as the smallest particles of pure chemical substances that still retain their composition and chemical properties. This definition often breaks down since many substances in ordinary experience, such as rocks, salts, and metals, are composed of large crystalline networks of chemically bonded atoms or ions, but are not made of discrete molecules.
Molecules are held together by either covalent bonding or ionic bonding. Several types of non-metal elements exist only as molecules in the environment. For example, hydrogen only exists as hydrogen molecule. A molecule of a compound is made out of two or more elements. A homonuclear molecule is made out of two or more atoms of a single element. While some people say a metallic crystal can be considered a single giant molecule held together by metallic bonding, others point out that metals act very differently than molecules.
A covalent bond is a chemical bond that involves the sharing of electron pairs between atoms. These electron pairs are termed ''shared pairs'' or ''bonding pairs'', and the stable balance of attractive and repulsive forces between atoms, when they share electrons, is termed ''covalent bonding''.
Ionic bonding is a type of chemical bond that involves the electrostatic attraction between oppositely charged ions, and is the primary interaction occurring in ionic compounds. The ions are atoms that have lost one or more electrons (termed cations) and atoms that have gained one or more electrons (termed anions). This transfer of electrons is termed ''electrovalence'' in contrast to covalence. In the simplest case, the cation is a metal atom and the anion is a nonmetal atom, but these ions can be of a more complicated nature, e.g. molecular ions like NH4+ or SO42−. At normal temperatures and pressures, ionic bonding mostly creates solids (or occasionally liquids) without separate identifiable molecules, but the vaporization/sublimation such materials does produce small separate molecules where electrons are still transferred fully enough for the bonds to be considered ionic rather than covalent.
Molecular size
Most molecules are far too small to be seen with the naked eye, although molecules of many polymers can reach macroscopic sizes, including biopolymers such as DNA. Molecules commonly used as building blocks for organic synthesis have a dimension of a few angstroms (Å) to several dozen Å, or around one billionth of a meter. Single molecules cannot usually be observed by light (as noted above), but small molecules and even the outlines of individual atoms may be traced in some circumstances by use of an atomic force microscope. Some of the largest molecules are macromolecules or supermolecules. The smallest molecule is the diatomic hydrogen (H2), with a bond length of 0.74 Å. Effective molecular radius is the size a molecule displays in solution. The table of permselectivity for different substances contains examples.
Molecular formulas
Chemical formula types
The chemical formula for a molecule uses one line of chemical element symbols, numbers, and sometimes also other symbols, such as parentheses, dashes, brackets, and ''plus'' (+) and ''minus'' (−) signs. These are limited to one typographic line of symbols, which may include subscripts and superscripts. A compound's empirical formula is a very simple type of chemical formula. It is the simplest integer ratio of the chemical elements that constitute it. For example, water is always composed of a 2:1 ratio of hydrogen to oxygen atoms, and ethanol (ethyl alcohol) is always composed of carbon, hydrogen, and oxygen in a 2:6:1 ratio. However, this does not determine the kind of molecule uniquely – dimethyl ether has the same ratios as ethanol, for instance. Molecules with the same atoms in different arrangements are called isomers. Also carbohydrates, for example, have the same ratio (carbon:hydrogen:oxygen= 1:2:1) (and thus the same empirical formula) but different total numbers of atoms in the molecule. The molecular formula reflects the exact number of atoms that compose the molecule and so characterizes different molecules. However different isomers can have the same atomic composition while being different molecules. The empirical formula is often the same as the molecular formula but not always. For example, the molecule acetylene has molecular formula C2H2, but the simplest integer ratio of elements is CH. The molecular mass can be calculated from the chemical formula and is expressed in conventional atomic mass units equal to 1/12 of the mass of a neutral carbon-12 (12C isotope) atom. For network solids, the term formula unit is used in stoichiometric calculations.
Structural formula
Molecular geometry
Molecular spectroscopy
Molecular spectroscopy deals with the response (spectrum) of molecules interacting with probing signals of known energy (or frequency, according to Planck's formula). Molecules have quantized energy levels that can be analyzed by detecting the molecule's energy exchange through absorbance or emission. Spectroscopy does not generally refer to diffraction studies where particles such as neutrons, electrons, or high energy X-rays interact with a regular arrangement of molecules (as in a crystal). Microwave spectroscopy commonly measures changes in the rotation of molecules, and can be used to identify molecules in outer space. Infrared spectroscopy measures the vibration of molecules, including stretching, bending or twisting motions. It is commonly used to identify the kinds of bonds or functional groups in molecules. Changes in the arrangements of electrons yield absorption or emission lines in ultraviolet, visible or near infrared light, and result in colour. Nuclear resonance spectroscopy measures the environment of particular nuclei in the molecule, and can be used to characterise the numbers of atoms in different positions in a molecule.
Theoretical aspects
The study of molecules by molecular physics and theoretical chemistry is largely based on quantum mechanics and is essential for the understanding of the chemical bond. The simplest of molecules is the hydrogen molecule-ion, H2+, and the simplest of all the chemical bonds is the one-electron bond. H2+ is composed of two positively charged protons and one negatively charged electron, which means that the Schrödinger equation for the system can be solved more easily due to the lack of electron–electron repulsion. With the development of fast digital computers, approximate solutions for more complicated molecules became possible and are one of the main aspects of computational chemistry. When trying to define rigorously whether an arrangement of atoms is ''sufficiently stable'' to be considered a molecule, IUPAC suggests that it "must correspond to a depression on the potential energy surface that is deep enough to confine at least one vibrational state". This definition does not depend on the nature of the interaction between the atoms, but only on the strength of the interaction. In fact, it includes weakly bound species that would not traditionally be considered molecules, such as the helium dimer, He2, which has one vibrational bound state and is so loosely bound that it is only likely to be observed at very low temperatures. Whether or not an arrangement of atoms is ''sufficiently stable'' to be considered a molecule is inherently an operational definition. Philosophically, therefore, a molecule is not a fundamental entity (in contrast, for instance, to an elementary particle); rather, the concept of a molecule is the chemist's way of making a useful statement about the strengths of atomic-scale interactions in the world that we observe.
See also
* Atom * Chemical polarity * Covalent bond * Diatomic molecule * List of compounds * List of interstellar and circumstellar molecules * Molecular biology * Molecular design software * Molecular engineering * Molecular geometry * Molecular Hamiltonian * Molecular ion * Molecular modelling * Molecular promiscuity * Molecular orbital * Non-covalent bonding * Periodic systems of small molecules * Small molecule * Comparison of software for molecular mechanics modeling * Van der Waals molecule * World Wide Molecular Matrix
External links
Molecule of the MonthSchool of Chemistry, University of Bristol
{{Authority control Category:Chemistry Category:Matter |
b378cd19c2c7d41a | Tullio Regge
Tullio Regge (July 11, 1931 – October 23, 2014) was an Italian theoretical physicist. He obtained the laurea in physics from the University of Turin in 1952 under the direction of Mario Verde and Gleb Wataghin, and a Ph.D. in physics from the University of Rochester in 1957 under the direction of Robert Marshak. From 1958 to 1959 Regge held a position at the Max Planck Institute for Physics where he worked with Werner Heisenberg. In 1961 he was appointed to the chair of Relativity at the University of Turin. He also held an appointment at the Institute for Advanced Study of Princeton from 1965 to 1979. He was emeritus professor at the Polytechnic University of Turin while contributing work at CERN as a visiting scientist. In 1959, Regge discovered a mathematical property of potential scattering in the Schrödinger equation—that the scattering amplitude can be thought of as an analytic function of the angular momentum, and that the position of the poles determines power-law growth rates of the amplitude in the purely mathematical region of large values of the cosine of the scattering angle. This formulation is known as Regge theory and had a strong influence on the High Energy Physics of the ‘60s and ‘70s so that his name was well-known everywhere, in particular in the Soviet Union. The prediction of Regge trajectories, a part of Regge’s theory which tries to explain slowly rising cross section of hadronic collisions at high energies, was first demonstrated at CERN at the Intersecting Storage Ring (ISR). In the early 1960s, Regge introduced Regge calculus, a simplicial formulation of general relativity. Regge calculus was the first discrete gauge theory suitable for numerical simulation, and an early relative of lattice gauge theory. In 1968 he and G. Ponzano developed a quantum version of Regge calculus in three space-time dimensions now known as the Ponzano-Regge model. This was the first of a whole series of state sum models for quantum gravity known as spin foam models. In mathematics, the model also developed into the Turaev-Viro model, an example of a quantum invariant. Other important contributions were the theory of vortices in liquid Helium and exact solution to the Ising model on finite lattices.
Awards
(with link)
1. 1964 Dannie Heineman Prize for Mathematical Physics
2. 1968 Prize Città di Como
3. 1979 Albert Einstein Award for Relativity
4. 1987 Cecil Powell Medal
5. 1996 Dirac Medal
6. 1997Marcel Grossmann Award
7. 2001 Pomeranchuk Prize
There is an asteroid that bears his name, 3778 Regge. |
b22667e3654957a3 |
Excitons in atomically thin transition metal dichalcogenides
Gang Wang Université de Toulouse, INSA-CNRS-UPS, LPCNO, 135 Av. Rangueil, 31077 Toulouse, France * Alexey Chernikov Department of Physics, University of Regensburg, D-93040 Regensburg, Germany Mikhail M. Glazov Ioffe Institute, 194021 St. Petersburg, Russia Tony F. Heinz Department of Applied Physics, Stanford University, Stanford, California 94305, USA and
SLAC National Accelerator Laboratory, 2575 Sand Hill Road, Menlo Park, California 94025, USA
Xavier Marie, Thierry Amand and Bernhard Urbaszek Université de Toulouse, INSA-CNRS-UPS, LPCNO, 135 Av. Rangueil, 31077 Toulouse, France
Atomically thin materials such as graphene and monolayer transition metal dichalcogenides exhibit remarkable physical properties resulting from their reduced dimensionality and crystal symmetry. The family of semiconducting transition metal dichalcogenides is an especially promising platform for fundamental studies of two-dimensional (2D) systems, with potential applications in optoelectronics and valleytronics due to their direct band gap in the monolayer limit and highly efficient light-matter coupling. A crystal lattice with broken inversion symmetry combined with strong spin-orbit interactions leads to a unique combination of the spin and valley degrees of freedom. In addition, the 2D character of the monolayers and weak dielectric screening from the environment yield a significant enhancement of the Coulomb interaction. The resulting formation of bound electron-hole pairs, or excitons, dominates the optical and spin properties of the material. Here we review recent progress in our understanding of the excitonic properties in monolayer TMDs and lay out future challenges. In particular, we focus on the consequences of the strong direct and exchange Coulomb interaction. Finally, the impact on valley polarization is described and the tuning of the energies and polarization observed in applied electric and magnetic fields is summarized.
I Introduction
Atomically thin transition metal dichalcogenides (TMDs) have unique physical properties which could be of value for a broad range of applications Wang et al. (2012b); Geim and Grigorieva (2013); Butler et al. (2013); Xia et al. (2014); Xu et al. (2014); Yu et al. (2015a); Castellanos-Gomez (2016); Mak and Shan (2016). The investigation of bulk and thin layers of TMDs can be traced back decades Frindt (1966); Wilson and Yoffe (1969); Bromley et al. (1972), but starting with the emergence of graphene Novoselov et al. (2004, 2005), many additional techniques for producing, characterizing, and manipulating atomically thin flakes were developed. This led to rapid progress in the study of monolayers of other van der Waals systems like the TMDs. Monolayer (ML) MoS is a typical member of the group VI TMD family of the form MX and was isolated in early studies, for example in Frindt (1966); Joensen et al. (1986); here M is the transition metal (Mo, W) and X the chalcogen (S, Se, Te), see Fig. 1a. However, only around 2010, were the TMDs confirmed to be direct band gap semiconductors in monolayer form, with up to 20% absorption per monolayer Mak et al. (2010); Splendiani et al. (2010). These discoveries launched intense research activity exploring the electronic properties and physics of single- and few-layer TMDs.
The transition metal chalcogenides are a group of about 60 materials, most of which are layered structures in their bulk form with weak interlayer van-der-Waals interactions Wilson and Yoffe (1969). By using micro-mechanical cleavage (commonly referred to as exfoliation or “scotch-tape technique”), one can obtain few-layer and monolayer crystals, typically a few to tens of micrometers in lateral dimension Castellanos-Gomez et al. (2014). There are currently vigorous efforts to grow large-area TMD monolayes by chemical vapor deposition (CVD) Zhan et al. (2012) and by van der Waals epitaxy in ultrahigh vacuum Zhang et al. (2014b); Xenogiannopoulou et al. (2015), but many of the intriguing properties reviewed here were identified in high-quality monolayers prepared from naturally occurring or synthesized bulk crystals by exfoliation.
The group VI semiconducting dichalcogenides with M=Mo, W and X=S, Se, Te share several important properties and their monolayers are stable enough under ambient conditions to perform optical and electrical characterization. With respect to the electronic structure, they are indirect band gap semiconductors in their bulk form Bromley et al. (1972). When thinned down to the limit of a single monolayer, the band gap becomes direct. The corresponding band extrema are located at the and points of the hexagonal Brillouin zone and give rise to interband transitions in the visible to near-infrared spectral range. The presence of a direct gap is particularly interesting for potential device applications because of the associated possibility for efficient light emission. Promising device prototypes have already been demonstrated with diverse functionality, including phototransitors based on monolayer MoS Lopez-Sanchez et al. (2013), sensors Perkins et al. (2013), logic circuits Radisavljevic et al. (2011b); Wang et al. (2012a), and light producing and harvesting devices Ross et al. (2014); Lopez-Sanchez et al. (2014); Cheng et al. (2014); Pospischil et al. (2014) among others. In addition to being direct, the optical transitions at the gap are also valley selective, as and circularly polarized light can induce optical transitions only in the and valleys in momentum space, respectively Cao et al. (2012); Xiao et al. (2012). In contrast to graphene, an additional interesting feature of these materials is the presence of strong spin-orbit interactions, which introduce spin splitting of several hundred meV in the valence band and of a few to tens of meV in the conduction bands Xiao et al. (2012); Kosmider et al. (2013); Molina-Sánchez et al. (2013), where the spin states in the inequivalent valleys and are linked by time reversal symmetry.
Since their emergence in 2010, the properties of these direct-gap monolayer materials with valley selective optical selections rules have been investigated in detail using both linear and nonlinear optical spectroscopic techniques. Following absorption of a photon with suitable energy, an electron is promoted to the conduction band, leaving behind a hole in the valence band. In TMD MLs the electrons and holes are tightly bound together as excitons by the attractive Coulomb interaction, with typical binding energies on the order of 0.5 eV Ramasubramaniam (2012); Cheiwchanchamnangij and Lambrecht (2012); Qiu et al. (2013); He et al. (2014); Chernikov et al. (2014); Wang et al. (2015a). As a result, the optical properties at both cryogenic and room temperatures are determined by strong exciton resonances. At the corresponding transition energies, the light-matter interaction is strongly enhanced in comparison to the transitions in the continuum of unbound electrons and holes. While the exciton radii are small, their properties remain within the Wannier-Mott regime and preserve analogies to the electronic structure of the hydrogen atom. For these materials with almost ideal 2D confinement and reduced dielectric screening from the environment, the Coulomb attraction between the hole and the electrons is one to two orders of magnitude stronger than in more traditional quasi-2D systems such as GaAs or GaN quantum wells used in today’s optoelectronic devices Chichibu et al. (1996). Nevertheless, despite important differences, the optical properties of ML TMDs show similarities to the exciton physics studied in detail in GaAs or ZnSe quantum wells Vinattieri et al. (1994); Maialle et al. (1993); Pelekanos et al. (1992); Bradford et al. (2001), for example, rendering these systems a useful benchmark for comparing certain optical properties. In addition, the Coulomb interaction in TMD MLs also determines the valley polarization dynamics of excitons and influences the order of optically bright versus dark states. Overall, the physics of these robust excitons are both of fundamental interest and of crucial importance for engineering and exploiting the properties of these materials in potential applications. These factors motivate this short review, which aims to present the current state of the art, as well as open questions that need to be addressed.
The basics of the band structure and the optical spectroscopy techniques used to reveal the exciton physics in these materials are introduced in the remainder of section I. Neutral exciton binding energies and their impact on light-matter coupling effects are discussed in section II. Exciton physics at higher densities and in the presence of free carriers are described in section III. Finally, the impact of the Coulomb interaction and external fields on valley physics is outlined in section IV, and open questions and challenges are addressed throughout the text to stimulate further work on the excitonic properties of atomically thin materials.
(a) Monolayer transition metal dichalcogenide crystal structure. The transition metal atoms appear in black, the chalcogen atom in yellow. (b) Typical band structure for MX
Figure 1: (a) Monolayer transition metal dichalcogenide crystal structure. The transition metal atoms appear in black, the chalcogen atom in yellow. (b) Typical band structure for MX monolayers calculated using Density Functional Theory (DFT) and showing the quasiparticle band gap at the points and the spin-orbit splitting in the valence band Ramasubramaniam (2012). (c) Schematic in a single-particle picture showing that the order of the conduction bands is opposite in MoX and WX monolayers Kormanyos et al. (2015). The contribution from Coulomb exchange effects that has to be added to calculate the separation between optically active (bright - spin allowed) and optically inactive (dark - spin forbidden) excitons is not shown Echeverry et al. (2016)
i.1 Basic band structure and optical selection rules
In addition to the strong Coulomb interaction in ML TMDs, the crystal symmetry and orbital character of the bands are responsible for the underlying spin-valley properties and optical selection rules. Bulk TMDs in the semiconducting 2H phase consist of X-M-X building blocks with weak van-der-Waals bonding between the layers and are characterized by the point symmetry group for stoichiometric compounds Wilson and Yoffe (1969); Ribeiro-Soares et al. (2014). In bulk TMDs, the indirect band gap corresponds to the transition between the valence band maximum (VBM) at the center of the hexagonal Brillouin zone ( point) and the conduction band minimum (CBM) situated nearly half way along the direction Zhao et al. (2013); Yun et al. (2012). The electronic states at the point contain contributions from the orbitals of the chalcogen atom and the orbitals of the transition metal. In contrast, the point conduction and valence band states at the corners of the hexagonal Brillouin zone are very strongly localized in the metal atom plane, as they are composed of transition metal atom states (VB) and states (CB) slightly mixed with the chalcogen orbitals Li et al. (2007); Zhu et al. (2011); Kormanyos et al. (2015). The spatial overlap between adjacent MX layers of the orbitals corresponding to the point (VB) and the midpoint along (CB) is considerable. As a result, in progressing from bulk crystals to few layer samples and eventually to monolayers, the indirect gap energy corresponding to the separation between and the midpoint along increases whereas the point CB and VB energies are nearly unaffected. Therefore, the semiconductor undergoes a crossover from an indirect to a direct gap, the later situated at the points (see Fig. 1b), and resulting in much stronger light emission for MLs as compared to bulk and bilayers Mak et al. (2010); Splendiani et al. (2010).
As compared with bulk samples, the TMD MLs are described by the lower symmetry point group. The symmetry elements include a horizontal reflection plane containing the metal atoms, a threefold rotation axis intersecting the horizontal plane in the center of the hexagon, as well as a mirror-rotation axis, three twofold rotation axes lying in the ML plane, and mirror reflection planes containing the axes Koster et al. (1963). The symmetry of the states at is still lower and characterized by the point group where only , axes and elements are present.
The spin-orbit interaction in TMDs is much stronger than in graphene, the most prominent 2D material. The origin of this distinction lies simply in the relatively heavy elements in the TMDs and the involvement of the transition metal orbitals. In monolayer TMDs, the spin splitting at the point in the valence band is around 200 meV (Mo-based) and 400 meV (W-based) Cheiwchanchamnangij and Lambrecht (2012); Xiao et al. (2012); Zhu et al. (2011); Zhang et al. (2014b); Miwa et al. (2015). This coupling gives rise to the two valence sub-bands and, accordingly, to two types of excitons, A and B, which involve holes from the upper and lower energy spin states, respectively. At the CBM, a smaller, but significant spin splitting is also expected due to partial compensation of the - and -states contributions Kormanyos et al. (2015); Kormányos et al. (2014); Liu et al. (2013); Kosmider et al. (2013). Interestingly, depending on the metal atom (Mo or W), the conduction band spin splitting has a different sign, as indicated in Fig. 1c,d. Hence, at the K point, the spin splitting of both the conduction and valence bands is fully lifted. This stands in marked contrast to typical GaAs or CdTe quantum-well structures where the CBM and VBM occur at the point and both the conduction and valence band states remain spin degenerate. The CB spin splitting results in an energy separation between the spin-allowed and optically active (bright) transitions and the spin-forbidden and optically inactive transitions (dark). The exact amplitude of the splitting for exciton states will also depend on the contribution from the electron-hole Coulomb exchange energy Qiu et al. (2015); Echeverry et al. (2016). The lowest energy transition in MoX is expected to be the bright exciton. In contrast for the WX materials, dark excitons are predicted to be lower energies, in agreement with temperature dependent studies Zhang et al. (2015d); Wang et al. (2015); Withers et al. (2015); Arora et al. (2015a), measurements in transverse magnetic fields Zhang et al. (2017); Molas et al. (2017) and experiments probing excitons with out-of-plane dipole moments Zhou et al. (2017); Wang et al. (2017).
The chiral optical selection rules for interband transitions in the valleys can be deduced from symmetry arguments: The orbital Bloch functions of the VB states at points are invariants, while the CB states transform like the states with angular momentum components of , i.e., according to the irreducible representations of the point group. Therefore, the optical selection rules for the interband transitions at valleys are chiral: the () circularly polarized light can only couple to the transition at () Yao et al. (2008); Xiao et al. (2012); Cao et al. (2012); Mak et al. (2012); Zeng et al. (2012); Sallen et al. (2012). This permits the optical generation and detection of the spin-valley polarization, rendering the TMD monolayers an ideal platform to study the electron valley degree of freedom in the context of valleytronics Behnia (2012); Rycerz et al. (2007); Xiao et al. (2007). In that context, it is important to emphasize, that for an electron to change valley, it has either to flip its spin (see Fig. 1c,d) or undergo an energetically unfavorable transition, especially for the valence states. As a result, optically generated electrons and holes are both valley and spin polarized, which is termed spin-valley locking. Therefore, following the excitation, the exciton emission in TMD MLs can be co-polarized with the laser if the valley polarization lifetime is longer or of the order of the recombination time. This behavior stands in contrast to that of III-V or II-VI quantum wells where excitation with the circularly polarized light usually results only in spin-polarization of the charge carriers Dyakonov (2008).
(a) Schematic real-space representation of the electron-hole pair bound in a Wannier-Mott exciton, showing the strong spatial correlation of the two constituents.
(b) Illustration of a typical exciton wavefunction calculated for monolayer MoS
Figure 2: (a) Schematic real-space representation of the electron-hole pair bound in a Wannier-Mott exciton, showing the strong spatial correlation of the two constituents. (b) Illustration of a typical exciton wavefunction calculated for monolayer MoS from Qiu et al. (2013). The modulus squared of the electron wavefunction is plotted in color scale for the hole position fixed at the origin. The inset shows the corresponding wavefunction in momentum space across the Brillouin zone, including both and exciton states. (c) Schematic representation of the exciton in reciprocal space, with the contributions of the electron and hole quasiparticles in the conduction (CB) and valence (VB) bands, respectively, indicated by the size of the circles. (d) Schematic illustration of the optical absorption of an ideal 2D semiconductor including the series of bright exciton transitions below the renormalized quasiparticle band gap. In addition, the Coulomb interaction leads to the enhancement of the continuum absorption in the energy range exceeding , the exciton binding energy. The inset shows the atom-like energy level scheme of the exciton states, designated by their principal quantum number , with the binding energy of the exciton ground state () denoted by below the free particle bandgap (FP)
i.2 Brief survey of monolayer characterization and optical spectroscopy techniques
Before describing the exciton physics in detail, we summarize some relevant practical information about ML TMD samples and their typical dielectric environment (substrates) and describe the basic techniques used to investigate the optical properties. Monolayer TMDs can be obtained by the mechanical exfoliation Frindt (1966); Novoselov et al. (2005), chemical exfoliation Joensen et al. (1986); Coleman et al. (2011); Smith et al. (2011), or CVD Liu et al. (2012); Najmaei et al. (2013); van der Zande et al. (2013) and van-der-Waals epitaxy growth Xenogiannopoulou et al. (2015); Zhang et al. (2014b); Liu et al. (2015a). Mechanical exfoliation is a convenient method to produce high-quality monolayer flakes from bulk crystals. Controlled growth of large-area monolayer material on different substrates using CVD or van-der-Waals epitaxy is a very active area of research and samples with high crystal quality have been already obtained.
Following isolation of a ML by micromechanical cleavage, the flakes can be deposited onto several kinds of substrates, SiO/Si, fused silica, sapphire, etc. SiO/Si substrates are widely used as (i) SiO can help to optimize the contrast for monolayers in optical microscopy during mechanical exfoliation Lien et al. (2015), and (ii) they are compatible with microelectronics standards Radisavljevic et al. (2011a). Encapsulation of ML flakes in hexagonal boron nitride, a layered material with a band gap in the deep UV Taniguchi and Watanabe (2007), has been shown to enhance the sharpness of the optical transitions in ML TMDs, particularly at low temperaturesChow et al. (2017); Jin et al. (2016); Manca et al. (2017); Cadiz et al. (2017); Wang et al. (2017); Zhou et al. (2017); Ajayi et al. (2017). This improvement is attributed to a reduction in detrimental surface and environmental effects on the samples. In addition to simple optical contrast (differential reflectivity) measurements, Raman spectroscopy is often used to determine the number of layers of TMDs flakes Korn et al. (2011); Tonndorf et al. (2013). The energy spacing between two high-frequency phonon modes A and E can be used to identify thickness of exfoliated molybdenum dichalcogenides MX when it is thinner than 5 layers Zhang et al. (2015b). As only the monolayer is a direct-gap semiconductor (with the possible exception of MoTe bilayers), analyzing the intensity and emission energy of photoluminescence (PL) signals allows identifying monolayer flakes. However, as the PL emission tends to favor low-energy states, including possible defect and impurity sites, care must be taken in applying this approach, especially at low temperatures. As an alternative, optical reflection and transmission spectroscopy can be used to directly probe exciton resonances Mak et al. (2010); Chernikov et al. (2014); Li et al. (2014a); Hill et al. (2015); Stier et al. (2016a); Arora et al. (2016).
Ii Coulomb bound electron-hole pairs
In this section we summarize the main properties of the exciton states in TMD monolayers and discuss their importance for the optical response in terms of their energies (exciton resonances) and oscillator strengths (optically bright versus dark states). We start with a brief introduction of the electron and hole quasi-particle states forming the excitons at the fundamental band gap. Then, we discuss the consequences of the Coulomb interaction, including direct and exchange contributions, followed by an overview of exciton binding energies and light-matter coupling in monolayer TMDs.
The promotion of an electron from the filled valence band to the empty conduction band leaves an empty electron state in the valence band. The description of such a many-body system can be reduced to the two-particle problem of the negatively charged condution electron interacting with a positively charged valence hole. The hole Bloch function is derived from the Bloch function of the empty electron state in the valence band by applying the time-reversal operator Bir and Pikus (1974). Here, represent the spin index, is the valley index, and is the wave vector for a conduction (c) or valence (v) state. As the time reversal operator changes the orbital part of the wavefunction to its complex conjugate and also flips the spin, the hole wavevector is opposite that of the empty electron state, i.e., , the hole valley (and spin) quantum numbers are opposite to those of the empty electron state as well: , . This transformation is natural to describe the formation of the electron-hole pair from the photon with a given polarization. In case of TMD monolayers, a photon with a wavevector projection to the plane of the layer creates an electron with a wavevector in the state in () valley, leaving a state with wavevector in the valence band unoccupied. As a result, the corresponding hole wavevector is , with the center of mass wavevector of the electron-hole pair equal to , as expected for the quasiparticle created by a photon. Accordingly, the hole valley index, , and spin, , are formally opposite to those of the conduction-band electron. In a similar manner, the absorption of photon results in the formation of the electron-hole pair with , Glazov et al. (2014, 2015).
ii.1 Neutral excitons: direct and exchange Coulomb interaction
To discuss the consequences of the Coulomb electron-hole interaction we separate the direct and exchange contributions, both further including long-range and short-range interactions, with certain analogies to traditional quasi-2D quantum well excitons Dyakonov (2008). The long-range part represents the Coulomb interaction acting at inter-particle distances in real space larger than the inter-atomic bond lengths (i.e., for small wavevectors in reciprocal space compared to the size of the Brillouin zone). In contrast, the short-range contribution originates from the overlap of the electron and hole wavefunctions at the scales on the order of the lattice constant ( nm in ML WSe), typically within one or several unit cells (i.e., large wavevectors in reciprocal space).
The direct Coulomb interaction describes the interaction of positive and negative charge distributions related to the electron and the hole. The long-range part of the direct interaction is only weakly sensitive to the particular form of the Bloch functions, i.e., valley and spin states; it rather depends on the dimensionality and dielectric properties of the system. It has an electrostatic origin and provides the dominant contribution to the exciton binding energy, , see Sec. II.2. The short-range part of the direct interaction stems from the Coulomb attraction of the electron and the hole within the same or neighboring unit cells. It is sensitive to the particular form of the Bloch functions and is, as a rule, considered together with the corresponding part of the exchange interaction. In a semi-classical picture, the long-range direct interaction thus corresponds to attractive Coulomb forces between opposite charges. As a consequence, an electron and a hole can form a bound state, the neutral exciton, with strongly correlated relative positions of the two constituents in real space, as schematically shown in Fig. 2a. The concept of correlated electron-hole motion is further illustrated in Fig. 2b, as reproduced from Ref. Qiu et al. (2013), where the modulus squared of the electron wavefunction relative to the position of the hole is presented for the case of the exciton ground state in monolayer MoS. In TMD MLs the resulting excitons are of the so-called Wannier-Mott or large-radius type, since the correlation between an electron and a hole extends over many lattice periods, similar to prototypical semiconductors such as GaAs and CuO. As a consequence, a description in terms of Frenkel excitons does not seem to be appropriate.
In the -space, the exciton wavefunction can be presented as Bir and Pikus (1974); Glazov et al. (2015)
where the correlation of the electron and hole in the exciton is described by a coherent, i.e., phase-locked, superposition of electron and hole states ( and ) around the respective extrema of the bands. Relative contributions of these states to the exciton are described by the expansion coefficients , which are usually determined from the effective two-particle Schrödinger or Bethe-Salpeter equation. Their values are schematically represented by the size of the circles in Fig. 2c, with the results of an explicit calculation shown in the inset of Fig. 2b for electrons in monolayer MoS. As a consequence of the large binding energy of excitons and their small Bohr radius in real-space (), the spread of the exciton in -space is relatively large. Therefore states far away from the points are included in the exciton wavefunction Wang et al. (2015).
As previously noted, the correlation represented in Eq. (1) is strictly related to the relative motion of the carriers. In contrast, the exciton center-of-mass can propagate freely in the plane of the material, in accordance with the Bloch theorem. The resulting exciton states are labeled by the center-of-mass wavevector , electron and hole spin and valley indices, , , , and the relative motion labels . The relative motion states can be labelled by the principal and magnetic quantum number as , with a natural number, and . To choose a notation similar to the hydrogen atom for states, we use here where and for , for etc; the precise symmetry of excitonic states is discussed below in Sec. II.3.
In particular, the principal quantum number is the primary determinant of the respective binding energy, with the resulting series of the ground state () and excited states () of Wannier-Mott excitons roughly resembling the physics of the hydrogen atom, as represented by the energy level scheme in Fig. 2d. The selection rules for optical transitions are determined by the symmetry of the excitonic wavefunctions, particularly, by the set of the spin and valley indices and and the magnetic quantum number . These quantities are of particular importance for the subdivision of the excitons into so-called bright states, or optically active, and dark states, i.e., forbidden in single-photon absorption process, as further discussed in the following sections.
In addition to the formation of excitons, a closely related consequence of the Coulomb interaction is the so-called self-energy contribution to the absolute energies of electron and hole quasiparticles. In a simplified picture, the self energy is related to the repulsive interaction between identical charges and leads to an overall increase of the quasiparticle band gap of a semiconductor, i.e., the energy necessary to create an unbound electron-hole pair in the continuum, referred to as ’free-particle (or quasiparticle) band gap’. In many semiconductors, including TMD monolayers, the self-energy contribution and the exciton binding energy are found to be almost equal, but of opposite sign. Thus, the two contributions tend to cancel one another out with respect to the absolute energies. Nevertheless, these interactions are of central importance as they determine the nature of the electronic excitations and the resulting properties of the material. To demonstrate the later, a schematic illustration of the optical absorption in an ideal 2D semiconductor is presented in Fig. 2d. The changes associated with the presence of strong Coulomb interactions are evident in Fig. 2d and result in the formation of the exciton resonances below the renormalized free-particle band gap. Importantly, the so-called optical band gap is then defined with respect to the lowest energy feature in absorption, i.e., the ground state of the exciton (). The optical gap thus differs from free-particle band gap, which corresponds, as previously introduced, to the onset of the continuum of unbound electrons and holes. This is formally equivalent to the bound exciton state. As a final point, the Coulomb interaction leads to a significant enhancement of the continuum absorption, which is predicted to extend many times of into the band Shinada and Sugano (1966); Haug and Koch (2009).
In comparison to the direct coupling part of the Coulomb interaction, the exchange contribution denotes the Coulomb interaction combined with the Pauli exclusion principle. The latter is a well-known consequence of the fact that both types of quasiparticles (electrons and holes) result from a sea of indistinguishable charged fermions occupying filled bands. In analogy to the direct coupling, the Coulomb exchange can be also separated into the long-range and the short-range parts. In particular, the long-range exchange interaction is of electrodynamic nature, in close analogy to the exchange interaction between an electron and a positron Berestetskii and Landau (1949). It can be thus interpreted as a result of interaction of exciton with the induced electromagnetic field in the process of virtual electron-hole recombination Bir and Pikus (1974); Denisov and Makarov (1973); Goupalov et al. (1998): The bright exciton can be considered as a microscopic dipole which produces an electric field, the back-action of this field on the exciton is the long-range electron-hole exchange interaction. On a formal level, it corresponds to the decomposition of the Coulomb interaction up to the dipole term and calculation of its matrix element on the antisymmetrized Bloch functions Andreani (1995). In TMD monolayers, the long-range exchange part, being much larger than for III-V or II-VI quantum wells, facilitates transitions between individual exciton states excited by the light of different helicity, thus mainly determining the spin-valley relaxation of the excitons, see Sect. IV. At short-range, Pauli exclusion causes the exchange interaction to depend strongly on the spin and valley states of the particles. It thus contributes to the total energies of the many-particle complexes, depending on the spin and valley states of the individual constituents and impacts the separation between optically dark and bright excitons Qiu et al. (2015); Echeverry et al. (2016). Among typical examples are the so-called triplet and singlet exciton states (i.e., the exciton fine structure) corresponding to parallel and anti-parallel alignment of the electron and hole spins, respectively. Lacking a classical analog, the exchange interaction is a more subtle contribution compared to the direct Coulomb interaction. As it is summarized in Table 1, the overall ratio of the direct and exchange contributions in TMDs is on the order of . Nevertheless, as it is discussed in the following sections, the consequences of exchange interaction are of central importance in understanding many-particle electronic excitations in TMD monolayers.
Coulomb term Impact
Direct Exciton binding energy
neutral excitons meV
charged excitons, biexcitons meV
Quasi-particle bandgap
self-energy meV
Exchange Exciton fine structure
long-range neutral exciton spin/valley depolarization
short-range splitting of dark and bright excitons
‘s of meV
Table 1: Impact of different types of electron-hole interaction on optical and polarization properties of excitons in TMD MLs.
One of the distinct properties of TMD monolayers is the unusually strong long-range Coulomb interaction and its unconventional distance dependence, leading to large exciton binding energies and band-gap renormalization effects. First, the decrease of dimensionality results in smaller effective electron and hole separations, particularly, along the ML normal direction, where the wavefunctions of the electron and hole occupy only several angstroms as compared to tens of nanometers in bulk semiconductors. In the simple hydrogenic model, this effect yields to a well-known four-fold increase in exciton binding energy in 2D compared to 3D Ivchenko (2005). Second, the effective masses in the valleys of the electron, , and hole, , in TMD MLs are relatively large, on the order of , with denoting the free electron mass Liu et al. (2013); Kormanyos et al. (2015). Hence, the reduced mass is also larger compared to prominent semiconductor counterparts such as GaAs (). Finally, in TMD MLs, the material is generally surrounded by air/vacuum (or dielectrics with relatively small permitivity). This reduces dielectric screening of the Coulomb interaction, since the electric field produced by the electron-hole pair is present largely outside of the ML itself. These features of the screening also result in a substantial deviation of the electron-hole interaction from the conventional distance dependence, as discussed in detail in Sec. II.2.2. Nevertheless, one can still estimate the impact of the dimensionality, the effective mass, and the reduced screening on the exciton binding energy within the framework of the 2D hydrogen-like model: , where is the Rydberg constant of 13.6 eV and is a typical effective dielectric constant of the system, roughly averaged from the contributions of the ML and the surroundings, is the free electron mass. Clearly, an increase in and decrease in result in the increase of the binding energy. As an example, this simple expression provides a binding energy on the order of 400 meV for realistic parameters of and .
As a final step in introducing the Coulomb terms and their role in the physics of TMD monolayers, we can formally identify the direct and exchange terms in the effective exciton Hamiltonian in -space in the two-band approximation:
where () are the electron (hole) single-particle Hamiltonians, stands for the matrix element of the direct Coulomb interaction between the electron and the hole, and is the matrix of the electron-hole exchange interaction. Here , are the electron and hole spin and valley indices, the dependence of the single-particle Hamiltonians on and is implicitly assumed. The last term comprises the short- and long-range contributions to the electron-hole exchange interaction. In real space, the second line of Eq. (2) corresponds to the standard exciton Hamiltonian in the effective mass approximation with a properly screened Coulomb interaction potential with the additional short-range part in the form with the constant .
ii.2 Exciton binding energy
ii.2.1 Exciton and continuum states in optics and transport
Presentation of commonly used experimental techniques to determine exciton binding energies in TMD monolayers. (a) Direct measurement of the free-particle bandgap energy using scanning tunneling spectroscopy of ML MoSe
Figure 3: Presentation of commonly used experimental techniques to determine exciton binding energies in TMD monolayers. (a) Direct measurement of the free-particle bandgap energy using scanning tunneling spectroscopy of ML MoSe on bilayer graphene(right panel) combined with a measurement of the absolute energy of the exciton ground state from photoluminescence (left panel) Ugeda et al. (2014). (b) Exciton states of ML WS on an SiO/Si substrate from reflectance contrast measurements Chernikov et al. (2014). The extracted transition energies of the states and the inferred band-gap position are presented in the right panel. (c) The linear absorption spectrum and the third-order susceptibility extracted from two-photon photoluminescence excitation spectra of ML WSe on fused silica substrate with exciton resonances of the ground and excited states He et al. (2014). (d) Exciton states as measured by second-harmonic spectroscopy of the A and B transitions in ML WSe Wang et al. (2015a). (e) One-photon photoluminescence excitation spectra and the degree of linear polarization of the luminescence of ML WSe with features of excited state of the A and the ground state of the B exciton Wang et al. (2015a).
To determine the exciton binding energy directly by experiment, one must identify both the absolute energy position of the exciton resonance and that of the free-particle bandgap to obtain . For this purpose, several distinct techniques have been successfully applied to TMD monolayers. The transition energy of the exciton ground state can be readily obtained using optical methods. Due to the strong light-matter coupling (cf. Sect. II.3) the excitons appear as pronounced resonances centered at photon energies corresponding to in optical absorption, reflectance, photoluminescence (PL), photoluminescence excitation (PLE), and photocurrent (PC) measurements. (In case of PL, room-temperature measurements are usually preferred to avoid potential contributions from defect states.) As an example, PL spectra of MoSe monolayer from Ref. Ugeda et al. (2014) are presented in the left panel of Fig. 3a, illustrating the strong emission from the ground-state exciton transition.
In contrast, the precise determination of the free-particle bandgap energy is more challenging problem – and a recurring one for semiconductors with large exciton binding energies where strong exciton resonances may mask the onset of a continuum of states. A direct approach is provided by the scanning tunneling spectroscopy (STS), which measures tunneling currents as a function of the bias voltage through a tip positioned in close proximity to the sample. Such measurements can probe the electronic density of states in the vicinity of the band gap, mapping energy levels of free electrons in both the valence and conduction bands. A typical STS spectrum for a MoSe monolayer supported by bilayer of graphene Ugeda et al. (2014) is presented in the right panel of Fig. 3a. As a function of tip voltage relative to the sample, a region of negligible tunneling current is observed. This arises from the band gap where no electronic states are accessible. The lower and upper onset of the tunnel current correspond to the highest occupied electron states at the valence band maximum (VBM) and the lowest unoccupied states at the conduction band minimum (CBM), respectively. The size of the bandgap is extracted from the difference between these onsets. As previously discussed, the exciton binding energy is then directly obtained from the difference between measured by STS and the exciton transition energy identified in the optical spectroscopy (compare right and left panel in Fig. 3a). The reported values, as summarized in the Table 2, range from 0.22 eV for MoS Zhang et al. (2014a) to 0.55 eV for MoSe Ugeda et al. (2014); further reports include Bradley et al. (2015); Chiu et al. (2015); Rigosi et al. (2016); Zhang et al. (2015a). The differences can be related to (i) the overall precision in extracting the onsets of the tunneling current and (ii) to the use of different conducting substrates required for the STS,i.e., the influence of different dielectric environments. In addition, the complexities of the band structure of the TMDs, with several valley extrema being relatively close in energy (see Sect. I.1) were shown to be of particular importance for the identification of the bands contributing to the initial rise in the tunneling current Zhang et al. (2015a).
As discussed in Sect. II.1 (see Fig. 2d), the onset of the free-particle continuum in the absorption spectra is merged with the series of excited exciton states (), precluding a direct extraction of the bandgap energy in most optical spectroscopy experiments. However, the identification of the series of excited exciton states permits an extrapolation to expected band gap or for the determination of the band gap through the application of suitable models. These methods are analogous to the measurements of the Rydberg (binding) energy of the hydrogen atom from spectral lines from transitions between different electron states. For an ideal 2D system the exciton energies evolve as in a hydrogenic series with Shinada and Sugano (1966); Klingshirn (2007). As clearly shown in reflection spectroscopy Chernikov et al. (2014); He et al. (2014), the exciton states in ML WSe and WS, for example, deviate from this simple dependence, see Fig. 3b. The main reason for the change in the spectrum is the nonlocal dielectric screening associated with the inhomogeneous dielectric environment of the TMD ML. This results in a screened Coulomb potential Cudazzo et al. (2011); Keldysh (1979); Rytova (1967) with a distance dependence that deviates strongly from the usual form, as detailed below, and also introduced in the context of carbon nanotubes Wang et al. (2005); Deslippe et al. (2009).
The energies of the excited states of the excitons can be directly obtained from linear absorption or reflectance spectroscopy. These states are usually identified by their decreasing spectral weight (oscillator strength) and relative energy separations with increasing photon energies. The oscillator strength for an ideal 2D system is given by Shinada and Sugano (1966). As an example, consider the reflectance contrast spectrum (i.e., the difference of the reflectity of the sample and substrate divided by the substrate reflectivity) from a WS monolayer Chernikov et al. (2014), measured at cryogenic temperatures. The spectra, presented after taking a derivative with respect to photon energy to highlight the features and presented in the left panel of Fig. 3b, clearly reveals signatures of the excited exciton states. The right panel summarizes the extracted peak energies and the estimated position of the band gap, as obtained directly from the extrapolation of the data and from model calculations. The corresponding exciton binding energy is 300 meV. Observations of the excited states in reflectance spectra were further reported for WSe He et al. (2014); Hanbicki et al. (2015); Arora et al. (2015a) and WS Hanbicki et al. (2015); Hill et al. (2015) monolayers, both at cryogenic and room temperature, as well as for MoSe Arora et al. (2015b). In addition, the relative energy separations between the ground and excited states of the excitons were found to decrease with thickness of multilayer samples Chernikov et al. (2014); Arora et al. (2015a), reflecting the expected decrease in the binding energy. Similar results were obtained by the related techniques of photoluminescence-excitation spectroscopy (PLE) Hill et al. (2015); Wang et al. (2015a) and photocurrent (PC) Klots et al. (2014) spectroscopy, which also allow identification of the ground and excited-state excitonic transitions. In both cases, this is achieved by tuning the photon energy of the excitation light source, while the luminescence intensity of a lower-lying emission feature (in PLE) or the current from a sample fabricated into a contacted device (in PC) are recorded. PLE is a multistep process: light is first absorbed, then energy relaxation occurs to the emissive exciton. As relaxation via phonons plays an important role in TMD MLs Molina-Sánchez and Wirtz (2011), the PLE spectra contain information on both absorption and relaxation pathways. From PLE measurements, excited states of the excitons were observed in WSe Wang et al. (2015a), WS Hill et al. (2015), MoSe Wang et al. (2015) and MoS Hill et al. (2015) monolayers. In PC, the onset of the bandgap absorption in MoS monolayers was reported in Ref. Klots et al. (2014).
One of the challenges for single-photon spectroscopy, i.e., techniques based on absorption by dipole-allowed transitions, is the dominant response from the exciton ground state, potentially obscuring weaker signatures from the excited states. As an alternative, excited states of the excitons for example for can be addressed via two-photon excitation in TMDs Berkelbach et al. (2015); Wang et al. (2015a); Srivastava and Imamoglu (2015); Ye et al. (2014), while the two-photon absorption by the dipole-allowed transitions for is strongly suppressed. Indeed, in the standard centrosymmetric model -shell excitons are allowed in one-photon processes (and forbidden in all processes involving even number of phonons), while -shell excitons are allowed in two-photon processes and forbidden in one-photon processes Mahan (1968). Note that the specific symmetry of the TMD ML can lead to a mixing between exciton and -states and activation of -states in single-photon transitions as well Glazov et al. (2017); Gong et al. (2017). The mixing is also proposed to originate from a small amount of disorder in the system Berghauser et al. (2017)
Here, a commonly used technique is two-photon photoluminescence excitation spectroscopy (2P-PLE). In this method, the (pulsed) excitation source is tuned to half the exciton transition energy and the resulting luminescence is recorded as a function of the photon excitation energy. Formally, this yields the spectrum of third-order nonlinear susceptibility responsible for two-photon absorption. The result of such a 2P-PLE measurement of a WSe monolayer He et al. (2014) is presented in Fig. 3c. In contrast to the one-photon absorption, the two-photon response is dominated by resonances from the excited exciton states with ; signatures of the ground-state excitons (e.g., B-exciton 1 transition lying at a similar energy) are strongly suppressed. Further reports of the exciton states in 2D TMDs from 2P-PLE include studies of WS Ye et al. (2014); Zhu et al. (2015), WSe Wang et al. (2015a) and MoSe monolayers Wang et al. (2015). Like the analysis of the one-photon spectra, the band gap is extracted either by comparison of the ground and excited state energies with appropriate theoretical models Ye et al. (2014); Wang et al. (2015a) or from the estimated onset of the continuum absorption (free-particle gap) He et al. (2014); Zhu et al. (2015). In addition to the PLE experiments, both the ground and excited states can be also observed directly in second-harmonic generation spectra, as illustrated in Fig. 3d for WSe monolayers Wang et al. (2015a). The second-harmonic generation takes place because, due to the lack of an inversion center in TMD MLs, the -shell and -shell excitons become active both in single- and two-photon processes. This allows for excitation of the given exciton state by two photons and its coherent emission. The microscopic analysis of the selection rules and relative contributions of excitonic states in second-harmonic emission is presented in Ref. Glazov et al. (2017), see also Trolle et al. (2014). Overall, the main challenge with optical techniques is the correct identification of observed features, made more challenging by a the possible mixture of and excitons, as well as coupling to phonon modes Jin et al. (2016); Chow et al. (2017). Topics of current discussion in analyzing different spectra include possible contributions from phonon-assisted absorption, higher-lying states in the band structure, defects, and interference effects.
Further information on exciton states and their energy can be obtained from measurements of intra-exciton transitions in the mid-IR spectral range after optical injection of finite exciton densities Poellmann et al. (2015); Cha et al. (2016) and measurements of the exciton Bohr radii from diamagnetic shifts at high magnetic fields Stier et al. (2016a, b). A summary of the exciton binding energies and the corresponding band-gap energies is presented in Table 2. While the extracted absolute values vary, largely due to the outlined challenges of precisely determining the absolute position of the band gap, the following observations are compatible with the majority of the literature:
(1) Excitons are tightly bound in TMD monolayers due to the quantum confinement and low dielectric screening, with binding energies on the order of several 100’s of meV. The corresponding ground-state Bohr radii are on the order of 1 nm and the wavefunction extends over several lattice constants (for WSe nm), rendering the Wannier-Mott exciton model applicable.
(2) The absolute position of the free-particle bandgap renormalizes by an amount similar to the exciton binding energy in comparison to the respective transition in bulk. Thus, we observe only to a modest absolute shift of the exciton energy in optical spectra when comparing the bulk and monolayers.
(3) The Coulomb interaction deviates from the law due to the spatially inhomogeneous dielectric screening environment (see Sec. II.2.2). This changed distance dependence of the interaction strongly affects the energy spacing of the exciton states, leading to pronounced deviations from the 2D hydrogen model.
Material Sample (Temp.) Exp. technique Bind. energy [eV] Bandgap Reference
WSe Exf. on SiO/Si (RT) Refl., 2P-PLE 0.37 2.02 He et al.,2014
CVD on HOPG (79 K) STS, PL 0.5 2.20.1 Zhang et al.,2015a
Exf. on SiO/Si (4 K) PLE, 2P-PLE, SHG 0.60.2 2.350.2 Wang et al.,2015a
Exf. on SiO/Si (4, 300 K) Refl. 0.887 2.63 Hanbicki et al.,2015
CVD on HOPG (77 K) STS, PL 0.4* 2.080.1 Chiu et al.,2015
WS Exf. on SiO/Si (5 K) Refl. 0.320.04 2.410.04 Chernikov et al.,2014
Exf. on fused silica (10 K) 2P-PLE 0.7 2.7 Ye et al.,2014
Exf. on SiO/Si (RT) 2P-PLE 0.710.01 2.73 Zhu et al.,2014a
Exf. on SiO/Si (4, 300 K) Refl. 0.929 3.01 Hanbicki et al.,2015
Exf. on fused silica (RT) Refl., PLE 0.320.05 2.330.05 Hill et al.,2015
Exf. on fused silica (RT) STS, Refl. 0.360.06 2.380.06 Rigosi et al.,2016
MoSe MBE on 2L graphene/SiC (5 K) STS, PL 0.55 2.18 Ugeda et al.,2014
CVD on HOPG (79 K) STS, PL 0.5 2.150.06 Zhang et al.,2015a
MoS CVD on HOPG (77 K) STS, PL 0.5 2.150.06 Zhang et al.,2014a
Exf., suspended (77 K) PC 0.57 2.5 Klots et al.,2014
Exf. on hBN/fused silica (RT) PLE 0.440.08** 2.470.08** Hill et al.,2015
CVD on HOPG (77 K) STS, PL 0.3* 2.150.1 Chiu et al.,2015
Exf. on fused silica (RT) STS, Refl. 0.310.04 2.170.1 Rigosi et al.,2016
* extracted from the PL data and STS results in Ref. Chiu et al., 2015
** attributed to the B-exciton transition by the authors of Ref. Hill et al., 2015
Table 2: Summary of experimentally determined exciton binding energies and free particle bandgaps in monolayer TMDs from the literature. All values correspond to the A-exciton transition, unless noted otherwise. The numerical formats correspond to the presentations of the data in the respective reports.
ii.2.2 Effective Coulomb potential and the role of the environment
Calculations of excitonic states and binding energies in TMD MLs have been performed by many approaches, including effective mass methods, atomistic tight-binding and density functional theory approaches with various levels of sophistication, see, e.g., Cheiwchanchamnangij and Lambrecht (2012); Komsa and Krasheninnikov (2012); Ramasubramaniam (2012); Qiu et al. (2013); Shi et al. (2013a); Molina-Sánchez et al. (2013); Berghäuser and Malic (2014); Wu et al. (2015a); Trushin et al. (2016); Stroucken and Koch (2015). A simple and illustrative approach to calculate energies of exciton states is provided by the effective mass method. Here, in the Hamiltonian (2), the single-particle kinetic energies and are replaced by the operators and , respectively, with , being the electron and hole in-plane position vectors. Most importantly, the electric field between individual charges in the ML permeates both the material layer and the surroundings outside the monolayer. As a consequence, both the strength and the form of the effective Coulomb interaction between the electron and hole in the exciton are strongly modified by the dielectric properties of the environment Stier et al. (2016b); Raja et al. (2017). In principle, one recovers a 2D hydrogen-like problem with an adjusted effective potential by taking into account the geometry of the system and the dielectric surroundings Keldysh (1979); Cudazzo et al. (2011); Berkelbach et al. (2013); Chernikov et al. (2014); Ganchev et al. (2015).
Typically, the combined system “vacuum + TMD monolayer + substrate” is considered, reproducing the main features of the most common experimentally studied samples. In the effective medium approximation, the dielectric constant of the TMD ML generally far exceeds the dielectric constants of the surroundings, i.e., of the substrate and of the vacuum. As a result, the effective interaction potential takes the form of ( is the relative electron-hole coordinate) only at large distances between the particles where the electrical field resides outside the TMD ML itself. At the intermediate and small distances, the dependence is Cudazzo et al. (2011). The resulting overall form of the effective potential, following Keldysh (1979), is approximated by
where and are the Struve and Neumann functions, is the effective screening length. The latter can either be calculated from ab-intio Berkelbach et al. (2013) or considered as a phenomenological parameter of the theory Chernikov et al. (2014) and typically ranges from roughly 30 Å to 80 Å. Then, within the effective mass approximation, the two-particle Schrödinger equation with the effective potential in the form of Eq. (3) can be solved, e.g., variationally and numerically or, in some cases analytically Ganchev et al. (2015). The result is a series of exciton states described by the envelope functions of the relative motion . Overall, the model potential in the form (3) describes the deviations from the ideal 2D hydrogenic series observed in the experiments and can be used as an input in more sophisticated calculations of excitonic spectra Steinhoff et al. (2014); Berghäuser and Malic (2014). This simple model potential also agrees well with the predictions from high-level ab-intio calculations using Bethe-Salpeter Equations approach Qiu et al. (2013); Ye et al. (2014); Ugeda et al. (2014); Wang et al. (2015a).
Although a reasonably adequate description of the experimental data for the exciton binding energies is already provided by the relatively simple effective mass model with an effective potential in the form of Eq. (3), there are seveal issues debated in the literature that require further studies:
Since the exciton binding energy typically exceeds phonon energies both in TMD ML Zhang et al. (2015c) and in typical substrates, static screening is not necessarily well justified Stier et al. (2016b). However, the frequency range at which the screening constant should be evaluated and whether high-energy optical phonons play a role merits further investigation.
Depending on the material and the substrate, the binding energy can be as large as of the band gap, see Tab. 2. The excitons have also a relatively small radii leading to a sizable extension of the wavefunction in reciprocal space. Therefore, the effective mass model may not provide quantitatively accurate results and the effects of the band non-parabolicity and the spin-orbit coupling should be included.
In addition, the trigonal symmetry of the TMD MLs results in the mixing of the excitonic states with different particularly, in the mixing of the - and -shell excitons (i.e., the states with and ) as demonstrated theoretically in Ref. Glazov et al. (2017); Gong et al. (2017). Further studies of exciton mixing within atomistic approaches such as DFT and tight-binding models to determine quantitatively the strength of this effect are required, in addition to more detailed 1 and 2-photon excitation experiments.
Also the ordering of and resonances remains an open issue in light of recent theoretical predictions of the state-mixing and the experimental challenges are to precisely determine the splitting in TMD MLs and the eventual splitting of the 2 states Srivastava and Imamoglu (2015); Wu et al. (2015a).
On the experimental side, controlling the influence of the dielectric screening of the surroundings is of particular importance. Recent works on this topic include observations of exciton states in different solutions Lin et al. (2014); Ye et al. (2014), measurements of changes in the exciton Bohr radii from diamagnetic shifts on different substrates Stier et al. (2016b) and demonstration of the bandgap and exciton energy renormalization due to external dielectric screening Raja et al. (2017).
Further questions arise with respect to the uniformity of the dielectric environment, with possible variations of the sample-substrate distance and the non-uniform coverage by adsorbates, also considering the recently predicted nanometer spatial sensitivity of the screening effect Rösner et al. (2016). Here experimental comparisons between different capped and uncapped samples will be helpful as well to study, for example, the influence of the substrate morphology on the exciton states.
ii.3 Light-matter coupling via excitons
ii.3.1 Dark and bright excitons
When generated by resonant photon absorption under normal incidence, excitons are optically bright (see also discussion in Sec. II.3.3). But subsequent scattering events with other excitons, electrons, or phonons, and defects can induce spin flips and considerable changes in momentum. Alternatively, in case of a more complex generation process, a variety of exciton states can form. As a result of all the above, an exciton may not necessarily be able to recombine radiatively, for instance if the optical transition is now spin forbidden. Such an exciton is described as optically dark. Another way to generate dark excitons is if a hole and an electron, for instance injected electrically, come together to form an exciton with total angular momentum or large center-of-mass momentum . So whether or not excitons can directly interact with light by the absorption or emission of single photons depends on the center of mass wavevector , the relative motion wavefunction, the valley, (), and spin, (), states of the electron and hole.
In TMD MLs, exciton-photon coupling is governed by chiral optical selection rules: For normally incident light the direct interband transitions at the points of the Brillouin zone are active for light polarization, Fig. 1c,d Yao et al. (2008); Xiao et al. (2012); Zeng et al. (2012); Cao et al. (2012); Mak et al. (2012); Sallen et al. (2012). Considering interband transitions, the spin and valley states of the electron are conserved and the electron and hole are generated within the same unit cell. As a result, the -shell excitonic states (i.e., those with , such as , etc.) where the envelope function , with , are active in polarization and the states with , are active in polarization. The exciton states with (occupied electron states in CB and unoccupied electron states in VB) or (electron and unoccupied state have opposite spins) are dark Glazov et al. (2015). A schematic illustration of bright and dark electron transitions corresponding to the respective exciton states is presented in Fig. 5a. While the above rules describe the A-exciton series, they are essentially the same for the B-exciton states when the opposite signs of the corresponding spin indices are considered. Also, an admixing of the -character to the -like states is theoretically predicted due to the exchange interaction Glazov et al. (2017); Gong et al. (2017) and disorder Berghauser et al. (2017).
(a) Brightening of the dark exciton transition observed in ML WSe
Figure 4: (a) Brightening of the dark exciton transition observed in ML WSe by photoluminescence experiments with in an in-plane magnetic field Zhang et al. (2017) (b) Schematic of the brightening of the dark exciton transitions involving the spin states in the conduction band 1 and 2. For simplicity we do not show the Coulomb exchange energy term that also contributes to the dark-bright splitting Echeverry et al. (2016). (c) and (d) Using in-plane optical excitation and detection, the dark (X) and bright (X) exciton can be distinguished by polarization dependent measurements, adapted from Wang et al. (2017). The WSe ML is encapsulated in hBN for improved optical quality.
For neutral excitons, the order and energy difference between bright and dark excitons is given by the sign and amplitude of the spin splitting in the conduction band and the short-range Coulomb exchange interaction, similar to the situation in quantum dots Crooker et al. (2003). For WS and WSe, the electron spin orientations in the upper valence band and in the lower conduction band are opposite, while in MoS and MoSe, the spins are parallel, as shown in Fig. 1c,d.
As a result, the lowest lying CB to VB transition is spin forbidden (dark) in WS and WSe, the spin allowed transition is at higher energy as indicated in Fig. 4. One experimental approach to measure the energy splitting between the dark and bright state is to apply a strong in-plane magnetic field. This leads to an admixture of bright and dark states which allows detection the dark transitions that gain in intensity as the magnetic field increases, see Fig. 4a,b Zhang et al. (2017); Molas et al. (2017). For ML WSe, the dark excitons lie about 40 meV below the bright transitions. In addition to spin conservation, there is another important differences between the so called bright and dark excitons: Symmetry analysis Slobodeniuk and Basko (2016a); Wang et al. (2017); Zhou et al. (2017) shows that the spin-forbidden dark excitons are optically allowed with a dipole out of the monolayer plane (-mode), whereas the spin-allowed bright excitons have their dipole in the monolayer plane . Therefore optical excitation and detection in the plane of the monolayer (i.e., in the limit of grazing incidence) allows a more efficient detection of these in principle spin-forbidden transitions than experiments with excitation/detection normal to the monolayer, as indicated in Fig. 4c,d. This -mode exciton transition can be clearly identified by its polarization perpendicular to the surface using a linear polarizer. Another approach is to couple the -mode to surface plasmons for polarization selectivity as in Ref. Zhou et al. (2017). Using these techniques, the same dark-bright exciton splitting as reported in the magnetic-field dependent experiments, namely 40-50 meV, could be extracted for ML WSe. The origin of the -mode transition, which remains very weak compared to the spin-allowed exciton, lies in mixing of bands with different spin configuration, i.e., that the valence and conduction bands are not perfectly polarized spin polarized (for perfect spin polarization of the bands, the -mode transition would not be detectable). Of similar origin as the spin-forbidden intra-valley dark excitons are the spin-allowed inter-valley states, where the direct transition of the electron from the valence to conduction band is forbidden due to the momentum conservation. Examples are inter-valley -, -, - and - excitons, where , and refer to the particular points in the Brillouin zone. one.
(a) A schematic overview of typical allowed and forbidden electronic transitions for the respective bright and dark exciton states.
The underlying band structure is simplified for clarity, including only the upper valence band at
Figure 5: (a) A schematic overview of typical allowed and forbidden electronic transitions for the respective bright and dark exciton states. The underlying band structure is simplified for clarity, including only the upper valence band at and the high-symmetry points and in the conduction band. The order of the spin states in the conduction band, corresponds to W-based TMD MLs, see Refs. Liu et al. (2013); Glazov et al. (2014); Kormanyos et al. (2015) for details. (b) Schematic illustration of the exciton ground-state dispersion in the two-particle representation. The light-cone for bright excitons is marked by the photon-dispersion, , and the excitons outside of the cone are essentially dark, where is the speed of light.
ii.3.2 Radiative lifetime
An additional constraint on the optical activity of the excitons is imposed by the center-of-mass wavevector conservation , which should be equal to the projection of the photon wavevector on the TMD ML plane. The range of the wavevectors meeting this requirement obeys, for a ML in vacuum, , where is the photon frequency corresponding to the exciton resonance (for exciton ). Bright excitons within this so-called “light cone” couple directly to light, i.e., can be either be created by the absorption of a photon or spontaneously decay through photon emission, while excitons with are optically inactive.
In general, the radiative decay rate of the bright excitons within the light cone, which also determines the overall strength of optical absorption (i.e., total area of the resonance), is proportional to the probability of finding the electron and the hole within the same unit cell, i.e., to , where is the effective Bohr radius. The strong Coulomb interaction in TMD MLs, leading to the large binding energies of the excitons, also results in relatively small exciton Bohr radii, nm for the state, as discussed above. Estimates of for the exciton within a simple two-band model Glazov et al. (2014, 2015) then yield meV. This corresponds to a radiative decay time ps, in good agreement with experimental observations Poellmann et al. (2015); Moody et al. (2015); Palummo et al. (2015); Jakubczyk et al. (2016); Robert et al. (2016). Hence, the radiative decay times of excitons in TMD MLs are about two orders of magnitude shorter as compared, e.g., with the excitons in GaAs-based quantum wells Deveaud et al. (1991). In addition, the radiative broadening on the order of 1 meV imposes a lower limit on the total linewidth of the bright exciton resonance Cadiz et al. (2017); Moody et al. (2015); Jakubczyk et al. (2016); Dey et al. (2016). This simple analysis is further corroborated by first principle calculations, which predict exciton intrinsic lifetimes as short as hundreds of fs Palummo et al. (2015); Wang et al. (2016b).
Importantly, the presence of the radiative cone determines the overall effective decay rate of an exciton population at finite temperatures through the radiative recombination channel. Which fraction of excitons is within and which fraction is outside the light cone depends on temperature Andreani et al. (1991). The effective radiative decay for thermalized populations is obtained from the radiative decay rate within the light cone , weighted by the fraction of the excitons inside the cone. In case of strictly 2D systems with a parabolic exciton dispersion, above very low temperatures, this fraction decreases linearly with the temperature Andreani et al. (1991). For MoS, the effective radiative recombination time is calculated to be on the order of several 10’s of ps at cryogenic temperatures and to exceed a nanosecond at room temperature Wang et al. (2016b). While radiative recombination is forbidden outside the light cone if wavevector conservation holds, this can be partially relaxed due to the presence of disorder caused, e.g., by impurities or defects, since momentum conservation is relaxed in disordered systems Citrin (1993); Vinattieri et al. (1994).
The effective radiative lifetime is, of course, also affected by the presence of the spin-forbidden intra-valley and inter-valley dark states considering thermal distribution of excitons between these states. It further depends on the relaxation rate of the dark excitons of the reservoir towards low-momentum states Slobodeniuk and Basko (2016b), potentially leading to the additional depletion of the excitons within the radiative cone Kira and Koch (2005). When the excitons are predominantly created within the radiative cone through resonant or near-resonant excitation, an initial ultra-fast decay has been indeed observed Poellmann et al. (2015); Robert et al. (2016) and attributed to the intrinsic radiative recombination time of the bright states. The excitons were shown to thermalize subsequently and to experience slower decay at later times. At room temperature, effective radiative exciton lifetimes as long as 20 ns have been measured in super-acid treated samples Amani et al. (2015) and estimated to be on the order of 100 ns from combined time-resolved PL and quantum yield measurements Jin et al. (2017).
Finally we note, that the overall decay of the exciton population is usually governed by the complex interplay of radiative and non-radiative channels. It is thus affected by the presence of defects and disorder, Auger-type exciton-exciton annihilation at elevated densities Kumar et al. (2014a); Mouri et al. (2014); Sun et al. (2014); Yu et al. (2016), and through the formation of exciton complexes such as biexcitons You et al. (2015); Sie et al. (2015a) and trions Mak et al. (2013); Ross et al. (2013). Finally, radiative recombination itself depends on the optical environment, i.e., the effective density of the photon modes available as final states for the recombination of the excitons. The effective strength of the light-matter interaction is thus modified by the optical properties of the surroundings (e.g., refractive index of the substrate) and can be tuned externally. The integration of the TMD MLs in optical cavities highlights this situation. Indeed, the strong-coupling regime has been demonstrated, where excitons and photons mix to create hybrid quasiparticles, exciton polaritons Liu et al. (2015b); Dufferwiel et al. (2015); Vasilevskiy et al. (2015); Lundt et al. (2016); Sidler et al. (2016); Flatten et al. (2016). The discussion above highlights the complex challenges for interpreting for example photoluminescence emission times measured in experiments in terms of intrinsic decay rates, effective radiative lifetimes and non-radiative channels, for example.
ii.3.3 Exciton formation
In most of the photoluminescence spectroscopy experiments performed on TMDs monolayers, the excitation laser energy is larger than the exciton ground state energy. This means that in addition to exciton formation dynamics also energy relaxation has to be taken into account. Two exciton formation processes are usually considered in semiconductors: (i) direct hot exciton photogeneration, with the simultaneous emission of phonons, in which the constitutive electron-hole pair is geminate Bonnot et al. (1974); or (ii) bimolecular exciton formation which consists of direct binding of electrons and holes Barrau et al. (1973). In 2D semiconductors based on GaAs quantum wells the bimolecular formation process plays an important role Amand et al. (1994); Piermarocchi et al. (1997); Szczytko et al. (2004). When the excitation energy lies below the free particle bandgap in TMD monolayers, the exciton formation process can only be geminate (neglecting Auger like and two-photon absorption effects). Note that this process, which involves a simultaneous emission of phonons, can yield the formation of either intra-valley or inter-valley excitons. When the excitation energy is strongly non-resonant, i.e. above the free particle bandgap, the PL dynamics is very similar compared to the quasi-resonant excitation conditions in MoS or WSe monolayers Korn et al. (2011); Wang et al. (2014); Zhang et al. (2015d). The PL rise time is still very short and no signature of bimolecular formation and energy relaxation of hot excitons can be evidenced, in contrast to III-V or II-VI quantum wells. Indeed, recent reports indicate ultra-fast exciton formation on sub-ps timescales after non-resonant excitation Cha et al. (2016); Ceballos et al. (2016); Steinleitner et al. (2017). While further studies are required, at this stage one can already speculate that the strong-exciton phonon coupling in TMD monolayers seems to yield efficient exciton formation process for a wide range of excitation conditions. We also note, that alternative processes such as multi-exciton generation, i.e., the reverse of Auger-type annihilation, might become important for sufficiently high excess energies.
Iii Excitons at finite carrier densities
The discussion in the previous Section II deals with the fundamental properties of the excitons in TMD MLs in the low-density regime. However, the presence of photoexcited carriers, either in the form of Coulomb-bound or free charges, can significantly affect the properties of the excitonic states, as is the case for traditional 2D systems with translational symmetry, such as quantum wells Haug and Koch (2009).
iii.1 The intermediate and high density regime
We distinguish two partially overlapping regimes of intermediate and high density conditions. These can be defined as follows: In the intermediate density regime the excitons can still be considered as bound electron-hole pairs, but with properties considerably modified compared with the low-density limit. In the high density regime, beyond the so-called Mott transition, excitons are no longer bound states; the electrons and holes are more appropriately described as a dense Coulomb-correlated gas. Under such conditions, the conductivity of the photoexcited material behaves less like the insulating semiconductor with neutral excitons and more like a metal with many free carriers, whence the description of this effect as a photoinduced Mott transition. The transition between two regimes is controlled by the ratio of the average carrier-carrier (or, alternatively, exciton-exciton) separation to the exciton Bohr radius at low density: For the density of carriers (or excitons) can be considered as high. Due to the small Bohr radius of about 1 nm in TMD MLs, the intermediate and high density regimes are reached at significantly higher carrier densities compared to systems with weaker Coulomb interactions, such as III-V or II-VI semiconductor quantum wells. With respect to absolute numbers, the intermediate case with inter-particle distances about 100 to , broadly covers the density range between and several cm. The high density case then corresponds to separations on the order of a few Bohr radii or less and is considered to apply for carrier densities of a few to cm or higher. In particular, the electron-hole pair-density of , often used as a rough upper estimate for the Mott transition Klingshirn (2007), yields cm for TMD MLs.
The main phenomena occurring at elevated carrier densities can be briefly summarized as follows:
First, there are efficient scattering events. Elastic and inelastic scattering of excitons with free carriers or excitons leads to relaxation of the exciton phase, energy, momentum and spin and thus to spectral broadening of the exciton resonances Wang et al. (1993); Shi et al. (2013b); Moody et al. (2015); Chernikov et al. (2015b); Dey et al. (2016). In addition, through inelastic scattering with free charge carriers, an exciton can capture an additional charge and form a bound three-particle state at intermediate densities, the so-called trion states Stébé and Ainane (1989); Kheng et al. (1993); Mak et al. (2013); Ross et al. (2013). Similarly, at intermediate exciton densities, inelastic scattering between excitons can result in a bound two-exciton state, the biexciton state Miller et al. (1982); You et al. (2015); Sie et al. (2015a); Plechinger et al. (2015); Shang et al. (2015), resembling the hydrogen molecule.
Charged excitons (trions) and biexcitons were predicted for bulk semiconductors Lampert (1958) by analogy with molecules and ions. While they naturally appear as a result of Coulomb interactions between three or four charge carriers, we also note that in real systems with finite carrier densities, the correlations between, e.g., excitons/trions and the Fermi sea of electrons (or holes) may be of importance Suris (2003); Sidler et al. (2016); Efimkin and MacDonald (2017). Furthermore, excitons formed from two fermions can be considered as composite bosons at least for not too high carrier densities. Interestingly, excitons are expected to demonstrate at low to intermediate densities collective phenomena such as Bose-Einstein condensation (strictly speaking, quasi-condensation in two-dimensions) and superfluidity Moskalenko (1962); Keldysh and Kozlov (1968); Fogler et al. (2014). First signatures of boson scattering of excitons in monolayer WSe have been reported Manca et al. (2017). Additionally, exciton-exciton scattering can also lead to an Auger-like process: the non-radiative recombination of one exciton and dissociation of the other into an unbound electron and hole, leading to exciton-exciton annihilation, as already mentioned in Sec. II.3.2 Kumar et al. (2014a); Mouri et al. (2014); Sun et al. (2014); Yu et al. (2016); Robert et al. (2016).
Second, finite quasiparticle densities generally lead to what can be broadly called dynamic screening of the Coulomb interaction Haug and Koch (2009); Klingshirn (2007). In analogy to the behavior of quasi-free carriers in metals, it is related to both direct and exchange contributions and typically decreases the effective strength of the Coulomb interaction. As a result of the decreasing electron-hole attraction, the exciton binding energy is reduced; the average electron-hole separation increases, thus also leading to lower oscillator strengths for excitonic transitons i.e. a to weaker light-matter coupling. In addition, the photoinduced screening induces renormalization of the free particle band gap to lower energies. In many cases, including TMD MLs, the decrease of the exciton ground-state () binding energy and the red shift of the bandgap are of similar magnitude, at least in the intermediate-density regime. Hence, while the absolute shifts of the resonance, i.e., of the optical band gap (see Fig. 2), can be rather small, on the order of several 10’s of meV, the underlying changes in the nature of excitations (binding energies, free-particle band gap) are about an order of magnitude larger Steinhoff et al. (2014); Chernikov et al. (2015b, a); Ulstrup et al. (2016).
Third, the presence of free carriers decreases the available phase space for the electron-hole complexes due to the Pauli blocking Haug and Koch (2009). This also results in a decrease of trion and exciton binding energies and the oscillator strengths. In addition, at sufficiently high densities of both electrons and holes, it results in population inversion, i.e., more electrons populating the conduction rather than valence band over a certain range of energy. As in quantum wells Haug and Koch (2009), this regime is expected to roughly coincide with the Mott transition discussed above. Moreover, in the high-density regime, bound electron-hole states cannot be formed and thus the optical spectra are no longer dominated by the exciton resonance. Population inversion then leads to stimulated emission processes and negative absorption for the corresponding transitions Haug and Koch (2009); Chernikov et al. (2015a). In the absence of competing scattering and absorption channels in the respective energy range, this would give rise to amplification of radiation and allow in principle for the use of the material as an active medium in lasing applications; see Refs. Wu et al. (2015b); Ye et al. (2015); Salehzadeh et al. (2015) for reports of lasing in TMD MLs. Many issues in the high-density regime still remains to be explored, both experimentally and theoretically, the prepondence of literature on TMD monolayers having addressed the behavior of the materials at intermediate densities Korn et al. (2011); Wang et al. (2013); Lagarde et al. (2014); Singh et al. (2014); Mai et al. (2014a); Kumar et al. (2014b); Zhu et al. (2014b); Poellmann et al. (2015); Schmidt et al. (2016b). We also note that an accurate, quantitative treatment of many-body physics of strongly interacting systems is a very challenging problem. Promising steps in that direction are presented, for example in Steinhoff et al. (2014, 2015); Schmidt et al. (2016b); Selig et al. (2016). The relative simplicity of the electronic structure of TMD monolayers, their tunability under external conditions and dielectric media, and experimental accessibility and their strong many-body effects make these systems promising test cases for advancing our understanding of fundamental issues in many-body interactions at high densities.
iii.2 Electric charge control
(a) Absorbance and photoluminescence experiments exhibiting signatures of neutral (A) and charged (A
Figure 6: (a) Absorbance and photoluminescence experiments exhibiting signatures of neutral (A) and charged (A) excitons in a charge tunable MoS monolayer Mak et al. (2013). (b) Color contour plot of PL from an electrically gated MoSe monolayer that can be tuned to show emission from positively charged (X) to negatively charged (X) trion species Ross et al. (2014). (c) Contour plot of the first derivative of the differential reflectivity in a charge tunable WSe monolayer. The - and -type regimes are manifested by the presence of X and X transitions. Around charge neutrality, the neutral exciton X and an excited state X are visible Courtade et al. (2017).
While neutral excitons tend to dominate the optical properties of ML TMDs, more complex exciton species also play an important role. Particularly prevalent are charged excitons or trions, the species formed when an exciton can bind another electron (or hole) to form a negatively (or positively) charged three-particle state. Since unintentional doping in TMD layers is often -type Radisavljevic et al. (2011a); Ayari et al. (2007), the formation of negative trions is likely, assuming that adsorbates do not introduce additional significant changes to the doping level. In general, the trion binding energy in semiconductor nano-structures is typically 10% of the exciton binding energy. For a neutral exciton binding energy on the order of 500 meV, this yields an estimated trion binding energy of several tens of meV.
In monolayer MoS, Mak and coworkers observed tightly bound negative trions with a binding energy of about 20 meV Mak et al. (2013), see Fig. 6a, which is one order of magnitude larger than the binding energy in well-studied quasi-2D systems such as II-VI quantum wells Kheng et al. (1993), where trions were first observed. At low temperature in monolayer MoSe, well-separated neutral and charged excitons are observed with a trion binding energy of approximately 30 meV, as clearly demonstrated in charge tunable structures Ross et al. (2013), see Fig. 6b. In this work, the authors also show the full bipolar transition from the neutral exciton to either positively or negatively charged trions, depending on the sign of the applied gate voltage. The binding energies of these two kinds of trion species were found to be similar, an observation consistent with only minor differences in the effective masses of electrons and holes in most of the studied TMDs Liu et al. (2013); Kormanyos et al. (2015). We also note, that in optical spectra, the energy separation between neutral excitons and trions is a sum of the trion binding energy (strictly defined for the zero-density case) and a second term proportional to the Fermi energy of the free charge carriers (see, e.g., Mak et al. (2013); Chernikov et al. (2015b)). In addition to the trion signatures in PL and at sufficiently large free carrier densities, the signatures of the trions are also found in absorption-type measurements Mak et al. (2013); Jones et al. (2013); Chernikov et al. (2014); Chernikov et al. (2015b); Singh et al. (2016).
Electrical charge tuning of excitons is commonly observed in monolayer TMDs devices, also including WSe Jones et al. (2013) and WS Plechinger et al. (2015); Shang et al. (2015). In WS, these two works also reported biexcitons in addition to neutral and charged excitons.
As a fundamental difference to conventional quantum well structures, in monolayer TMDs the carriers have an additional degree of freedom: the valley index. This leads to several optically bright and also dark configurations (for a classification, see e.g. Yu et al. (2015b); Dery and Song (2015); Ganchev et al. (2015); Courtade et al. (2017) ), which can give rise to potentially complex recombination and polarization dynamics Volmer et al. (2017). Charge tunable monolayers that are encapsulated hexagonal boron nitride, result in narrow optical transitions, with low-temperature linewidths typically below 5 meV, as shown in Fig. 6c. This has revealed the trion fine structure related to the occupation of the same or different valleys by the two electrons. An informative comparison between charge tuning in ML WSe and ML MoSe was recenlty reported in Wang et al. (2017). The concept of the trion as a three particle complex is useful at low carrier densities; at elevated densities intriguing new many-body effects have been predicted by several groups Dery (2016); Efimkin and MacDonald (2017); Sidler et al. (2016).
Iv Valley polarization dynamics
iv.1 Valley-polarized excitons
Optical control of valley polarization is one of the most fascinating properties of TMD monolayers. In the majority of cases, due to the strong Coulomb interaction, the valley dynamics of photogenerated electrons and holes cannot be adequately described within a single-particle picture as excitonic effects also impact the polarization dynamics of the optical transitions. As previously discussed and predicted in Refs. Xiao et al. (2012); Cao et al. (2012), optical valley initialization is based on chiral selection rules for interband transitions: polarized excitation results in the inter-band transitions in the valley, and, correspondingly, polarized excitation results in transitions in the valley. Initial experimental confirmation of this effect was reported in steady-state PL measurements in MoS monolayers Zeng et al. (2012); Mak et al. (2012); Cao et al. (2012); Sallen et al. (2012), as well as in WSe and WS systems Jones et al. (2013); Wang et al. (2014); Mai et al. (2014b); Sie et al. (2015b); Kim et al. (2014); Zhu et al. (2014a). Also, the overall degree of polarization has been shown to reach almost unity; we note, however, that extrinsic parameters such as, e.g., short carrier lifetimes due to non-radiative channels can strongly affect this value and detailed analysis of steady-state experiments is challenging. In ML MoSe, however, non-resonant excitation usually results in at most 5% PL polarization Wang et al. (2015b), the reason for this difference remaining a topic of ongoing discussion. Interestingly, for MoSe, the application of a strong out-of-plane magnetic field combined with resonant or nearly resonant optical excitation appears to be necessary to initialize large valley polarization Kioseoglou et al. (2016). Finally, in addition to optical valley initialization, strong circularly polarized emission is also reported from electro-luminescence (EL) in TMD-based light-emitting devices – an interesting and technologically promising observation Zhang et al. (2014c); Onga et al. (2016); Yang et al. (2016).
(a) Exciton PL emission time of the order of 2 ps measured in time-resolved photo-luminescence for ML WSe
Figure 7: (a) Exciton PL emission time of the order of 2 ps measured in time-resolved photo-luminescence for ML WSe at K Robert et al. (2016). (b) Schematic showing that and neutral excitons are coupled by the electron-hole Coulomb exchange interaction Glazov et al. (2014). (c) Decay of the neutral exciton polarization in WSe monolayers on ps time scales as measured by Kerr rotation Zhu et al. (2014b) (d) Decay of resident electron polarization as measured by Kerr rotation in monolayer WS, with a typical time constant of 5 ns Bushong et al. (2016) for T=6 K. (e) Decay of hole polarization in a charge tunable WSe monolayer with a time constant of 2s Dey et al. (2017) for T=4 K, where is the magnetic field applied in the sample plane.
As previously discussed in Sec. II, following excitation with circularly polarized light across the band gap, an exciton is formed from carriers in a specific valley due to the robust, valley dependent optical selection rules. The degree of circular polarization , as measured in steady state PL, can be approximated as , where is exciton lifetime, is the polarization lifetime and is the initially generated polarization. High in steady state PL experiments generally results from a specific ratio of versus and does not necessary require particularly long polarization lifetimes.
Time-resolved studies provide more direct access to the valley dynamics of excitons. In particular, the determination of the exciton PL emission times on the order of several to tens of picoseconds in typical samples at low temperature, together with measurements of the polarization dynamics indicate that the neutral exciton looses its initial valley polarization very quickly, over a few ps. This observation is difficult to understand at the level of individual electrons and holes: The valley polarization in TMDs monolayers should be very stable from within single-particle picture as it requires inter-valley scattering with change in momentum, typically combined with additional electron and hole spin flipping Xiao et al. (2012). Spin conserving inter-valley scattering is generally energetically unfavorable due to spin splittings of several hundreds and tens of meV in the valence and conduction bands, respectively Kormanyos et al. (2015). In considering the valley dynamics following optical excitation, it is, however, crucial to note that rather than observing individual spin and valley polarized carriers, we create and probe the dynamics of valley-polarized excitons.
The Coulomb interaction between the charge carriers does, in fact, strongly impact the valley dynamics in TMD MLs: The long-range exchange interaction between the electron and hole forming an exciton gives rise to a new and efficient decay mechanism for the exciton polarization Yu et al. (2014); Glazov et al. (2014); Yu and Wu (2014); Hao et al. (2016); Zhu et al. (2014b). Indeed, the -interaction results in the admixture of the valence band states in the conduction electron state and of the conduction band states in the hole state in the exciton. As a result of this admixture and of the Coulomb interaction, an exciton with an electron in the valley can effectively recombine and produce to an exciton with an electron in the valley. This process needs neither the transfer of significant momentum of an individual carrier nor its spin flip. It can be interpreted in a purely electrodynamical way if one considers an optically active exciton as a microscopic dipole oscillating at its resonant frequency. Naturally, this mechanism is efficient only for bright exciton states and the dark states are largely unaffected. For a bright exciton propagating in the ML plane with the center of mass wavevector , the proper eigenstates are the linear combinations of states active in the and circular polarization: One eigenstate has a microscopic dipole moment oscillating along the wavevector , this is the longitudinal exciton, and the other one has the dipole moment oscillating perpendicular to the , being the transverse exciton. The splitting between those states, i.e., the longitudinal-transverse splitting, acts as an effective magnetic field and mixes the and polarized excitons, which are no longer eigenstates of the system, leading to depolarization of excitons Maialle et al. (1993); Glazov et al. (2014); Ivchenko (2005); Glazov et al. (2015). As compared with other 2D excitons, e.g., in GaAs or CdTe quantum wells, in TMD MLs the longitudinal-transverse splitting is enhanced by one to two orders of magnitude due to the tighter binding of the electron to the hole in the exciton and, correspondingly, the much higher oscillator strength of the optical transitions Li et al. (2014a). This mechanism, here discussed in the context of valley polarization, also limits valley coherence times Glazov et al. (2014); Hao et al. (2016), see below.
Experimentally, the valley polarization dynamics can be monitored by polarization-resolved time-resolved photoluminescence (TRPL) and pump-probe measurements. By using time-resolved Kerr rotation, Zhu found that in monolayer WSe the exciton valley depolarization time is around 6 ps at 4K, in good agreement with the Coulomb exchange mediated valley depolarization Zhu et al. (2014b); Yan et al. (2017), see Fig. 7c. In ML MoS and MoSe fast exciton depolarization times (ps) were also reported Lagarde et al. (2014); Mai et al. (2014a); Wang et al. (2013); Jakubczyk et al. (2016). These experiments all demonstrate measurable depolarization of the neutral exciton X, although the exact relaxation time may be different in specific measurements depending on the samples used and experimental techniques employed.
Valley depolarization due to the long-range Coulomb exchange is expected to be less efficient for spatially indirect excitons, where the electron-hole overlap is weaker. This configuration applies to type II ML TMD heterostructures, where holes reside in WSe and electrons in MoSe, for example. Indeed Rivera et al. (2016) have observed valley lifetimes of tens of ns for indirect excitons at low temperature, which motivates further valley dynamics experiments in structures with tunable Coulomb interactions, albeit with more complex polarization selection rules. Another type of excitons that is, in principle, unaffected by valley depolarization through Coulomb exchange are optically dark excitons. With a slight mixing of bright excitons with dark excitons (for optical readout), the dark excitons may provide a promising alternative configuration for exciton valley manipulation Zhang et al. (2017).
iv.2 Valley coherence
As discussed in the previous section, excitation with circularly polarized light can induce valley polarization in a TMD monolayer Xiao et al. (2012). Similarly, excitation with linearly polarized light can generate valley coherence, i.e., a coherent superposition of and valley states, as first reported for the neutral exciton in ML WSe Jones et al. (2013). A fingerprint of generated valley coherence is the emission of linearly polarized light from the neutral exciton, polarized along the same axis as the polarization of the excitation, an effect also termed optical alignment of excitons in the earlier literature Meier and Zakharchenya (1984). In addition, valley coherence in the ML is sufficiently robust to allow rotation of the coherent superposition of valley states in applied magnetic fields Wang et al. (2016a); Schmidt et al. (2016a); Cadiz et al. (2017) or with the help of a pseudo-magnetic field generated by circularly polarized light via the optical Stark effect Ye et al. (2017).
iv.3 Valley polarization dynamics of trions and free charge carriers
For manipulating valley polarization of bright, direct excitons within the radiative cone, the radiative lifetime in the ps range sets an upper for the available time scale. In addition neutral exciton valley polarization of the neutral exciton decays rapidly due to the Coulomb-exchange mediated mechanism discussed above and shown in Fig. 7c. This depolarization mechanism does not apply to single carriers for which spin-valley locking due to the large spin-orbit spin splittings is expected to lead to significantly longer polarization lifetimes. In the presence of resident carriers, optical excitation can lead to the formation of charged excitons also called trions, Sec. III.2. Commonly observed bright trions decay on slightly longer timescales than excitons, namely in about 30 ps at K Wang et al. (2014), which means that the time range for valley index manipulation is still restricted to ultra-fast optics. For future valleytronics experiments and devices, it is therefore interesting to know whether the resident carriers left behind after recombination are spin and valley polarized.
Several recent time-resolved studies point to encouragingly long polarization dynamics of resident carriers in monolayer TMDs at low temperature. Polarization decays of 3-5 ns were observed in CVD-grown MoS and WS monolayers that were unintentionally electron-doped Yang et al. (2015b, a); Bushong et al. (2016), as can be seen in Fig. 7d. Longer times up to tens of ns were observed in unintentionally hole-doped CVD-grown WSe Hsu et al. (2015); Song et al. (2016). Using time-resolved Kerr rotation, the spin/valley dynamics of resident electrons and holes in charge-tunable WSe monolayer were recenty measured by Dey et al. (2017). In the -type regime, long (70 ns) polarization relaxation of electrons were observed and considerably longer (s) polarization relaxation of holes were revealed in the -doped regime (see Fig. 7e), as expected because of the strong spin-valley locking of holes in the valence band of monolayer TMDs. Long hole polarization lifetimes were also suggested by a recent report of microsecond hole polarizations of indirect excitons in WSe/MoS bilayers Kim et al. (2016). In this case rapid electron-hole spatial separation following neutral exciton generation leads to long-lived indirect excitons, in which the spatial overlap of the electron and hole is relatively small. If the two layers are not aligned with respect to the in-plane angle, there is also an additional mismatch of the respective band extrema in momentum space Yu et al. (2015c). The resulting oscillator strength is very small and should directly lead to a rather slow spin-valley depolarization through long-range exchange coupling, previously discussed in Sec. IV.1. One of the most important challenges at this early stage is to identify the conditions and mechanisms that promote transfer of the optical generated valley polarization of trions or neutral excitons to the resident carriers Dyakonov (2008); Glazov (2012).
iv.4 Lifting valley degeneracy in external fields
(a) Schematic of Zeeman shifts in magnetic field B perpendicular to the monolayer plane. (b) Measurements on
Figure 8: (a) Schematic of Zeeman shifts in magnetic field B perpendicular to the monolayer plane. (b) Measurements on MoSe MLs from MacNeill et al. (2015) that show a clear Zeeman splitting. (c) Reflectivity measurements on WS MLs in high magnetic fields and (d) the Zeeman splitting extracted for A- and B-excitons Stier et al. (2016a).
In the absence of any external or effective magnetic or electric field, the exciton transitions involving carriers in the and valley are degenerate and the spin states in the two types of valleys are related by time reversal symmetry. This symmetry can be broken through the application of an external magnetic field perpendicular to the plane of the monolayer. There are two important consequences that are briefly discussed below: first, the valley states split by a Zeeman energy typically on the order of a few meV at 10 Tesla. Second, the valley polarization could change due to this splitting, as the lower energy valley might be populated preferentially.
Application of a magnetic field along the direction (perpendicular to the ML plane) gives rise to a valley Zeeman splitting in monolayer WSe and MoSe Li et al. (2014b); MacNeill et al. (2015); Aivazian et al. (2015); Srivastava et al. (2015); Wang et al. (2015), lifting the valley degeneracy. In these studies, an energy difference on the meV scale is found between the and polarized PL components, stemming from the () valley, respectively, as . In monolayer MoSe, the and PL components are clearly split in magnetic fields of 6.7 T as shown in Fig. 8 MacNeill et al. (2015). The valley Zeeman splitting scales linearly with the magnetic field as depicted in Fig. 8 and the slope gives the effective exciton -factor as , where is the Bohr magneton. The exciton -factor measured for instance in PL contains contribution from electron and hole -factors. In several magneto-optics experiments also on ML MoTe and WS Arora et al. (2016); Stier et al. (2016a); Mitioglu et al. (2015, 2016) the exciton -factor is about . The origin of this large -factor is currently not fully understood. The exact energy separation of the valley and spin states is important for spin and valley manipulation schemes. In addition, the -factor also contains important information on the impact of remote bands on the optical transitions, in a similar way as the effective mass tensor, see discussions in Wang et al. (2015); MacNeill et al. (2015). There are basically two approaches to calculate the Zeeman splittings in TMD MLs. One is based on the atomic approach by considering atoms as essentially isolated and associating the -factors of the conduction and valence band states with the spin and orbital contributions of corresponding and atomic shells Aivazian et al. (2015); Srivastava et al. (2015). The other approach is based on the Bloch theorem and -perturbation theory which allows to relate -factor with the band structure parameters of TMD ML MacNeill et al. (2015). Merging these approaches, which can be naturally done within atomistic tight-binding models Wang et al. (2015); Rybkovskiy et al. (2017), is one of the open challenges for further theoretical studies.
At zero magnetic field, the valley polarization in optical experiments is only induced by the circularly polarized excitation laser. At finite magnetic fields, a valley Zeeman splitting is induced and the observed polarization may now also depend on the magnetic field strength. For ML WSe, sign and amplitude of the valley polarization, even in magnetic fields of several Tesla, is mainly determined by the excitation laser helicity Wang et al. (2015); Mitioglu et al. (2015). In contrast, the sign and amplitude of the valley polarization detected via PL emission in MoSe and MoTe is mainly determined by the sign and amplitude of the applied magnetic field Wang et al. (2015); MacNeill et al. (2015); Arora et al. (2016).
In contrast to a perpendicular magnetic field, in monolayer MoS an in-plane magnetic field () up to 9 T does not measurably affect the exciton valley polarization or splitting Sallen et al. (2012); Zeng et al. (2012), as expected from symmetry arguments.
An elegant, alternative way of lifting valley degeneracy is using the optical Stark effect. Typically a circularly polarized pulsed laser with below bandgap radiation is used to induce a shift in energy of the exciton resonance Joffre et al. (1989); Press et al. (2008). This shift becomes valley selective in ML TMDs, with induced effective Zeeman splitting is up to 20 meV, corresponding to effective magnetic fields of tens of Tesla Sie et al. (2015b); Kim et al. (2014). The effective magnetic field created by the Stark effect can also be employed to rotate a coherent superposition of valley states Ye et al. (2017).
V Summary and Perspectives
In this short review we have detailed some of the remarkable optical properties of transition metal dichalcogenide monolayers. The strong Coulomb interaction leads to exciton binding energies of several hundred meV and excitons therefore dominate the optical properties up to room temperature. The ultimate thinness of these materials provides unique opportunities for engineering the excitonic properties. For example, the dielectric environment can be tuned. Here first experiments show that encapsulation of TMD monolayers in hexagonal boron nitride, for example, significantly reduces the exciton binding energy Stier et al. (2016a). More experiments will show in the future how sensitive the exciton ground, excited states and the free carrier bandgap are to changes in their dielectric environment Ye et al. (2014); Raja et al. (2017), which will depend on the spatial extent of the different states.
Another route to engineering the optical properties, particularly the polarization dynamics, is to place ferromagnetic layers close to the monolayer. These proximity effects might be able to lift valley degeneracy even without applying any external magnetic fields, a great prospect for controlling spin and valley dynamics Zhong et al. (2017); Zhao et al. (2016).
In this article we have concentrated on excitons in single monolayers, but many of these concepts apply also to more complex exciton configurations in van der Waals heterostructures Geim and Grigorieva (2013) where the electrons and holes do not necessarily reside in the same layer. Here many possibilities can be explored, such as studies of Bose-Einstein condensates and superfluidity; the wide choice of layered materials allows tuning the oscillator strength of the optical transitions as well as the spin- and valley polarization lifetimes Kim et al. (2016); Rivera et al. (2016); Fogler et al. (2014); Ceballos et al. (2014); Nagler et al. (2017).
G.W. and B.U. acknowledge funding from ERC Grant No. 306719. A.C. gratefully acknowledges funding by the Deutsche Forschungsgemeinschaft (DFG) through the Emmy Noether Programme (CH 1672/1-1). M.M.G. acknowledges support through RF President grant MD-1555.2017.2 and RFBR projects 17-02-00383 and 17-52-16020. T.F.H. wishes to acknowledge support through the AMOS program within the Chemical Sciences, Geosciences, and Biosciences Division, Office of Basic Energy Sciences of the U.S. Department of Energy under Contract No. DE-AC02-76-SFO0515 and by the Gordon and Betty Moore Foundation’s EPiQS Initiative through Grant No. GBMF4545. X.M. and T.A. thank ANR MoS2ValleyControl. X.M. also acknowledges the Institut Universitaire de France.
We thank group members and colleagues, past and present, for stimulating discussions, in particular T. Berkelbach, D. Reichman, C. Robert, I.C. Gerber, M.V. Durnev, M.A. Semina and E.L. Ivchenko.
*present address G.W.: Cambridge Graphene Centre, University of Cambridge, Cambridge CB3 0FA, UK
For everything else, email us at [email protected]. |
eda9bb8d046b2014 | Ionization by intense and short electric pulses - classical picture
2019. április 26., 10:56
A PTE TTK Fizikai Intézet és a PAB meghívja Tőkési Károly (Magyar Tudományos Akadémia Atommagkutató Intézet) "Ionization by intense and short electric pulses - classical picture" c. előadására.
Helyszín: PTE TTK A/401. előadóterem
Időpoint: 2019. 05. 02. (csütörtök) 13:00
We present theoretical studies of the ionization of simple targets like hydrogen atom, positronium and water molecule as a result of the interaction with an ultrashort external electric field. Doubly-differential momentum distributions and angular momentum distributions of ejected electrons calculated in the framework of the Coulomb-Volkov and strong field approximations, as well as classical calculations are compared with the exact solution of the time dependent Schrödinger equation. We show that in the impulsive limit, the Coulomb-Volkov distorted wave theory reproduces the exact solution. The validity of the strong field approximation is probed both classically and quantum mechanically.
We show that classical mechanics describes the proper quantum momentum distributions of the ejected electrons right after a sudden momentum transfer, however pronounced the differences at latter stages that arise during the subsequent electron-nucleus interaction. Although the classical calculations reproduce the quantum momentum distributions, it fails to describe properly the angular momentum distributions, even in the limit of strong fields. The origin of this failure can be attributed to the difference between quantum and classical initial spatial distributions.
• A PTE TTK Fizikai Intézet hivatalos oldala okostelefonon és tableten! Kattintson ide! |
de8ea0d016bce81c | söndag 29 maj 2016
Restart of Quantum Mechanics: From Observable/Measurable to Computable
Schrödinger and Heisenberg receiving the Nobel Prize in Physics in 1933/32..
If modern physics was to start today instead of as it did 100 years ago with the development of quantum mechanics as atomistic mechanics by Bohr-Heisenberg and Schrödinger, what would be the difference?
Bohr-Heisenberg were obsessed with the question:
• What can be observed?
motivated by Bohr's Law:
• We are allowed to speak only about what can be observed.
Today, with the computer to the service of atom physics, a better question may be:
• What can be computed?
possibly based on an idea that
• It may be meaningful to speak about what can be computed.
Schrödinger as the inventor of the Schrödinger equation as the basic mathematical model of quantum mechanics, never accepted the Bohr-Heisenberg Copenhagen Interpretation of quantum mechanics with the Schrödinger wave function as solution of the Schrödinger equation interpreted as a probability of particle configuration, with collapse of the wave function into actual particle configuration under observation/measurement.
Schrödinger sought an interpretation of the wave function as a physical wave in a classical continuum mechanical meaning, but had to give in to Bohr-Heisenberg, because the multi-dimensionality of the Schrödinger equation did not allow a direct physical interpretation, only a probabilistic particle interpretation. Thus the Schrödinger equation to Schrödinger became a monster out of control, as expressed in the following famous quote:
• If we have to go on with these damned quantum jumps, then I'm sorry that I ever got involved.
And Schrödinger's equation is a monster also from computational point of view, because solution work scales severely exponentially with the number of electrons and thus is beyond reach even for small $N$.
But the Schrödinger equation is an ad hoc model with only weak formal unphysical rationale, including the basic ingredients of (i) linearity and (ii) multi-dimensionality.
Copenhagen quantum mechanics is thus based on a Schrödinger equation, which is an ad hoc model and which cannot be solved with any assessment of accuracy because of its multi-dimensionality and thus cannot really deliver predictions which can be tested vs observations, except in very simple cases.
The Copenhagen dogma is then that predictions of the standard Schrödinger equation always are in perfect agreement with observation, but a dogma which cannot be challenged because predictions cannot be computed ab initio.
In this situation it is natural to ask, in the spirit of Schrödinger, for a new Schrödinger equation which has a direct physical meaning and to which solutions can be computed ab initio, and this is what I have been exploring in many blog posts and in the book (draft) Many-Minds Quantum Mechanics.
The basic idea is to replace the linear multi-d standard Schrödinger equation with a computable non-linear system in 3d as a basis of a new form of physical quantum mechanics. I will return with more evidence of the functionality of this approach, which is very promising...
Note that a wonderful thing with computation is that it can be viewed as a form of non-destructive testing, where the evolution of a physical system can be followed in full minute detail without any form of interference from an observer, thus making Bohr's Law into a meaningless limitation of scientific thinking and work from a pre-computer era preventing progress today.
PS It is maybe wise to be a little skeptical to assessments of agreement between theory and experiments to an extremely high precision. It may be that things are arranged or rigged so as to give exact agreement, by changing computation/theory or experiment.
Inga kommentarer:
Skicka en kommentar |
e1fdaa3df89d8e8b | Next Article in Journal / Special Issue
Quasar Black Hole Mass Estimates from High-Ionization Lines: Breaking a Taboo?
Previous Article in Journal / Special Issue
Radiative and Collisional Molecular Data and Virtual Laboratory Astrophysics
Article Menu
Export Article
Atoms 2017, 5(3), 32; doi:10.3390/atoms5030032
Stark Broadening from Impact Theory to Simulations
Roland Stamm 1,*, Ibtissem Hannachi 1,2, Mutia Meireni 1, Laurence Godbert-Mouret 1, Mohammed Koubiti 1, Yannick Marandet 1, Joël Rosato 1, Milan S. Dimitrijević 3Orcid and Zoran Simić 3
Département de Physique, Aix-Marseille Université, CNRS, PIIM UMR 7345, 13397 Marseille CEDEX 20, France
PRIMALAB, Faculty of Sciences, University of Batna 1, Batna 05000, Algeria
Astronomical Observatory, Volgina 7, 11060 Belgrade, Serbia
Correspondence: Tel.: +33-491-288-621
Academic Editor: Ulrich Jentschura
Received: 31 August 2017 / Accepted: 11 September 2017 / Published: 20 September 2017
Impact approximation is widely used for calculating Stark broadening in a plasma. We review its main features and different types of models that make use of it. We discuss recent developments, in particular a quantum approach used for both the emitter and the perturbers. Numerical simulations are a useful tool for gaining insight into the mechanisms at play in impact-broadening conditions. Our simple model allows the integration of the Schrödinger equation for an emitter submitted to a fluctuating electric field. We show how we can approach the impact results, and how we can investigate conditions beyond the impact approximation. The simple concepts developed in impact and simulation approaches enable the analysis of complex problems such as the effect of plasma rogue waves on hydrogen spectra.
stark broadening; impact approximation; numerical simulation
1. Introduction
Stark profiles are used in astrophysics and other kinds of plasmas for obtaining information on the charged environment of the emitting particles. Using light for conveying information on the plasma often requires a modeling of both the plasma and the radiator. We will review different situations requiring different modeling approaches. The impact-broadening approach considers the emitter-plasma interaction as a sequence of brief separate collisions decorrelating the radiative dipole. Impact models are very effective for many types of plasmas, and can be applied to different kinds of emitters, hydrogen being an exception for most plasma conditions. Many different models using impact approximation have been developed, and we will review the most commonly-used. One can distinguish firstly between models keeping the quantum character of the perturbers, and those using a classical trajectory for the charged particles. Full quantum approaches require specific calculation techniques, which, once established, have proved to be of general interest. Another way to look at the models is the degree of accuracy required. It is often not necessary to have an accuracy better than about 20%, since the experimental errors are often of the same order or worse. This has enabled the development of a semi-empirical impact model, useful especially in cases where one does not have a sufficient set of atomic data for adequate application of more sophisticated methods with which one can readily obtain a large number of line shapes [1], making it an effective diagnostic tool.
A typical starting point for a line shape formalism in a plasma is a full quantum formalism for the emitter and the perturbers. It can be written as a linear response for the emitter dipole operator, and provides the response of an emitter at a time t, knowing its state at an initial time [2]. This response in time allows the physical measurement of the spectrum to which it is linked by a Fourier transform. Quantum formalism introduces specific computational difficulties, but also brings powerful tools such as the angular averages. We will briefly discuss such approaches, and how they compare to semi-classical calculations. Classical path impact approximations have been widely developed, and exist in several levels of accuracy, depending on whether one is interested in a rapid analysis of a large number of spectra, or one asks for an accurate analysis of a few lines. We will identify situations for which other models are helpful, e.g., for the case were the emitter-perturber interactions cannot be represented by a sequence of collisions. Such models use the statistical properties of the electric field created by the perturbing particles. In astrophysics, model microfield methods provide an efficient alternative for cases where neither the impact nor the static approximation are valid. For such situations, several models have been developed and interfaced with atomic data. Their accuracy can be tested by simulation techniques avoiding some approximations, but at the expenses of computer time. Such computer simulations can be used to analyze the various physical processes involved in plasmas under arbitrary conditions. We will illustrate their use in the case of plasma rogue waves.
2. Impact Broadening
A detailed and accurate modeling of Stark broadening started almost sixty years ago with the development of a general impact theory having the ability of retaining the quantum character of the emitters and perturbers, and allowing both elastic and inelastic collisions between such particles [2]. The line shape is obtained by a Fourier transform of the dipole autocorrelation function (DAF), a quantity expressed as a trace over all possible states of the quantum emitter plus perturbers system:
C ( t ) = T r [ d . T + ( t ) d T ( t ) ρ ] ,
where d is the dipole moment of the emitter, T ( t ) = e x p ( i H t / ) the evolution operator and ρ the density matrix, these last two quantities being dependent on the Hamiltonian H for the whole system. Such an expression could be calculated by density functional theory or quantum Monte Carlo methods, taking advantage of the development of computational techniques and computer hardware [3]. Such studies have proved to be efficient for describing the properties of dense plasmas found in the interior of gaseous planets, the atmospheres of white dwarfs or the laboratory plasmas created by energetic lasers. They might be useful for understanding some features in the spectrum of such plasmas, but have not been developed yet in the context of line broadening. Probably the main reason for this is that there is no clear evidence that the dynamical effects of multiple quantum perturbers can affect a line shape. Another reason is that for most of the plasma conditions and line shapes studied, we can use the impact approximation, which assumes that the various perturbers interact separately with the emitter (binary collision assumption), and that the average collision is weak. A validity condition for the impact approximation is that the collision time is small compared with the decorrelation time of C(t). If this condition and the binary collision assumptions are verified, it is possible to use a constant collision operator to account for all the effects of the perturbers on the emitter. Different approaches using impact approximation have been proposed, but we can distinguish firstly between quantum impact models that retain the quantum behavior of both emitters and perturbers, and the semiclassical impact models treating only the emitter as a quantum particle. A pictorial representation of the full quantum emitter-perturber interaction is provided by the use of wave packets for the perturbers. Each wave packet is scattered in a region within the reach of the interaction potential with the emitter. Quantum collision formalism can be applied, enabling the calculation of cross sections with the aid of scattering amplitudes. Although quantum impact calculations have been performed since the seventies [4,5], such calculations are not very numerous for line broadening due to their computational difficulty. In particular, they involve a calculation of the scattering matrix or S matrix [2]. Many calculations have been applied to isolated lines of various ions, a case for which the width w takes the compact form of an average over the perturbers after the use of the optical theorem [2]:
w = 1 2 N { v [ σ i + σ f + d Ω | f i ( θ , φ ) f f ( θ , φ ) | 2 ] } A v ,
where N is the perturbers’ density, v their velocity, σ i and σ f are inelastic cross sections, f i and f f the forward-scattering amplitudes in a direction given by θ , φ for the initial i and final f states, and { } A v stands for a Boltzmann thermal average.
With the advent of accurate atomic structure and S matrix codes, such impact quantum calculations have been given a new life [6,7], and are most often in good agreement with other calculations. A very efficient calculation has been proposed starting from Equation (2), using a Bethe-Born approximation [8] for evaluating inelastic cross sections. This semi-empirical model uses an effective Gaunt factor, a quantity which measures the probability of an incident electron changing its kinetic energy [9]. This function has been modified and improved to develop the modified semi-empirical model which is frequently used for calculations of isolated ion lines [1].
For most plasma conditions and line shapes studied, the wave packets associated with the perturbers are small and do not spread much in time. This enables the use of classical perturbers following classical paths. Different approaches use this approximation together with the impact approximation for the electron perturbers. Early calculations of hydrogen lines with comparisons to experimental profiles proved that a profile using an impact electron broadening [10], together with a static approximation for the ion perturbers, is in overall agreement for the Balmer Hβ line in an arc plasma with a density N = 2.2 × 1022 m−3 and a temperature T = 10,400 K. The remaining discrepancies concerned the central part of the line and the far line wings, two regions that required an improvement of the model.
For isolated lines of neutral atoms and ions, the semiclassical perturbation (SCP) method [11] has been successfully applied to numerous lines, and is implemented in the STARK-B database [12]. The SCP method was inspired by developments in the quantum theory of collisions between atoms and electrons or ions, and, e.g., performs the angular averages with Clebsch-Gordan coefficients. It has the ability to generate several hundred lines rapidly for a set of densities and temperatures in a single run. The accuracy of the SCP method is assessed by a comparison to experimental spectra, and is about 20 to 30% for the widths of simple spectra, but could be worse for some complex spectra. The method is continuously improving, and has been interfaced with atomic structure codes [11].
An interesting point is raised by the comparison of impact quantum and semiclassical calculations, and a comparison of those with experiments. Quantum calculations have often been found to predict narrower lines than those of semiclassical models [13]. Semiclassical calculations may be brought closer to quantum widths, e.g., by a refined calculation of the minimum impact parameter allowing the use of a classical path [14]. Surprisingly, quantum widths of Li-like and boron-like ions often show a worse agreement with experiments than semiclassical calculation, thus calling for further calculations and analysis [7,15]. As an example of such, more recent quantum calculations [15] are in fairly good agreement with experiments [15].
3. Simulations of Impact Theory and Ion Dynamics
The need for a model that does not assume the impact approximation arose out of the study of hydrogen lines, with the surge of accurate profile measurements in near equilibrium plasmas [16]. It appeared that a standard model using a static ion approximation, and an impact electron collision operator, showed pronounced differences with the measures near the line centers, and also in the far wings. The line wings were well reproduced by the so-called unified theory, which retains the static interaction between an electron and the atom as a strong collision occurs [17,18]. The difference in the central part of the line was linked to the use of the static ion approximation, since it depended on the reduced mass of the emitter-ion perturber system. The observation of the Lyman-α (Lα) line [19] showed later that the experimental profile was a factor 2.5 broader than the theoretical line using static ions in arc plasma conditions. This was a strong motivation for developing a technique able to retain ion dynamics in a context where the electric field is created by numerous ions in motion. Since perturbative approaches were unable to account for multiple strong collisions, a computer simulation has been proposed for describing the motion of the ions. The effect on the emitter of the time dependent ion electric field is obtained by a numerical integration of the Schrödinger equation. Early calculations showed the effect of ion dynamics in the central part of the line, and were able to strongly reduce the difference between experimental and simulation profiles [20,21,22].
Simple hydrogen plasma simulations may be used to illustrate the behavior of an electric field component during a time interval of the order of the line shape time of interest. This time is usually taken as the DAF decorrelation time, and can also be defined as the inverse of the line width. The electric field experienced by an atom surrounded by moving charged particles can be calculated at the center of a cubic box, using particles with straight line trajectories. The edge of the cube should be assumed to be equal to a few times the Debye length λ D = ε 0 k B T / ( N e 2 ) , with T and N the hydrogen plasma temperature and density, respectively, e the electron charge, kB the Boltzmann constant, and ε 0 the permittivity of free space. If we simulate only the ion perturbers, we assume that each particle creates a Debye shielded electric field, in an attempt to retain ion-electron correlations. Random number generators are used to obtain the uniform positions and Maxwell-Boltzmann distributed velocities of the charged particles. If an ion leaves the cubic box, it is replaced by a new one created near the cube boundaries. For the weak coupling conditions assumed, a large number of particles (several thousand commonly) is retained in a cube with a size larger than the Debye length. Such a model provides a good approximation for the time-dependent electric field in a weakly coupled ion plasma at equilibrium, although it suffers from inaccuracies, especially if the size of the box is not large enough [23]. We show in Figure 1 the time dependence of one component of the ionic electric field calculated at the center of the box for an electron density Ne = 1019 m−3, and a temperature T = 40,000 K. The electric field is expressed in units of E 0 = 1 / ( 4 π ε 0 r 0 2 ) , where r 0 is the average distance between particles defined by r 0 3 = 3 / ( 4 π N e ) . The time interval of 5 ns used in Figure 1a is the Lα time of interest for such plasma conditions. The validity condition of the binary collision approximation requests that the Weisskopf radius ρ w = n 2 / m e v i , with n the principal quantum number of the Lα upper states (n = 2), and v i = 2 k B T / m p the thermal ion velocity ( m e and m p are resp. the electron and proton mass), is much smaller than the average distance between particles. This ratio is for Lα and protons of the order of 0.04, enabling the use of an impact approximation. The electric field in Figure 1a clearly exhibits several large fields that are well separated in time during the 5 ns of the Lα time of interest. During this time interval, only a few fields (3 in Figure 1a) have a magnitude larger than 50 E0, but about 20 have a magnitude of 10 E0 or more. A piece of the same field history is shown in Figure 1b during a time interval equal to the time of interest for the Balmer-β (Hβ) line. For this time interval of 0.3 ns, the electric field shows much fewer fluctuations, the atom is no longer submitted to a sequence of sharp collisions, and we can no longer use the impact approximation. This is confirmed by a value of 0.16 for the ρ w / r 0 ratio, making the use of an impact approximation for this line problematic. Looking now at Figure 1a,b, we can see a background of electric field fluctuations with a small magnitude of about E 0 , and a typical time scale longer than the collision time r 0 / v i . Such fluctuations correspond to the sum of electric fields of distant particles with a magnitude on the order of E 0 . For hydrogen lines affected by the linear Stark effect, it is well known that this effect of weak collisions is dominant in near impact regimes [10], and results from the long range of the Coulomb electric field.
Using several thousand samples of such electric fields, it is possible to calculate the DAF for each line studied. This requires for each field history E ( t ) an integration of the Schrödinger equation of the emitter submitted to a dipolar interaction potential d . E ( t ) . We obtain the emitter’s evolution operator by finite difference computational methods, using time steps adjusted to ensure the best compromise between computer time cost and accuracy [24]. The integration time interval is provided by the time of interest for the line calculated, and a first estimate for the time step is a hundredth of the collision time. In the following, we retain only the broadening of the upper states of the line, resulting in some loss of accuracy for the first Balmer lines, but in a much faster calculation. We show in Figure 2a the DAF of Lα for the same plasma conditions as in Figure 1. The ab-initio DAF (solid line) is obtained by a simulation of the ions retaining also the effect of electrons with an impact approximation. We observe that this simulation is close to an impact calculation for both ions and electrons (dashed line). For the same condition and the Hβ line, the decay of the ab-initio DAF is significantly smaller than for the impact calculation, indicating again a deviation from ion impact broadening for this line.
Another way of taking account of ion dynamics is with the help of stochastic processes. A stepwise constant stochastic process is used to model the electric field felt by the atom [25]. The process requires the knowledge of the microfield probability distribution function, and of a waiting time distribution function controlling the jumps from one field to the next one. Such model microfield methods are efficient for retaining ion dynamics effects, and are used for a diagnostic of hydrogen lines [26]. Stochastic processes are also used in the line shape code using the frequency fluctuation model for an inclusion of ion dynamics [27]. During the last decades, several simulations and models have been developed with the ability of retaining ion dynamics. The field is still active, with ion dynamics being one of the issues discussed in the Spectral Line Shapes in Plasmas workshop, providing many new analyses [28].
4. Effect of Plasma Waves
Plasmas sustain various types of waves, which behave differently in a linear and nonlinear regime. A way to distinguish between the two regimes is to calculate the ratio W of the wave energy density to the plasma energy density, given by:
W = ε 0 E L 2 / 4 N e k B T ,
where EL is the electric field magnitude of the wave. For values of W much smaller than 1, we expect a linear behavior of the waves. In a linear regime, electronic Langmuir waves oscillate at a frequency close to the plasma frequency ω p = N e e 2 / m e ε 0 , and can be excited even by thermal fluctuations. We assume that the numerous emitters on the line of sight are submitted to different Langmuir waves, each with the same frequency ωp, but a different direction and phase chosen at random, and a magnitude sampled using a half-normal probability density function (PDF). In the following, we have used this half-normal PDF for the reduced electric field magnitude F = E/E0:
P ( F ) = 2 σ π e x p ( F 2 2 σ 2 )
In this expression, we use the standard deviation σ of a normal distribution, and thus obtain the mean value EL of E by writing E L = σ E 0 2 / π . Each Langmuir wave has a different electric field history, and we obtain the DAF by an average over about a thousand such field histories. For a plasma with a density Ne = 1019 m−3, and a temperature T = 105 K, we first calculated the Lα DAF for Langmuir waves with a mean electric field magnitude corresponding to W = 0.01 (EL = 15E0). The response of the DAF is a periodic oscillation with a period equal to 2π/ωp, but with an amplitude much smaller than 1 for this average field magnitude of 15E0. After a product with an impact DAF for retaining the effect of the background electron and ion plasma, there remains no visible effect of the waves on the convolution DAF for the value W = 0.01. This ratio can take much larger values, however; especially if an external energy source such as a beam of charged particles is present. As W increases, nonlinear phenomena start showing up, enabling, for instance, wave-wave couplings. Although only recently investigated in plasmas, the occurrence of rogue waves has been raised in various plasma conditions [29,30,31]. Rogue waves have been studied in many dynamical systems, and are known to the general public by the observation and study of rogue or freak waves that suddenly appear in the ocean as large isolated waves. In oceanography, rogue waves are defined as waves whose height is more than twice the mean of the largest third of the waves in a wave record. Rogue waves appear to be a unifying concept for studying localized excitations that exceed the strength of their background structures. They are studied in nonlinear optics [32], Bose-Einstein condensates [33], and many other fields outside of physics. For our line shape problem in plasmas, we postulate that nonlinear processes create rogue waves from a random background of smaller Langmuir waves. The physical mechanism at play is the coupling of the Langmuir wave with ion sound and electromagnetic waves; density fluctuations of the sound waves affect the high frequency waves through ωp. The first Zakharov equation [34] shows how density fluctuations affect Langmuir waves, and a second equation how a Langmuir wave packet can produce a density depression via the ponderomotive force [35]. We will not discuss these equations here, which are particularly useful for a study of wave collapse. Most present rogue wave studies rely on the nonlinear Schrödinger equation (NLSE), which is obtained in the adiabatic limit (slowly changing density perturbations) of the Zakharov equations [35]. A one-dimensional solution of the NLSE is commonly used to approximate the response of nonlinear media. Stable envelope solitons are possible solutions of the 1D NLSE. We will assume that there is a contribution of a stable envelope soliton for each history of the microfield, similarly to what we did for the background Langmuir wave. Using a ratio W = 0.1, the average peak magnitude of such solitons will be 3 times the amplitude of background Langmuir waves, fitting them in the category of rogue waves. A possible shape for the envelope is a Lorentzian, with a time dependence that bears some similarity with the celebrated Peregrine soliton [36]. We observe in Figure 3 that the DAF of Lα obtained with a product of the impact DAF and the Langmuir rogue wave DAF for W = 0.1 is affected by oscillations at the plasma frequency.
Looking at the line shape obtained with a Fourier transform, we can see in Figure 4 that the peak of the line including the wave effect is about 10% lower than the impact line peak, but this with almost no effect on the line width. Not shown in Figure 4, we noticed that the wing of the line affected by rogue waves had a slightly slower decay than the impact profile, indicating a transfer of intensity from the center toward large line shifts. It is remarkable that a such rogue wave had a rather small effect on the profile. This is probably due to the fact that we are in impact conditions for this line. In impact regimes, decorrelation is very effective, leaving only a small broadening contribution to the type of rogue waves that we considered. A larger broadening effect would be observed by considering wave collapse, a phenomenon occurring as W takes larger values of the order of 1 or more for such plasma conditions. The emitters then experience the effect of a sequence of solitons which can significantly increase the broadening [37].
5. Conclusions
Impact approximation mainly consists of saying that, on an average, it takes many collisions to change the quantum state of an atom. When this approximation is valid, the effect of the numerous fluctuating interactions of the emitter with the perturbers can be expressed with a constant collision operator. We have briefly described several models using impact approximation. A wide variety of impact models have been proposed, ranging from full quantum calculations to semiclassical approaches. Impact calculations allow expression of the width and shift of a line in terms of quantum scattering cross-sections. Such calculations have enabled many improvements in the application of quantum theory for obtaining observable quantities such as a line shape. The comparison between experimental and theoretical spectra is of great benefit for the validation of such models. It is thus crucial to be able to rapidly obtain numerous spectra for the lines of many atoms and ions. This is possible using models such as the semiclassical perturbation or the semi-empirical formalism. We have also shown how a computer simulation can reproduce the results of the impact approximation for hydrogen lines. Such simulations involve several thousand particles, however, and are certainly not the most efficient technique for obtaining the impact profile. The main advantage of simulations is that they can go beyond the impact approximation, for situations with many perturbers acting simultaneously on the emitter. We have briefly recalled the problem of ion dynamics, and have proposed a simple simulation for a calculation of the effect of Langmuir rogue waves of Lα in an impact regime.
This work is supported by the funding agency Campus France (Pavle Savic PHC project 36237PE). This work has also been carried out within the framework of the EUROfusion Consortium and has received funding from the Euratom research and training program 2014–2018 under Grant agreement no 633053. The views and opinions expressed herein do not necessarily reflect those of the European Commission.
Author Contributions
Conflicts of Interest
The authors declare no conflict of interest.
1. Dimitrijević, M.S.; Konjević, N. Stark widths of doubly- and triply-ionized atom lines. J. Quant. Spectrosc. Radiat. Transf. 1980, 24, 451–459. [Google Scholar] [CrossRef]
2. Baranger, M. General Impact Theory of Pressure Broadening. Phys. Rev. 1958, 112, 855–865. [Google Scholar] [CrossRef]
3. McMahon, J.M.; Morales, M.A.; Pierleoni, C.; Ceperley, D.M. The properties of hydrogen and helium under extreme conditions. Rev. Mod. Phys. 2012, 84, 1607–1653. [Google Scholar] [CrossRef]
4. Barnes, K.S.; Peach, G. The shape and shift of the resonance line of Ca+ perturbed by electron collisions. J. Phys. B 1970, 3, 350–362. [Google Scholar] [CrossRef]
5. Bely, O.; Griem, H.R. Quantum-mechanical calculation for the electron-impact broadening of the resonance line of singly ionized magnesium. Phys. Rev. A 1970, 1, 97–105. [Google Scholar] [CrossRef]
6. Elabidi, H.; Ben Nessib, N.; Sahal-Bréchot, S. Quantum-mechanical calculations of the electron-impact broadening of spectral lines for intermediate coupling. J. Phys. B 2004, 37, 63–71. [Google Scholar] [CrossRef]
7. Elabidi, H.; Sahal-Bréchot, S.; Dimitrijević, M.S. Quantum Stark broadening of Ar XV lines. Strong collision and quadrupolar potential contributions. Adv. Space Res. 2014, 54, 1184–1189. [Google Scholar] [CrossRef]
8. Griem, H.R. Semi-empirical formulas for the electron-impact widths and shifts of isolated ion lines in plasmas. Phys. Rev. 1968, 165, 258–266. [Google Scholar] [CrossRef]
9. Van Regemorter, H. Rate of collisionnal excitation in stellar atmospheres. Astrophys. J. 1962, 136, 906–915. [Google Scholar] [CrossRef]
10. Griem, H.R.; Kolb, A.C.; Shen, K.Y. Stark broadening of hydrogen lines in a plasma. Phys. Rev. 1959, 116, 4–16. [Google Scholar] [CrossRef]
11. Sahal-Bréchot, S.; Dimitrijević, M.S.; Ben Nessib, N. Widths and shifts of isolated lines of neutral and ionized atoms perturbed by collisions with electrons and ions: An outline of the semiclassical perturbation (SCP) method and of the approximations used for the calculations. Atoms 2014, 2, 225–252. [Google Scholar] [CrossRef]
12. Sahal-Bréchot, S.; Dimitrijević, M.S.; Moreau, N. STARK-B Database; LERMA, Observatory of Paris, France and Astronomical Observatory: Belgrade, Serbia, 2014; Available online: (accessed on 12 August 2017).
13. Griem, H.R. Principles of Plasma Spectroscopy; Cambridge University Press: Cambridge, UK, 1997. [Google Scholar]
14. Alexiou, S.; Lee, R.W. Semiclassical calculations of line broadening in plasmas: Comparison with quantal results. J. Quant. Spectrosc. Radiat. Transf. 2006, 99, 10–20. [Google Scholar] [CrossRef]
15. Elabidi, H.; Sahal-Bréchot, S.; Ben Nessib, N. Quantum Stark broadening of the 3s-3p spectral lines in Li-like ions; Z-scaling and comparison with semi-classical perturbation theory. Eur. Phys. J. D 2009, 54, 51–64. [Google Scholar] [CrossRef]
16. Wiese, W.L.; Kelleher, D.E.; Paquette, D.R. Detailed study of the Stark broadening of Balmer lines in a high density plasma. Phys. Rev. A 1972, 6, 1132–1153. [Google Scholar] [CrossRef]
17. Voslamber, D. Unified model for Stark broadening. Z. Naturforsch. 1969, 24, 1458–1472. [Google Scholar]
18. Smith, E.W.; Cooper, J.; Vidal, C.R. Unified classical-path treatment of Stark broadening in plasmas. Phys. Rev. 1969, 185, 140–151. [Google Scholar] [CrossRef]
19. Grützmacher, K.; Wende, B. Discrepancies between the Stark–broadening theories of hydrogen and measurements of Lyman-α Stark profiles in a dense equilibrium plasma. Phys. Rev. A 1977, 16, 243–246. [Google Scholar] [CrossRef]
20. Stamm, R.; Voslamber, D. On the role of ion dynamics in the Stark broadening of hydrogen lines. J. Quant. Spectrosc. Radiat. Transf. 1979, 22, 599–609. [Google Scholar] [CrossRef]
21. Stamm, R.; Smith, E.W.; Talin, B. Study of hydrogen Stark profiles by means of computer simulation. Phys. Rev. A 1984, 30, 2039–2046. [Google Scholar] [CrossRef]
22. Stamm, R.; Talin, B.; Pollock, E.L.; Iglesias, C.A. Ion-dynamics effects on the line shapes of hydrogenic emitters in plasmas. Phys. Rev. A 1986, 34, 4144–4152. [Google Scholar] [CrossRef]
23. Rosato, J.; Capes, H.; Stamm, R. Ideal Coulomb plasma approximation in line shapes models: Problematic issues. Atoms 2014, 2, 253–258. [Google Scholar] [CrossRef]
24. Vesely, F. Computational Physics, an Introduction; Plenum Press: New York, NY, USA, 1994. [Google Scholar]
25. Brissaud, A.; Frisch, U. Theory of Stark broadening-II exact line profile with model microfield. J. Quant. Spectrosc. Radiat. Transf. 1971, 11, 1767–1783. [Google Scholar] [CrossRef]
26. Stehlé, C. Stark broadening of hydrogen Lyman and Balmer in the conditions of stellar envelopes. Astron. Astrophys. Suppl. Ser. 1994, 104, 509–527. [Google Scholar]
27. Talin, B.; Calisti, A.; Godbert, L.; Stamm, R.; Lee, R.W.; Klein, L. Frequency-fluctuation model for line-shape calculations in plasma spectroscopy. Phys. Rev. A 1995, 51, 1918–1928. [Google Scholar] [CrossRef] [PubMed]
28. Spectral Line Shapes in Plasmas (SLSP) Code Comparison Workshop. Available online: (accessed on 20 July 2017).
29. Moslem, W.M.; Shukla, P.K.; Eliasson, B. Surface plasma rogue waves. EPL 2011, 96, 25002. [Google Scholar] [CrossRef]
30. Ahmed, S.M.; Metwally, M.S.; El-Hafeez, S.A.; Moslem, W.M. On the generation of rogue waves in dusty plasmas due to modulation instability of nonlinear Schrödinger equation. Appl. Math Inf. Sci. 2016, 10, 317–323. [Google Scholar] [CrossRef]
31. Mc Kerr, M.; Kourakis, I.; Haas, F. Freak waves and electrostatic wavepacket modulation in a quantum electron–positron–ion plasma. Plasma Phys. Control. Fusion 2014, 56, 035007. [Google Scholar] [CrossRef]
32. Erkintalo, M.; Genty, G.; Dudley, J.M. Rogue-wave-like characteristics in femtosecond supercontinuum generation. Opt. Lett. 2009, 34, 2468–2470. [Google Scholar] [CrossRef] [PubMed]
33. Bludov, Y.V.; Konotop, V.V.; Akhmediev, N. Matter rogue waves. Phys. Rev. A 2009, 80, 033610. [Google Scholar] [CrossRef]
34. Zakharov, V.E. Collapse of Langmuir waves. Sov. Phys. JETP 1972, 35, 908–914. [Google Scholar]
35. Robinson, P.A. Nonlinear wave collapse and strong turbulence. Rev. Mod. Phys. 1997, 69, 507–573. [Google Scholar] [CrossRef]
36. Bailung, H.; Sharma, S.K.; Nakamura, Y. Observation of Peregrine solitons in a multicomponent plasma with negative ions. Phys. Rev. Lett. 2011, 107, 255005. [Google Scholar] [CrossRef] [PubMed]
37. Hannachi, I.; Stamm, R.; Rosato, J.; Marandet, Y. Effect of nonlinear wave collapse on line shapes in a plasma. EPL 2016, 114, 23002. [Google Scholar] [CrossRef]
Figure 1. Electric field component in units of E 0 in a plasma with a density Ne = 1019 m−3, and a temperature T = 40,000 K, during (a) a time interval of the order of the time of interest for the Lα line, and (b) a time interval of the order of the time of interest for the Hβ line.
Atoms 05 00032 g001
Figure 2. Dipole autocorrelation functions with an ab-initio simulation (solid line) and in the impact limit (dotted line) in a plasma with a density Ne = 1019 m−3, and a temperature T = 40,000 K, for (a) the Lα Lyman transition, and (b) the Hβ Balmer transition.
Atoms 05 00032 g002
Figure 3. Lα dipole autocorrelation function in a plasma with a density Ne = 1019 m−3, and a temperature T = 105 K, calculated with a product of the impact DAF and the Langmuir rogue wave DAF for W = 0.1.
Atoms 05 00032 g003
Figure 4. Lα in a plasma with a density Ne = 1019 m−3, and a temperature T = 105 K, calculated with an impact approximation (dashed line), and with a Fourier transform of the DAF in Figure 3 (solid line).
Atoms 05 00032 g004
Back to Top |
7106814e6d8c87a0 | söndag 31 juli 2011
The Sky Dragon Strikes Back
Andrew Skolnick has mounted a ferocious attack on the Slayers of the Sky Dragon on Judy Curry's blog as a large set of comments (out of 2000) on the blog post Slaying a Greenhouse Dragon.
The attack is supported by a Youtube clip entitled Needling the Deniers aimed at disproving my new derivation of Planck's law of blackbody radiation showing that the basic postulate of CO2 alarmism of backradiation is fiction.
The clip shows that a needle can be heated in a microwave oven, which is known to everybody with some experience of a such a device. Skolnick thus demonstrates that low frequency waves (microwaves) can heat an absorber to higher temperature than the blackbody temperature corresponding to the frequency.
Does this mean that a blackbody can heat another blackbody of higher temperature, that a cold atmosphere can radiatively heat a warmer Earth surface? Of course not!
But what about the microwave oven then? Isn't this a counter-example? No, it is not because the amplitude of the microwave radiation is much larger than that of blackbody radiation of the corresponding temperature. The heating in a microwave oven is thus not blackbody heating; it is amplifed blackbody heating, and therefore the microwave heating of a needle is not a counter-example to my proof that blackbody backradiation from cold to warm is fiction.
But it is good that Skolnick brings this issue to the table, which allows one more head of the Sky Dragon to be eliminated. Thank you Andrew!
fredag 29 juli 2011
Mathematical Secret of Flight 6: Wikipedia Cover Up
To see that our new theory of flight fills a need, it is instructive to study how Wikipedia covers up the lack of a convincing theory in the literature:
• For example, there are explanations based directly on Newton’s laws of motion and explanations based on Bernoulli’s principle.
• Both principles can be used to explain lift, but each appeals to a different audience.
• This article will start with the simplest explanation; more complicated and alternative explanations will follow.
• In attempting to explain why the air flows the way it does (e.g. why the flow follows the upper surface of the airfoil and why the streamtubes change size), the situation gets considerably more complex.
• It is here that many simplifications are made in presenting lift to various audiences.
We see that one part of Wikipedia struggles to hide that there is no theory of flight, while another part tells the truth by citing John D. Anderson, Curator of Aerodynamics at the National Air and Space Museum:
• It is amazing that today, almost 100 years after the first flight of the Wright Flyer, groups of engineers, scientists, pilots, and others can gather together and have a spirited debate on how an airplane wing generates lift. Various explanations are put forth, and the debate centers on which explanation is the most fundamental.
As a last line of defense Wikipedia presents the classical theory by Kutta-Zhukovsky (which we have shown to be incorrect).
• The effects of viscosity are contained within a thin layer of fluid called the boundary layer, close to the body. As flow over the airfoil commences, the flow along the lower surface turns at the sharp trailing edge and flows along the upper surface towards the upper stagnation point. The flow in the vicinity of the sharp trailing edge is very fast and the resulting viscous forces cause the boundary layer to accumulate into a vortex on the upper side of the airfoil between the trailing edge and the upper stagnation point.[26] This is called the starting vortex. The starting vortex and the bound vortex around the surface of the wing are two halves of a closed loop. As the starting vortex increases in strength the bound vortex also strengthens, causing the flow over the upper surface of the airfoil to accelerate and drive the upper stagnation point towards the sharp trailing edge. As this happens, the starting vortex is shed into the wake, and is a necessary condition to produce lift on an airfoil. If the flow were stopped, there would be a corresponding "stopping vortex". Despite being an idealization of the real world, the “vortex system” set up around a wing is both real and observable; the trailing vortex sheet most noticeably rolls up into wing-tip vortices.
In both politics and science, cover up is a most essential part of the game, because admitting that there are no answers to questions which should have answers, destroys credibility and authority, the core values of both politics and science. But pretending to have answers when no answers are available has a high cost, as demonstrated in Dr Faustus of Modern Physics.
The above connects to my old controversy with Wikipedia about d'Alembert's paradox discussed in posts on d'Alembertgate and the 2009 knol Wikipedia Inquisition, leading to a banning of my voice on Wikipedia. This makes it impossible to give any form of link to the new theory of flight on Wikipedia, as if understanding what keeps an airplane in the air would be dangerous knowledge which must be kept hidden to the people.
onsdag 27 juli 2011
Mathematical Secret of Flight 5: Bird Wing
The thesis by Heather Falconsong Howard studies techniques for generating photo-realistic and fantasy digital bird and avian creatures in film, TV and games, based on an analysis of the design of real birds wings.
Particular attention is given to little covert feathers covering the space between groups of main fetahers, which also seem to act like little wing flaps delaying separation.
This is indicated in the above picture from the thesis which represents the classical Prandtl scenario of separation based on 2d recirculation to stagnation.
Our new analysis of separation and generation of lift opens to a different understanding of the action of birds wings. In particular we expect to find a connection between the separation pattern of our new analysis with point-stagnation and streamwise vorticity, and the arrangement of feathers of a bird wing including covert feathers and a periodic wavy trailing edge. We will report on our findings in upcoming posts...
The design of birds' wings thus suggest that the smooth surface and sharp straight trailing edge of a standard airplane wing may not be optimal. A further indication in this direction is given by the slotted wing tips of gliding hawks and the slotted jet flaps of Skywalk paragliders, to which we will also return...
måndag 25 juli 2011
Backradiation in Stefan-Boltzmann's Law: Folklore or Science?
Stefan-Boltzmann's Law can be formulated in the following two algebraically equivalent, but physically different forms:
1. E = sigma Te^4 - sigma Ta^4, (photon particle model: difference of two-way gross flows)
2. E = sigma (Te^4 - Ta^4) ~ 4 sigma Te^3 (Te - Ta), (wave model: net one-way flow)
where E is the intensity of the heat energy transferred from a blackbody (emitter) of temperature Te to a blackbody (absorber) of temperature Ta smaller than Te, and sigma is a constant.
Version 1 is the basis of CO2 alarmism based on "backradiation" of sigma Ta^4 from absorber to emitter, as transfer of heat energy from cold to warm.
In Slaying the Sky Dragon and Mathematical Physics of Blackbody Radiation I present a derivation of Version 2 based on a principle of finite precision computation in a wave model, without backradiation. And without backradiation CO2 alarmism crumbles.
The original version by Stefan and Boltzmann is formulated with Ta = 0 as Version 0. without backradiation (in which case 1. and 2. look identical), as an integrated version of Planck's law based on a statistical particle model.
Which is the correct formulation? Version o, 1 or 2? Particle statistics or waves? Let's list some answers from the web supposedly reflecting scientific sources:
The list can be made much longer, but we dont find any support of 1. and backradiation. And without backradiation CO2 alarmism crumbles.
The following questions present themselves:
• Why is 1. found only in the CO2 alarmism of IPCC, and not elsewhere?
• Is 1. a free invention which lacks original scientific source?
• Is 1. a form of hyper-reality for which the original is missing?
• Is 1. a form of folklore known by everybody to be true, yet without any individual scientist claiming to have demonstrated the statement.
• Is 1. an expression of "scientific consensus" for which no original scientific source is required?
What do you think? Is CO2 alarmism based on backradiation, folklore or real science?
fredag 22 juli 2011
The Emitter-Absorber Relation of Radiation
There is a lot of confusion concerning the physics behind Plank's radiation law and its integrated form of Stefan-Boltzmann's law in the following two algebraically equivalent but physically different forms:
Version 1 reflects two-way energy transfer by two way photon particles emitted by both emitter and absorber into a void (of zero Kelvin), and can be seen as an a hoc version cooked up from Planck's original law of one blackbody emitting into a void (of zero Kelvin).
Version 1 reflects simple physics of particles with the two bodies like two very young children playing side by side without interaction both spitting out photons in two directions into a void (of zero Kelvin).
Version 2 reflects more complex physics with the two bodies playing together, talking to each other by a two-way wave equation, but with one-way net transfer of heat: The effect of the finite precision computation is a high frequency cut-off depending on temperature limiting the ability of the absorber to re-emit only frequencies below cut-off, with frequencies above cut-off being absorbed and turned into heat.
Version 2 is like two educated people talking and listening to each other, with the emitter being the smarter and the frequencies above the cut-off of the dumber being absorbed by the dumber and then transformed into heat (frustration).
Which version is better? The trivial 1 or the educated 2? Is there an intimate relation between emitter and absorber into a system relation, where emission from one body is directly connected to absorption of another? Is the play between adults more interesting that than between babies?
Are these questions above your cut-off frequency and will only lead to heated frustration?
tisdag 19 juli 2011
Answer to Question by Roy Spencer
In an exchange between Roy Spencer and Marty Hertzberg regarding the proper use of the Stefan-Boltzmann equation in climate modeling, Roy asks the question:
which can be turned around into:
• How does the Earth's surface 'know' what the temperature of the atmosphere is before it 'decides' at what rate it should emit IR?
Roy asks this question to challenge the view put forward by Marty that the temperature Ta
of the environment (the absorber) of a (blackbody) emitter at temperature Te bigger than Ta, determines the amount of energy E emitted according to Stefan-Boltzmann's law written in the form:
expressing E as a multiple of the difference in temperature Te - Ta. In particular, this means that there is no backradiation of energy from a cold atmosphere to a warmer Earth surface, only the other way.
This is different from a common view adopted by Roy of a two-way emission of photons carrying energy between emitter and absorber, in which Stefan-Boltzman's law would be written
• E = sigma Te^4 - sigma Ta^4
expressing the net energy transfer E as the difference between a two way gross flow of energy
with emitter and absorber both emitting into a background of zero K. In this view, a cold atmosphere would be warming a warmer Earth surface by sigma Ta^4, while the Earth surface
would be transferring sigma Te^4 to the atmosphere.
In the new derivation of Planck's law behind Stefan-Boltzmann presented in Slaying the Sky Dragon, with more details in the upcoming book Mathematical Physics of Blackbody Radiation,
I give support to Marty's standpoint, which is in opposition to Roy's.
Roy's question is how the Earth reads the temperature of the atmosphere and an answer is suggested by my analysis: The emitter and absorber stay in contact by two-way electromagnetic waves described by Maxwell's equations. This contact makes it possible for both emitter and absorber to read the temperature of the other by reading the spectrum of the other.
This is like two superpowers reading the destruction capability of the other without pressing the buttons to exchange of mutual destruction, see picture above.
The basic idea is that the contact allowing temperature reading is established by two-way electromagnetic waves, while the transfer of energy is one-way (from warm to cold).
Mathematically the two versions of Stefan-Boltzmann may look equivalent, but the physics behind the two is different: Roy's is two-way stream of particles carrying energy back and forth.
Marty's and mine is two-way electromagnetic waves carrying information with one-way transfer of energy.
The difference comes out when looking at perturbations of forcing, as perturbations of net one-way flow of energy (small) vs perturbations of gross two-way flow of energy (big). This is the origin of CO2 alarmism: How to make something small into something big.
Does this answer your question Roy?
In support of Marty's and my view, one may add that a precise macroscopic mathematical model for electromagnetic waves is known (Maxwell's equations), while that of photon flight appears to be unknown (unless it is assumed to be a trivial ray model).
lördag 16 juli 2011
Monstrosity of Quantum Mechanics 7: Basic Postulates
In what sense are the basic postulates of quantum mechanics not Harry Potter fantasy?
Lubos Motl makes in The Unbreakable Postulates of Quantum Mechanics a heroic effort to justify quantum mechanics almost 100 years after its formulation, starting with:
The mission is to convince skeptics about the truths of the following basic postulates:
1. The set of possibilities in which a physical system may be found is described by a linear Hilbert space (more precisely by the rays in this space) equipped with an inner product.
2. Complex (nonzero) linear combinations of allowed states are allowed states, too.
3. A physical system composed out of N separated (or fully independent) subsystems has the Hilbert space equal to the tensor product of the Hilbert space describing the individual subsystems.
4. Physical quantities, also referred to as "observables" in the fancy quantum mechanical context, are encoded in Hermitean (linear) operators acting on the Hilbert space.
5. In particular, the evolution in time is generated by the operator known as the Hamiltonian.
6. The exponentials of its imaginary multiples are the operators that evolve the system over a finite interval and these operators are unitary; similarly, other symmetry transformations are given by other unitary (or anti-unitary, if the time reversal is included) operators.
7. The expectation values of the quantity "A" are given by the inner product ; if "A" is replaced by the projection operator "P", this expectation value expresses the probability that the condition connected with "P" will be satisfied once the system is measured.
The motivations for 1 - 7 presented by Lubos tell us something essential about the solidity of quantum mechanics. Let see how Lubos motivates 1 - 3:
1. Why do we know that there is a Hilbert space? If a physical theory has a content, it must be able to manipulate with the information. We insert some information that we know - and it spits out another piece of information that we didn't know but that is predicted, or postdicted, by the theory. So there must exist some states; which state was realized in Nature, is realized in Nature, or will be realized in Nature, is the way to phrase all the information we have or we want to have about the world or its pieces. That was true even in classical physics: different states of a physical system were given by points in the phase space (spanned by the positions and their canonical momenta).
2. The new thing about quantum mechanics is that the complex linear superpositions of two allowed states are also allowed states. How do we know that? Well, we may actually design procedures that create such combined states in practice.
3. Now, there are other postulates and universal rules of quantum mechanics. For example, the composite systems are described by tensor products of Hilbert spaces. It's not hard to see why: if the dimensions of Hilbert spaces H1, H2 are equal to d1, d2, there are clearly d1 basis vectors of H1 and d2 basis vectors of H2. These basic vectors parameterize some linearly independent (i.e. fully mutually exclusive) possibilities. The set of linearly independent possibilities for the composite system obviously has to be the Cartesian product of the two sets for the separate subsystems. And the "linear envelope" of this Cartesian product - the new basis - is the tensor product of the original spaces. Its dimension - its number of basis vectors - is equal to d1.d2 as expected. This conclusion is pretty much inevitable, by basic logic.
When you read this as a mathematician you understand that the motivation is weak, formal and touches triviality elevated to deep insight into the true inner mechanisms of the microscopic world. The Hilbert space assumption essentially reflects that the Schrödinger equation is linear. But why physics on atomic scales is linear allowing superposition, is not motivated. This appears as an ad hoc assumption which could be made by one who has recently fallen in love with linear Hilbert space theory and has been so overwhelmed by emotion that rational thinking has disappeared.
The argument that "we may actually design procedures that create such combined states (superposed) in practice" sounds hollow, knowing that this principle of quantum computing has shown to be very difficult to demonstrate.
Atomic physics concerns the interaction of elementary particles by certain forces and thus can be thought as N-body problems. But an N-body problem is not linear, and so it requires a lot of fantasy to believe that the N-body problem of quantum mechanics through some miracle decides to show up as linear.
without being able to find any reasonable one.
tisdag 12 juli 2011
Why Prandtl Was Wrong 4
Lift and drag of a NACA0012 wing in computation by Unicorn and experiment.
We have asked if it is possible to check if drag and lift of a body moving through a fluid originate from a thin boundary layer which separates from the body surface into the fluid, as is the mantra of Ludwig Prandtl, the father of modern fluid mechanics, formulated in an 8 page note in 1904.
To check in experiment is cumbersome because the viscosity of a real fluid is never exactly zero and thus it can be argued that no real fluid can satisfy a slip boundary condition with zero skin friction without any boundary layer.
But to check in computation is perfectly possible: just set the skin friction to zero in a Navier-Stokes code, that is use a slip boundary condition and see what happpens. Will drag and lift develop in accordance with observation in solutions of the Navier-Stokes equations with slip
without boundary layers?
Yes! Computations without boundary layer give correct drag and lift!
The conclusion is inevitable:
• Prandtl was wrong: Drag and lift do not originate from boundary layers.
• Prandtl's scenario of fluid separation is incorrect.
• The mantra of modern fluid mechanics is incorrect.
For further details see the new article Analysis of Separation in Turbulent Incompressible Flow which exhibits a scenario of fluid separation which is fundamentally different from that of Prandtl and which is supported by mathematical analysis, computation and observation.
måndag 11 juli 2011
Blackbody Radiation as a Generic Emergent Phenomenon
• E = gamma T f^2,
• with a high frequency cut-off proportional to T,
where f is the frequency and gamma is a constant.
söndag 10 juli 2011
Large Boundary Layer Collider: Why Prandtl Was Wrong 3
Part of the Large Boundary Layer Collider at the European Spallation Source in Lund, Sweden.
According to Ludwig Prandtl, named the father of modern fluid mechanics, both drag and lift of a body moving through air or water originate for a thin boundary layer.
This is the fundamental postulate of modern fluid mechanics formulated in 1904, but it is now being questioned. Is modern fluid mechanics based on a postulate which is does not correspond to physical reality?
The answer may be given by the European Spallation Source (ESS) in Lund, Sweden: The world's biggest proton accelerator (see picture).
The idea is to eliminate the boundary layer by bombarding it with high energy protons, and once the boundary layer has been removed completely this way, drag and lift will be measured. If drag and lift remain the same under removal of the boundary layer, then drag and lift do not originate from any boundary layer, and modern fluid mechanics is based on incorrect physics.
But ESS will not be ready to use before 2020, and thus it is natural to ask if there is some other quicker and cheaper way of eliminating a boundary layer? Yes, there is. But what is it?
Follow the thrilling uncovering of one of modern physics most well kept secrets...
PS An alternative to ESS would be to use liquid helium with next to zero viscosity, but to reach a sufficiently large Reynolds number, the dimension of the experiment needs to be 10 times bigger than that of the Large Hadron Collider and thus is out of reach, for the moment at least.
But as UN global warming alarmism is now fading away maybe this experiment could become the next big initiative by the UN backed by EU. DS
lördag 9 juli 2011
Why Prandtl Was Wrong 2
One way of eliminating a butterfly.
Question and Answer 1:
Question and Answer 2:
• How can one prove that a boundary layer is not the origin of drag and lift of a body?
• Eliminate the boundary layer and notice drag and lift without boundary layer.
But how to eliminate a butterfly and how to eliminate a boundary layer? Follow the thrilling
continuation of this story...
fredag 8 juli 2011
Why Prandtl Was Wrong 1
Prandtl initating modern fluid mecahnics in 1904: A very satisfactory explanation of the physical process in the boundary layer between a fluid and a solid body could be obtained by the hypothesis of an adhesion of the fluid to the walls, that is, by the hypothesis of a zero relative velocity between fluid and wall (no-slip).
Ludwig Prandtl is named the father of modern fluid mechanics because he discovered the boundary layer of a slightly viscous fluid flowing around a solid body, like air flowing around a moving car or airplane, as a thin layer where the fluid velocity rapidly changes from the free flow velocity away from the body to that of the body surface as an expression of a no-slip boundary condition.
Prandtl claimed that the that turbulent flow in the aft of a body results from separation of turbulent boundary layer away from the body surface into the free flow.
This has become the mantra of modern fluid mechanics: The truth of slightly viscous fluid flow is to be found in thin boundary layers. Both drag and lift of a body moving through a fluid are effects of a no-slip boundary condition creating a thin boundary layer.
In a sequence of posts we shall show that Prandtl was wrong: Drag and lift do not originate from a thin no-slip boundary layer.
But how can one show that Prandtl was wrong? Something to reflect upon a rainy summer day.
Hint 1: Suppose you observe the same drag and lift with the boundary layers eliminated. Can you then be sure that drag and lift do not originate from boundary layers? Yes, you probably say. But how to "eliminate" the boundary layers?
onsdag 6 juli 2011
The Secret of Separation
The Secret of Flight revealed in previous posts is hidden in the secret of separation of the flow at the trailing edge of a wing. The above picture reveals the Secret of Separation:
You see a piece of the trailing edge of a horisontal wing as viewed from behind with opposing (more or less) vertical flows from the upper and lower side of the wing which are meeting in retardation and somehow have to be directed into a (more or less) horisontal backward direction to leave the wing (out of the screen). You can think of two opposing armies approaching each other and the question is how the conflict is to be resolved.
Now, retardation in opposing flows is exponentially unstable (direct confrontation is unstable) and thus the flow seeks a flow pattern without opposing flow, and finds one as depicted above: The opposing flows are shifted horsiontally and turned into a set of counter-rotating swirling vortical motions like the one you can see in a bathtub drain, as seen here in a different perspective.
The result is a separation without unstable opposing flows supported by a zig-zag pattern of low/high pressure with low pressure inside the vortices, as shown here. The resulting pressure distribution is what gives both drag and lift to a wing.
Something to think about in the hammock, or in your sailing boat because the secret of sailing is the same. |
41f5875289754bea | Skip to content
Area 51
June 28, 2012
Complexity Theory Conspiracy Theories
Graham Steel is a member of Team Prosecco at INRIA Paris-Roquencourt in France. He along with Romain Bardou of the related SECSI team at INRIA, and Riccardo Focardi, Yusuke Kawamoto, Lorenzo Simionato, and Joe-Kai Tsay in other countries, have written a paper to appear at CRYPTO 2012 that shows how to break RSA tokens in record time. The INRIA team names combine to say that dry white wine is sexy, which makes us think of spy movies, which often involve conspiracies.
Today Ken and I want to talk about possible conspiracy theories that involve computational complexity.
We learned of this through my Georgia Tech colleague Chris Peikert being quoted in the New York Times article on the story. The RSA secure token system is is a hardware device that is widely used by industry and governments. They have at least dented the system if not destroyed it. Of course following research crypto etiquette they have published their results, rather than keep them secret. But what if they had decided to keep them secret? What if we did not know that the RSA token system is breakable? Indeed.
The 2012 film “Travelling Salesman” has a similar premise. Four mathematicians have found a polynomial-time algorithm for TSP, so that not only all other NP-complete problems but also Factoring and related crypto problems have polynomial-time algorithms. They wrestle with the government officials’ desire to keep their discoveries secret. Although the film has been out for two weeks, its Wikipedia page currently lists its only critical reaction as coming from … us. And neither of us has seen the movie yet. What do you do when life becomes a house of mirrors?
All this sets us thinking hard about possible conspiracy theories. Were the sexy wine people the first to discover the RSA token flaw? Did others know about it for years and not announce their results? This detailed blog post by Matthew Green shows trouble brewing for years. But then why involve Chris, who isn’t even cited in the paper or any other coverage we’ve seen? Is all this a warning for us to go underground, to be seen only as “Pip”? One can get a pretty neat conspiracy theory started here. Hence this discussion.
Conspiracy Theory Theory
Conspiracy theories come in “historical” and “futuristic” flavors. Historical ones try to explain some real world events as having been caused by a covert group or group-within-a-group, which by definition is unknown to most of us. Futuristic ones postulate something that is currently unknown, and the group concerned may even be known.
Our friends at Wikipedia have a list of prominent theories here. It is interesting to note that Katherine Young states
“…(t)he fact remains, however, that not all conspiracies are imagined by paranoids.”
And we add, not all conspiracy theories are wrong either. It is incontestably true that a US President was assassinated by conspiracy in the ’60s: Lincoln. How might we possibly tell which are which?
One of the most fun recent conspiracy theories is based on the upcoming London Olympic Games. Their logo is:
Go here for an amusing, we think, discussion of how school children actually designed the logo via tangrams. This is a nice example of a fun theory.
Well there is also a non-fun theory: Iran threatened to boycott the Games based on the rumor that the logo really spells “Zion”—as if the Illuminati were behind it. The main supporting argument is that the little central diamond cannot be part of “2012,” but goes neatly as the dot for the ‘i’ in “Zion.”
However the tangram aesthetic has something to say here. How many of you like us have doodled during lectures or meetings, the kind of doodle where you make a 2-coloring where regions touch at points? The diamond similarly holds the other parts of the London 2012 logo together.
With historical conspiracy theories the known event E is presumed unlikely without the conspiracy as explanation—but usually the conspiracy itself should be presumed unlikely. When an alternative explanation is natural enough to have higher prior likelihood, such as we claim for the logo’s diamond, that’s concrete evidence against the theory. In the futuristic case the relevant “prior probabilities” may be harder to judge, but current expertise may enable one to gauge them.
Ten Theory Theories
Let’s turn now to computer and complexity based conspiracy theories. We are kidding here—let us repeat, we are just having fun. We do not really believe any of these on our “Top Ten” list—or do we?
{\bullet } Quantum Computers Already Exist. Notwithstanding—cool to use that word—our recent many columns on quantum computers, some believe that they already exist. Certain agencies here and elsewhere might be running one right now—how could we know?
Now to test our framework, is it true that those skeptical of quantum computing are the ones who assign the lowest “prior” to this unknown postulate? Or does the allegedly conspiratorial nature of the skepticism correlate positively with it? On the other hand, does a technological advance like this one with ion traps enhance the postulate?
{\bullet } Factoring Really Is Easy. This is similar to the last, but now they can factor in polynomial time on a laptop, rather than need a quantum computer. Ken and I think this one has a much higher prior, almost on the order of “Breaking Engima Really Is Easy” in 1939.
{\bullet } John von Neumann’s Proof. Recall that Kurt Gödel’s letter to von Neumann was never answered. Or was it? The problem solving ability of von Neumann is legendary, so could he have actually proved it long ago? He worked for various secret government agencies, so would they tell us if they really had a proof?
{\bullet } The Supercomputer Fraud. Actually hardware is mostly lights and fakes. Inside is one laptop running a secret very clever algorithm that can solve huge systems of equations fast… OK, here’s another principle: sometimes a special case of a conspiracy theory can be taken seriously.
{\bullet } The Memory Chip Fraud. The number of atoms in the observable universe is believed to be less than {2^{270}}, while {2^{206}} Planck instants gives a generous 300 billion year timespan for our pocket of the cosmos. Thus every act of storing something to memory in the whole history of our pocket can be coded within 500 bits. Just doubling that leaves a lot of room for error correction and hashing and mirroring. The mathematics involved here has been known since Claude Shannon in the 1940’s. Hence no computer needs more than a single 1K memory chip, let alone Bill Gates’ 640K. The rest is for sales pitches—come on, you don’t really believe your cheap digital camera is storing millions of individual bits in the time it takes to press a button, do you?
OK, this is a joke, but it leads into the next two, which aren’t.
{\bullet} No True Randomness. Every string we write down or read is compressible to, say, 500 or 1,000 bits. That is, all our computing and instrumentation works within the range of strong pseudo-random generators, perhaps in blocks. Strong PRGs are commonly believed to exist. How could we tell the difference? One computer scientist who believes this is Jürgen Schmidhuber.
{\bullet} The Simulation Argument. This is legion in popular culture from “The Matrix” and “Inception” and other sci-fi, so we’ll just refer you to Nick Bostrom’s formulation of it. In theory we could tell the difference if something happened in the manner of The Truman Show where a light labeled “Sirius” falls from the sky. But are there any such events?
We offer one complexity-related observation. Although it is routine to say that classes like {\mathsf{P}} and {\mathsf{BQP}} have universal simulation, this isn’t strictly true. The universal function for {\mathsf{P}} doesn’t belong to {\mathsf{P}}—if it did, then {\mathsf{P}} would be in some fixed polynomial time bound, which it isn’t. Although proving this is technically murkier for “random” or “promise” classes like {\mathsf{BQP}}, the essential idea holds for any reasonable complexity class. Thus a universal simulation involves dropping down to a lower grade than the resources on which you draw. If our universe is convincingly universal, perhaps this is a well-motivated reason to reject the argument.
{\bullet } Computer Chess Fraud. Ironically the highest-level accusation wasn’t against a human for cheating with a computer, but rather against a computer for cheating with a human. Garry Kasparov accused IBM’s Deep Blue of making moves with “deep intelligence and creativity” that could only come from a human, presumably Ken’s friend Grandmaster Joel Benjamin. Kasparov demanded to see the logs of Deep Blue’s calculations of a particular move that was later revealed to be far from best, even though it caused Kasparov to resign a game where he still had considerable drawing chances. When IBM refused, the conspiracy theory took off, and might have done so even without Kasparov’s fanning—it still percolates to this day even though the logs were later posted.
Today it is easy for anyone to test the moves with inexpensive—even free—chess programs that are apparently stronger than Deep Blue was, even running on cheap hardware. In several of Ken’s tests the aforementioned disputed move, 45.Ra6 in Game 2, looks good until fairly high depth when it suffers a big swing down in value. Such a swing may be an unlikely event, but the tests give a natural explanation that Deep Blue probably didn’t sense the swing in time. Here is a graphic of one of Ken’s tests, showing the Rybka 4.1 program at depth 18 after the swing down.
Incidentally Ken is not convinced by the analytical conclusion stated here that Kasparov didn’t have a draw when he famously resigned. He believes the 51.Ra1 move given there can be met by 51…Kf8, and after 52.Qc7, the slinky 52…Kg8. Black may have to lose several pawns in exchange for White’s advanced d-pawn, but then Black gets counterplay by pushing the e-pawn. (The move 45…h5 wasn’t played—Ken inserted it to overcome an off-by-one bug in the Arena chess GUI’s automatic-analysis routine.)
{\bullet } Barney Google. With the goo-goo-googly eyes…tracking all activity on your PC or at least what’s relevant to commerce, and ingesting data. Could this be undetectable by everyone? Making an undetectable Trojan might just be the flip side of the wicked problem of designing a completely secure OS.
Open Problems
Do you believe in any of our ten conspiracy theories? Do you have your own? What are they?
[revised TSP film’s Wikipedia critical-reception link—now has others besides us]
28 Comments leave one →
1. June 28, 2012 12:44 am
I once made the following remarks regarding Osama Ben Laden’s killing by a US special operation team:
* For all we (as in we, the general public) knew at the time, Osama Ben Laden could have been dead for quite a while, due to illness or other reasons.
* Neither Al Qaeda nor the US had an interest in publishing such information: the former because such an organization benefits from the story of the charismatic chief escaping all attempts on his life, the second because it is not good for public relations to admit you were spending billions about killing a man who was already dead.
* In contrast, the US had an interest in appearing to kill Ben Laden: not only does it signal to the world that anyone on the planet, however well hidden, will not escape the US’ wrath, it also makes a good case for withdrawing from Afghanistan with an appearance of victory. The Afghan war is unpopular and costs billions and billions and many lives; the Obama administration has an interest in withdrawing in a victor’s posture.
* The only information we have about this death was supplied by senior members of president Obama’s administration. The raid was made by a limited cadre of elite soldiers sworn to secrecy.
Therefore, it seems not impossible that the Osama Ben Laden was already dead before the raid and that the raid was staged so as to provide the Obama administration with a victory. I do not believe it was the case, but at least it is not impossible and the hypothesis cannot be quickly dismissed.
The only facts we have, as far as I know, is president Obama’s word. The reason why we believe the raid took place is that we generally rather like president Obama. Yet, we should remember the Gulf of Tonkin incident and the “weapons of mass destruction” in Iraq: it is not unheard of that senior US officials lie to the world in order to pursue war objectives. Richard Nixon is also a good example of a US president caught in a web of deception.
• Tom permalink
June 29, 2012 11:54 pm
Al Qaeda would be easily able to expose such a lie, thus discrediting the US government.
• albatross permalink
July 3, 2012 1:52 pm
It’s more plausible that OBL is still alive, being subjected to enhanced interrogation of some kind, since AQ would have no way to disprove that and probably wouldn’t even know about it. But I wouldn’t bet on it.
Alternatively, it’s possible that OBL died of natural causes under the protection of the ISI, we found out about it, and the whole raid was some kind of scam. Again, this isn’t the way to bet, but I don’t see how we could distinguish the official story from either of these from available information.
• anonymous. permalink
June 30, 2012 9:38 am
“The raid was made by a limited cadre of elite soldiers sworn to secrecy.”
Remember the whole team of elite soldiers died when their helicopter crashed. And we know that dead people do not speak.
• throwaway permalink
July 3, 2012 12:22 am
You’re wrong. I have friends in the Navy, specifically ST6, and our friends died in that crash. The men who died in the blackhawk crash were NOT part of that raid. How dare you allude to them dying for a coverup of some half-baked conspiracy theory.
2. June 28, 2012 12:47 am
(I realize the above theory is not about theoretical computer science. I mention it because it illustrates something that theoretical computer scientists are well aware of: that one should always be aware of the hypotheses that are used in order to reach a conclusion, and were they come from.)
3. John Sidles permalink
June 28, 2012 4:46 am
If P=NP then comedic possibilities become near-certainties … per “On the unexpected efficacy of SAT solvers” (Bulletin of the AMS, April 1, 2020).
4. Craig permalink
June 28, 2012 6:54 am
My favorite conspiracy theory: The entire internet is controlled by the Illuminati.
• rjlipton permalink*
June 28, 2012 9:03 am
Great example.
• Craig permalink
June 28, 2012 12:39 pm
Of course, you know that the whole the purpose of the internet is to keep the “sheeple” busy watching conspiracy theory videos on youtube, while the Illuminati secretly take over the world.
• June 29, 2012 7:40 pm
Yes!!! Not internet only! One day you can receive a message with Code, that code will stop your OS and CPU.
5. John Sidles permalink
June 28, 2012 7:39 am
A chess-themed conspiracy narrative that has enjoyable complexity-theoretic overtones is Ian Watson’s novel Queen Magic, King Magic.
Watson’s novel theme is (spoiler alert) the “burning arrow” discovery/realization by the individual chess pieces that they have the power to alter the rules of their own game, and even the power to enter-and-alter other games … including our game as human chess-players. Recommended. 🙂
6. June 28, 2012 9:27 am
I always thought that it was well-known that there was no true-randomness. For example: the Schrodinger equation is deterministic. Thus the wave-function of the universe also is deterministic. Result: no true randomness. 🙂
• June 28, 2012 10:18 am
I read that by analogy with NFA-to-DFA. The Schrödinger equation describes the DFA, but we experience one computation by the NFA in “frog’s-eye view” mode, and that can involve true randomness. If “Many Worlds” is true then others might be experiencing other computations.
Another way to get true randomness is locally, a-la this paper by Max Tegmark. But IMHO you need there to exist 2^n bits of “stuff” in order to get n random bits that way, and that strikes me as excessive.
• June 28, 2012 11:25 am
Isn’t the “frog’s-eye view” perspective just an attribute of reducing the context in consideration? That is, the non-determinism in an NFA is there at the choice of when to change states, but if we pull back somewhat, the overall ‘system’ is in fact deterministic?
• June 28, 2012 12:38 pm
“Pulling back” could be a 2^n expansion, which I think stays virtual, not real. Further analogy: when matching to a regular expression with grep the DFA never gets built, and what is actuated is a path thru the NFA.
• June 29, 2012 11:11 am
I was having one of those ‘moments’ when I wrote this:
but oddly, the part where I talk about how information is a serialization of the underlying complexity seems to fit well with what Max Tegmark is saying in the initial part of his paper (I haven’t had time yet to read the whole thing).
In his sense, the universe is a fairly simple formal system governed by basic rules, and it’s the interaction of the epiphenomenon that form the microscopic superpositions. Taking a path through that provides information whose complexity is tied to its traversal. But taken as a whole, with a normalized path is far less information in the system.
In that sense random is not applicable to the underlying macroscopic superpositions, but rather to the twists and turns of the path as we choose to observe it. So in a sense although the DFA’s don’t need to be build explicitly in grep, they are still the controlling factor for the behavior of the seemingly non-deterministic path in the NFA?
Or is it another one of those ‘moments’ 🙂
7. June 28, 2012 11:16 am
Isn’t factoring really easy? You just post a question somewhere on the Internet asking for the factors for any given number, head off to a bar and then wait for someone to answer it 🙂
I kinda wish everything were a simulation, then at least it would make some kind of sense …
8. June 28, 2012 5:34 pm
My favorite theory conspiracy is:
I’ve solved P vs. NP ( but the Theoretical Science community hides this breakthrough because I’ve discovered also that the Cook-Levin Theorem is false.
9. June 29, 2012 11:49 am
“Quantum Computers Already Exist” – I think one could one-up this by additionally claiming that quantum computing skeptics are really CIA (or Mossad/Chinese inteligence/…) plants aiming to disrupt the progress in the field while the US military has already built a quantum computer.
10. Andrew McDowell permalink
June 29, 2012 2:21 pm
For elaboration on the theory that intelligence agencies or others are accumulating but not publishing bugs, see
11. July 2, 2012 5:53 am
We should add “shaped charges and millisecond timing are not necessary for CD” to the list.
or point out that they are.
12. Ari Juels permalink
July 2, 2012 2:30 pm
Conspiracies are hard to pull off because the world is complex place, an obstacle course of caveats for the hapless conspirator. A prime example that I’m in a good position to comment on: The break of the “RSA token system” you mention, by which attackers have “dented the system if not destroyed it.”
Could this vulnerability have remained hidden for years, exploited by the Knights Templar or the Rosicrucians or (given their affinity for attacking RSA: a cult of neo-Pythagoreans?
Perhaps, except that…
The RSA token system, in the sense of the well known authentication technology, wasn’t broken, or even dented. What was broken was a token model (SID 800) that “wrapped” keys under the RSA algorithm itself. It was the RSA algorithm that came under attack, not the SecurID algorithm.
But then it wasn’t really the RSA algorithm itself that was attacked—rather (as Matt Green writes) a particular implementation (padding) using a vulnerable RSA encryption standard called PKCS #1 v1.5.
Could this vulnerability, then, have remained secretly in recent years in the hands of a cabal of conspirators?
Perhaps. Except that a researcher named Daniel Bleichenbacher published it in a well known paper in 1998. And RSA itself published a fix to the vulnerable standard as well, in the form of PKCS #1 v.2, also back in 1998.
So conspirators in this case wouldn’t be using secret knowledge to achieve their ends. Rather, they’d be relying on knowledge that’s been public and remediated, in principle, for well over a decade. They’d be exploiting an engineering lapse or inadvisable attempt at backward compatibility by RSA and other security vendors, rather than an ultra-secret backdoor.
Additionally, the opportunity for exploitation of the SID 800 is somewhat limited in practice. An attacker must have possession of the device itself and knowledge of its user’s PIN to mount the attack. Without getting into details, in most deployments of the token, an attacker with the device and PIN generally has no need to mount the Bleichenbacher attack to begin with to compromise a system. I don’t have details on other devices implicated in the CRYPTO paper, but suspect the case is similar.
So a Rosicrucian or neo-Pythagorean cult worth its salt would probably have looked for some other way to exert clandestine dominion over the world’s uninitiated masses. Angry Birds, maybe?
13. July 5, 2012 3:02 pm
In fact, Dan Brown already in 1998 wrote in “Digital Fortress” about NSA supercomputer TRANSLTR deriving “its power … from new advances in quantum computing …”.
14. July 12, 2012 12:23 pm
Its always great to see conspiracy news!
1. Ten conspiracy theories for nerds (or conspiracy theory theory) — Marginal Revolution
2. The Speed of Communication « Gödel’s Lost Letter and P=NP
3. Los límites de la computación: los números primos | Diseño Web Tiendas Online Barato en Valencia | Diseño y Posicionamiento Web y Tiendas Online en Valencia al alcance de todos
Leave a Reply
You are commenting using your account. Log Out / Change )
Twitter picture
Facebook photo
Google+ photo
Connecting to %s |
a89bab1cb86d4f47 | Table of Contents Author Guidelines Submit a Manuscript
Journal of Function Spaces and Applications
Volume 2013 (2013), Article ID 982753, 16 pages
Research Article
Estimates for Unimodular Multipliers on Modulation Hardy Spaces
1Department of Mathematics, Zhejiang Normal University, Jinhua 321004, China
2Department of Mathematics, University of Wisconsin-Milwaukee, Milwaukee, WI 53201, USA
3School of Science, Hangzhou Dianzi University, Hangzhou 310016, China
Received 23 November 2012; Accepted 23 January 2013
Academic Editor: Baoxiang Wang
It is known that the unimodular Fourier multipliers are bounded on all modulation spaces for . We extend such boundedness to the case of all and obtain its asymptotic estimate as t goes to infinity. As applications, we give the grow-up rate of the solution for the Cauchy problems for the free Schrödinger equation with the initial data in a modulation space, as well as some mixed norm estimates. We also study the boundedness for the operator , for the case and Finally, we investigate the boundedness of the operator for and obtain the local well-posedness for the Cauchy problem of some nonlinear partial differential equations with fundamental semigroup .
1. Introduction
A Fourier multiplier is a linear operator whose action on a test function on is formally defined by The function is called the symbol or multiplier of .
In this paper, we will study the unimodular Fourier multipliers with symbol for . They arise when one solves the Cauchy problem for dispersive equations. For example, for the solution of the Cauchy problem we have the formula . Here is the Laplacian and is the multiplier operator with symbol (see [1] for its definition). The cases are of particular interest because they correspond to the (half-) wave equation, the Schrödinger equation, and (essentially) the Airy equation, respectively.
Unimodular Fourier multipliers generally do not preserve any Lebesgue space , except for . The -spaces are not the appropriate function spaces for the study of these operators and the so-called modulation spaces are good alternative classes for the study of unimodular Fourier multipliers. The modulation spaces were first introduced by Feichtinger [24] to measure smoothness of a function or distribution in a way different from spaces, and they are now recognized as a useful tool for studying pseudodifferential operators [57]. We will recall the precise definition of modulation spaces in Section 2 below.
Recently, the boundedness of unimodular Fourier multipliers on the modulation spaces has been investigated in [1, 815]. Particularly, one has the following results.
Theorem A (see [11]). Let ,, , and . One has, for , where
Here (and throughout this paper), we use the notation to mean that there is a positive constant independent of all essential variables such that .
Theorem B (see [15]). Let , , and . Then is bounded from to if and only if
In this paper, we use a different method from [15] to prove the following theorem, which, in particular, uses the modulation Hardy spaces that will be later defined in Section 2.
Theorem 1. Let , , . For a positive , denote . Let if n is even and if is odd.(i)Assume . If and , one has Particularly, the above inequality holds for all if is a positive even number. (ii) For any , one has for any .
Here (iii) Assume . If , then for all .
We want to make a few remarks on Theorem 1. First, (iii) in Theorem 1 says that when , compared to the case in (i), one obtains a larger range of and a smaller range of . We do not know if there is a unified formula regarding and for all dimension . Second, in the proof we will see that, in the low frequency parts of the definition of , the fractional Schrödinger semigroup has a growth when is growing, but it gains an arbitrary regularity. In the high frequency part, the semigroup can be controlled by at each piece of its decomposition with frequency . This phenomenon was also more precisely observed in [1, 15] (see also [11]). Thirdly, the case was studied in [8, 16].
Since the norm is dominated by the norm and the Riesz transforms are bounded on , by the Riesz transform characterization of the (see Section 2), we easily obtain the following corollary.
Corollary 2. Let , and . One has for where
Our next result shows that the asymptotic factor in Theorem 1 is the best for all , at least for .
Theorem 3. Let . The asymptotic factor in Theorem 1 is the best. Precisely, for , if then
In the next theorem, we state some mixed norm estimates.
Theorem 4. Let and . For , suppose .(i)If , then (ii)If , then
We consider the following linear Cauchy problem with negative power:
We give the grow-up rate of the solution to the above Cauchy problem in the modulation spaces.
Theorem 5. Assume and .(i)Let . One has that for any (ii)For any , one has
Now, we study the following Cauchy problem of the nonlinear dispersive equations (NDE): where for some positive integer . For , the space is defined by
We obtain the quantitative forms about the solution to the above Cauchy problem of the nonlinear dispersive equations.
Theorem 6. Let , , and assume
Assume for any
There exists such that the above Cauchy system (NDE) has a unique solution , where depends on the norm and .
According to the inclusions of modulation space (see Proposition in [13]), we know the space of initial data if .
Theorem 7. Let . Assume and for any
The rest of the paper is organized as follows. In Section 2, we recall or establish some necessary lemmas and known results. Sections 3 and 4 are devoted to the proofs of Theorems 1 and 3, respectively. Finally, in Section 5, we give some applications including the boundedness for the operator in the case and , including negative .
2. Preliminaries
2.1. The Definitions
The modulation space is originally defined by Feichtinper in 1983 on the locally compact Ablian groups . When , the modulation space can be equivalently defined by using the unit-cube decomposition to the frequency space (see Appendix in [13], also [14, 17]). The following definition is based on the unit-cube decomposition introduced in [13].
Let be a fixed nonnegative-valued function in with support in the cube and satisfy for any in the cube . By a standard constructive method, we may assume that for all , where is the -shift of that is defined by
For each , we use as its symbol of a smooth projection on the frequency space. Precisely, for any , we have
Let be a Banach space of measurable functions on with quasi-norm . We define the modulation space where By definition, we have the inclusion It is known that the definition of the modulation space is independent of the choice of functions . In this paper, we are particularly interested in the cases and , where is the Lebesgue space and is the real Hardy space. For all , we call the modulation spaces and the modulation Hardy space. As a usual notation we similarly define By the definition and known properties of , we have that for all , and for all , For simplicity in notation, we denote The following imbedding relation can be found in Proposition of [18]. Let , . If then
2.2. Spaces
It is well known that the Hardy space coincides with the Lebesgue space when . For , the space has many characterizations. We will use its Riesz transform characterization in this paper. For an integer and multi-index , let denote the generalized Riesz transform where each is the Riesz transform of if and . It is known that for and all , where is a sum of finite terms.
The operator is a convolution. We have Also it is well known that is bounded on spaces for any .
2.3. Some Lemmas and Known Results
Lemma 8. Let and . Suppose that there is an integer , such that for all test functions for and for . Here and is a real number. Then for , one has where is an arbitrary positive number.
Proof. The case is proved in [11]. It suffices to show the lemma for . By the Riesz transform characterization of , for , we have By checking the Fourier transform, we have the identity where So for , one has A similar argument shows that for , for any . The rest of the lemma easily follows from the definition of the modulation spaces.
Lemma 9 (see [18, 19]). Let denote an open set and . If and the rank of the matrix is at least for all (), then
Lemma 10. Let and . Suppose that is a function with support in . Then
Proof. The case is known [20]. It then suffices to show that for , for large . Let be a standard bump radial function supported in the set and satisfying, for all , Noting the support condition of , we write where the sets , are defined by For , we use polar coordinates to write where is the induced Lebesgue measure on the unit sphere . When is even, taking integration by parts for times on the inside integral, we obtain When is odd, we use integration by parts for times on the inside integral, Again we obtain that for odd , For , without loss of generality, we assume . Perform integration by parts on the variable for suitable amount of times. We similarly obtain For , invoking Lemma 9, we obtain Noting that contains no more than numbers of , it is easy to check
The lemma is proved.
Lemma 11 (see [21, pages 163–171]). Let and Suppose that is a Fourier multiplier with symbol . If is a bounded function which is of class in and if with , then is a bounded operator on and
Lemma 12. Let and . For all , one has
This lemma can be found in Section 4.2 of [11].
Lemma 13. Let be a compact subset in , and let . There exists a constant depending only on the diameter of and , such that for all satisfying .
This lemma is the Nikol'skij-Triebel inequality, see Proposition in [20] (also Lemma 2.5 in [22]).
Lemma 14. Let and be compact subsets of . Then there exists a constant depending only on the diameters of and , such that for all , satisfying and .
This is Lemma 2.6 in [22] (see also Proposition in [20]).
Lemma 15 (Pitt's theorem). If and , then
Lemma 16. Let and satisfy Then one has
This result is a particular case of Lemma 2.5 in [8].
3. Proof of Theorem 1
The operator is a convolution operator with the symbol . This symbol is a function on with compact support. Clearly for any and , we have that for ,
So Lemma 11 implies the following estimate.
Proposition 17. Let . For any with , one has
By the proof of Lemma 8 and Proposition 17, we have that for all ,
The following proposition extends Lemma 12 to all .
Proposition 18. Let . For any with , for any , one has
Proof. The proof uses the same idea used in proving the case which was represented in [11]. For the convenience of the reader, we present its proof.
Let be the kernel of . Then By Lemma 14 and (46), we have Thus to prove the proposition, it suffices to show
For simplicity, we prove the case . The proof for , is tedious but shares the same idea as that for .
First we study the case . For , and , if we denote If , we denote
Also, for and , we define sets It is easy to check Let We have for , Write where
It is easy to check that if and supp, the phase function satisfies So by Lemma 9, we have
Observe the easy fact that if and supp , for any integer , Perform integration by parts on and variables both for times such that . An easy computation shows that
The estimates for and are exactly the same. We only estimate . Take integration by parts on variable for times with . Again, a simple computation shows that if we chose a suitably large . These estimates on , , indicate provided .
We now turn to show the case . For , and , let be the numbers defined above. For and , we define sets It is easy to check Let Thus, Using the same argument as we used before, we can show We complete the proof of Proposition 18.
We are now in a position to prove Theorem 1.
Proof. By an argument involving interpolation and duality, it suffices to show the case . Using Proposition 18, the inequality in (76) and the definition of the modulation spaces, we easily obtain (ii) in Theorem 1.
To show (i) and (iii) in Theorem 1, by Proposition 18 and the definition of the modulation spaces, it suffices to show Again, by Lemma 14, the proof of the inequality in (101) can be reduced to show that for , We show (iii) first. The proof of may illustrate the method. When By Hölder's inequality and the Plancherel theorem, the first term above For the second term, performing integration by parts, we obtain since
Now we return to show (i) of Theorem 1. We will prove only the case . Write Using Hölder's inequality and the Plancherel theorem, we obtain
For , we denote sets We now write To show (102), it now suffices to show that for each ,
Using the Leibniz rule, for any positive integer , we have Here, an easy induction argument shows that, for , where is a homogeneous function of degree for each . We now write where By the definition, it is easy to see that each is an and function with support in the cube .
Let . Performing integration by parts on variables for times, we have
We first estimate each , . Recall that we assume . Let , so . By the choice of and the assumption it is easy to see . Therefore, by Hölder's inequality, we obtain For each , by the choice of , the assumption on , and an easy computation, it is not difficult to see that we may obtain a number in the interval satisfying By Hölder's inequality and Pitt's theorem, for each , we obtain Combining all the estimates, we have It remains to estimate .
It is easy to see that the choice of and the condition |
925d9e9ff53c141f | Sunday, April 2, 2017
Biphoton Inspiral
The matter-energy equivalence principle shows that the energy of a photon of light is equivalent to mass and the mass of an atom therefore increases when it absorbs light. In fact, the sun's gravity bends the path of a photon just like a the sun's gravity bends the path of a passing asteroid and so sufficiently energetic photons will attract each other and merge into matter. The Higgs boson at 125 GeV collision of two protons is consistent with the inspiral merger of two photons, a biphoton, at 125 GeV to make two hydrogen atoms along with a lot of other particles.
Just like the inspiral merger of two black holes, a photon pair inspiral merger is what makes up each particle of matter with complementary photons trapped in each others gravity wells. Thus all matter is equivalent to a bound photon pair resonance that we interpret as electrons, protons, and neutrons of matter.
Photons travel at the speed of light, c, and the photon pair emits a gravity wave as they inspiral and eventually merge into matter at an event horizon. But matter is not stable until certain photon thresholds and so the electron is the simplest photon superposition. Spinning black holes are large matter accretions that likewise involve the inspiral of photons.
The biphoton nature of matter is completely consistent with the electrons, protons, and neutrons that science observes along with the particle zoo of higher energy matter. The biphoton hydrogen exists because of the emission of a Rydberg photon at the CMB creation, where all matter condensed from the primordial cold photon vapor. The Rydberg photons of all matter exist today as the CMB and their entanglement with matter today is what we call gravity, the basic force that holds biphotons together as matter.
Charge force is then a particular resonance between the electron and proton biphoton that satisfies the quantum action of the Schrödinger equation and h/c2. The Rydberg biphoton is the archetype of the universe and forms the inner and outer forces that science calls charge and gravity. While the Rydberg photon emitted at the CMB creation is responsible for gravity, the Rydberg photon exchange is the bond between an electron and proton in hydrogen. |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.